First, it offers much more privacy. When you use a large language model (LLM) in the cloud, you never know whether your queries or results are being tracked, or even saved, by a third party. Running an LLM locally also saves energy: the amount of energy required to run cloud-based LLMs keeps growing and could become a problem in the future.

Also: How to run DeepSeek AI locally to protect your privacy – 2 easy ways

Ollama is a tool that allows you to run different LLMs. I've been using it for some...

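To make the idea concrete, here is a minimal sketch of what talking to a locally running model can look like, assuming Ollama is already installed, its server is running on the default port, and a model such as llama3.2 has been pulled (the model name and the helper function below are placeholders for illustration, not part of any official example):

```python
# Minimal sketch: query a locally running Ollama instance over its HTTP API.
# Assumes Ollama is serving on its default port (11434) and that a model
# (here "llama3.2" -- swap in whichever model you pulled) is available.
import requests

def ask_local_llm(prompt: str, model: str = "llama3.2") -> str:
    """Send a single prompt to the local Ollama server and return its reply."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # The prompt never leaves your machine, so no third party sees it.
    print(ask_local_llm("Explain in one sentence why local inference helps privacy."))
```

Because everything runs on your own hardware, the prompt and the model's answer stay on your machine, which is the privacy benefit described above.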