How to install an LLM on MacOS (and why you should)
Do you like the concept of AI but dislike the idea that a third party could have access to your content and data for the training of their LLMs? I, for one, avoid any instance of AI that could have access to the novels I write, which is why I stopped using Google Drive for that purpose and do not use a word processor with built-in AI.
If that sounds like your stance on the technology (but you still wish you could use the tool), let me introduce you to Ollama.
Ollama is a tool that lets you install LLMs on your local machine and run them from there. This way, you don’t have to worry about anyone using your content, queries, or information for other purposes.
Also: What is an AI PC exactly? And should you buy one in 2025?
Sounds hard, doesn’t it?
It’s not.
It’s actually easier than you might think.
I will say this: What you will end up with is an AI that you access via the command line. There is a GUI that can be installed, but it’s web-based, and most of the other GUIs are either quite challenging to install or shouldn’t be trusted. Don’t worry. If you can use a chat app, you can use the Ollama terminal.
Also: How I easily added AI to my favorite Microsoft Office alternative
Let’s get this installed.
How to install Ollama on your MacOS device
What you’ll need: To install Ollama, you’ll need an Apple device running MacOS 11 (Big Sur) or later. That’s it.
Download the installer from the official Ollama website. Once the download completes, double-click the file and, when prompted, click Open. You’ll then be informed that Ollama works best when run from the Applications directory. Click “Move to Applications” in that pop-up and wait for the install wizard to open.
On the first page of the wizard, click Next and then click Install. You’ll be prompted for your user password. Type the password and click OK. When the installation is finished, click Finish to dismiss the wizard.
That’s it. Ollama is now installed.
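If you want to confirm the install before moving on, you can check it from the terminal (this assumes the installer added the ollama command to your PATH, which it normally does):

```shell
# Print the installed Ollama version to confirm the CLI is available
ollama --version

# List locally installed models (empty until you pull your first one)
ollama list
```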
How to use Ollama
1. Run Ollama
Open the terminal app and then issue the command:
ollama run llama3.2
This will download the Llama 3.2 model and open an interactive session. Depending on the speed of your network connection, the download could take anywhere from one to five minutes. When it finishes, the terminal prompt will change to this:
>>> Send a message (/? for help)
2. Run your first query
You can now run your first query with Ollama. Type something like:
What are the benefits of artificial intelligence?
Ollama will then print out the results of your query.
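You don't have to use the interactive prompt for every question. Passing the prompt as an argument to ollama run executes a single query and drops you back to the shell when the answer finishes printing:

```shell
# Run a one-off query without entering the interactive session
ollama run llama3.2 "What are the benefits of artificial intelligence?"
```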
3. Exiting Ollama
When you’re done using Ollama, you can exit the app with the /bye command. Whenever you want to start a new session, simply open the terminal app and type ollama run llama3.2.
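Ollama also runs a local server that exposes a REST API (on port 11434 by default), which is handy if you'd rather script queries than chat interactively. Here's a minimal sketch using only the Python standard library; it assumes the Ollama app is running on the same machine and that you've already pulled the llama3.2 model:

```python
import json
import urllib.request

# Default address of the local Ollama server's generate endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks the server to return the full answer in one
    JSON object instead of a stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires the Ollama app to be running locally):
# print(ask("llama3.2", "What are the benefits of artificial intelligence?"))
```

Because everything goes to localhost, your prompts never leave the machine, which is the whole point of running the model locally.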
You can also download other LLMs for Ollama. To view what’s available, take a look at the Ollama Library. When you find an LLM you want to install, run the ollama run MODEL_NAME command (where MODEL_NAME is the name of the LLM you want to install). Keep in mind that the larger the model, the more space and resources it will consume. For example, the llama3.2 model is only 2.0 GB, whereas the llama3.3 model is a whopping 43 GB, so choose carefully.
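The same CLI handles model housekeeping, which matters once those multi-gigabyte downloads start piling up:

```shell
# Download a model without immediately starting a chat session
ollama pull llama3.2

# Show every model installed locally, along with its size on disk
ollama list

# Delete a model you no longer need to reclaim disk space
ollama rm llama3.3
```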
Also: The obvious reason why I’m not sold on smartphone AI features yet (and I’m not alone)
And that’s all there is to installing Ollama on your MacOS device. I’ll be on the lookout for a suitable GUI that can be easily installed and trusted, so keep coming back.