Run Large Language Models Locally

While most people use hosted tools like ChatGPT, many prefer running LLMs locally for data privacy, cost efficiency, and offline access (and just because you can).

Ollama is a tool designed to make running LLMs on your local machine simple and accessible.

Getting Started with Ollama

Ollama supports macOS, Windows, and Linux, providing an easy setup process for each platform. Here's how you can get started:

macOS

For macOS users, download the application directly: Download for macOS

Windows

Windows users can access the preview version: Download for Windows

Linux

Linux users can install Ollama via a simple shell command:

curl -fsSL https://ollama.com/install.sh | sh

Alternatively, you can follow the manual installation instructions.
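Whichever route you take, a quick sanity check is to ask the CLI for its version, which also confirms the ollama binary is on your PATH:

ollama --version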

Docker

If you prefer using Docker, the official Ollama Docker image is available on Docker Hub:

docker pull ollama/ollama
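Pulling the image alone doesn't start anything; you also need to run the container. A minimal CPU-only setup, following the image's documented usage (the container and volume names here are just example choices), looks like this:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

docker exec -it ollama ollama run llama3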

Running Models

Once installed, running models with Ollama is simple.

For example, to chat with the Llama 3 model, run:

ollama run llama3
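This drops you into an interactive chat in your terminal. Ollama also exposes a local REST API (on port 11434 by default), so as a rough sketch, assuming the server is running with its defaults, you can query the same model programmatically:

curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?"}'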

Ollama supports a wide range of models. For the full list, check out the model library in the docs, where you'll find the command to pull whichever model you like.
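For instance, you can download a model ahead of time with pull and then start chatting with it (mistral here is just one example from the library):

ollama pull mistral

ollama run mistral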

Useful Commands

List models on your computer:

ollama list

Remove a model:

ollama rm llama3

Copy a model:

ollama cp llama3 my-model
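Copying is handy as a starting point for experimenting with a custom variant. As a sketch of that workflow (the settings and names below are illustrative), you could save a Modelfile like this:

FROM llama3
PARAMETER temperature 0.7
SYSTEM """You are a concise assistant."""

Then build it under a new name and run it:

ollama create my-model -f Modelfile

ollama run my-model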

Written by Niall Maher

Founder of Codú - The web developer community! I've worked in nearly every corner of technology businesses: Lead Developer, Software Architect, Product Manager, CTO, and now happily a Founder.
