Ollama

Ollama lets you run open-source large language models, such as Llama 2, LLaVA, Mistral, and Orca, locally on your own machine.

Preparation

Install Ollama and download models.
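For example, after installing the Ollama app from ollama.com, the model used later in this guide can be downloaded from the command line (model name `gemma:2b` matches the configuration below):

```shell
# Pull the model used in this guide:
ollama pull gemma:2b

# Confirm the model is available locally:
ollama list
```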

Configuration

We're using the gemma:2b model in this example.

An API key is required by the form but ignored by Ollama, so you can enter any string.
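To illustrate, here is a minimal Python sketch of a request against Ollama's OpenAI-compatible chat endpoint; the `Authorization` header carries a placeholder key, which Ollama accepts and ignores (the prompt text and key string are arbitrary examples):

```python
import json
import urllib.request

def build_chat_request(prompt, model="gemma:2b",
                       base_url="http://localhost:11434",
                       api_key="any-string-works"):
    """Build a request for Ollama's OpenAI-compatible endpoint.

    The API key is sent for client compatibility, but Ollama ignores it.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Sending the request requires a running Ollama server:
# response = urllib.request.urlopen(build_chat_request("Why is the sky blue?"))
```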

Set up environment variables for CORS

Run the following commands so that Ollama accepts connections from AIChatOne (these use `launchctl`, so they apply to macOS):

launchctl setenv OLLAMA_HOST "0.0.0.0"
launchctl setenv OLLAMA_ORIGINS "*"

After that, restart Ollama for the changes to take effect.
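Once Ollama has restarted, you can confirm the server is reachable (the default port 11434 is Ollama's standard; the exact reply text may vary by version):

```shell
# Check that the Ollama server is up and listening:
curl http://localhost:11434
# A running server replies with a short status message such as "Ollama is running".
```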
