# Ollama

[Ollama](https://github.com/ollama/ollama) lets you run open-source large language models, such as Llama 2, LLaVA, Mistral, and Orca, locally on your own machine.

## Preparation <a href="#preparation" id="preparation"></a>

Install Ollama and download the models you plan to use.
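For example, to fetch the model used later in this guide (a sketch, assuming the `ollama` CLI is already installed and on your PATH):

```shell
# Download the gemma:2b model used in the example below
ollama pull gemma:2b

# List installed models to confirm the download
ollama list
```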

## Configuration <a href="#configuration" id="configuration"></a>

<figure><img src="https://3481753452-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzYU2cjsYvY3seDzkySsN%2Fuploads%2FndyzGmev7A4XUQV0HTzo%2Fimage.png?alt=media&#x26;token=33c7ee80-6854-4a4d-9899-6037198afae9" alt=""><figcaption></figcaption></figure>

We're using the `gemma:2b` model in this example.

An API key is required by the configuration form but is ignored by Ollama, so you can enter any string.

## Set up environment variables for CORS

On macOS, run the following commands so that Ollama accepts connections from AIChatOne:

```
launchctl setenv OLLAMA_HOST "0.0.0.0"
launchctl setenv OLLAMA_ORIGINS "*"
```

After that, you need to restart Ollama for the changes to take effect.
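The `launchctl` commands above apply to macOS. If Ollama runs as a systemd service on Linux, a rough equivalent (a sketch, assuming the service is named `ollama.service` as in the default install) is to set the same variables in a service override and restart:

```shell
# Open an override file for the Ollama service
sudo systemctl edit ollama.service

# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
#   Environment="OLLAMA_ORIGINS=*"

# Then reload systemd and restart the service
sudo systemctl daemon-reload
sudo systemctl restart ollama
```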

{% hint style="info" %}
For more details: <https://medium.com/dcoderai/how-to-handle-cors-settings-in-ollama-a-comprehensive-guide-ee2a5a1beef0>
{% endhint %}
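To check that the server is reachable after the restart, you can query the local API directly (a sketch, assuming Ollama's default port 11434):

```shell
# The root endpoint responds when the server is up
curl http://localhost:11434/

# List the models the server has available
curl http://localhost:11434/api/tags
```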
