# Ollama

[Ollama](https://github.com/ollama/ollama) lets you run open-source large language models, such as Llama 2, LLaVA, Mistral, and Orca, locally on your own machine.

## Preparation <a href="#preparation" id="preparation"></a>

Install Ollama and download the models you want to use.
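Once Ollama is installed, you can pull a model with `ollama pull gemma:2b` and list what is available with `ollama list`. As a minimal sketch (assuming the default install on port 11434, which Ollama answers on its root endpoint), you can check from Python that the local server is reachable before configuring AIChatOne:

```python
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address

def ollama_running(base_url: str = OLLAMA_URL) -> bool:
    """Return True if a local Ollama server answers on its root endpoint."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            # A running server replies "Ollama is running" with HTTP 200.
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(ollama_running())
```

If this prints `False`, start Ollama (or run `ollama serve`) before continuing.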

## Configuration <a href="#configuration" id="configuration"></a>

<figure><img src="/files/jJpAYbldS7hgUSApSzbt" alt=""><figcaption></figcaption></figure>

We're using the `gemma:2b` model in this example.

An API key is required by the form but ignored by Ollama, so you can enter any string.
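To illustrate why any key works: Ollama exposes an OpenAI-compatible endpoint at `/v1/chat/completions`, and the bearer token in the request is simply not checked. A sketch of the request that a client like AIChatOne sends (the `build_chat_request` helper and the placeholder key `"ollama"` are illustrative, not part of either product's API):

```python
import json

def build_chat_request(model: str, prompt: str, api_key: str = "ollama") -> dict:
    """Build an OpenAI-style chat request for a local Ollama server.

    The Authorization header must be present for OpenAI-compatible
    clients, but Ollama ignores its value.
    """
    return {
        "url": "http://localhost:11434/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("gemma:2b", "Hello")
print(req["url"])
```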

## Set up environment variables for CORS

Run the following commands (macOS; `launchctl` is macOS-specific) so that Ollama accepts connections from AIChatOne:

```
launchctl setenv OLLAMA_HOST "0.0.0.0"
launchctl setenv OLLAMA_ORIGINS "*"
```

After that, you need to restart Ollama for the changes to take effect.
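To confirm the variables took effect for the launchd session, you can read them back (the expected values shown are the ones set above):

```shell
# Read back the values set with `launchctl setenv` (macOS).
launchctl getenv OLLAMA_HOST     # expected: 0.0.0.0
launchctl getenv OLLAMA_ORIGINS  # expected: *
```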

{% hint style="info" %}
For more details: <https://medium.com/dcoderai/how-to-handle-cors-settings-in-ollama-a-comprehensive-guide-ee2a5a1beef0>
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.aichatone.com/product-guides/custom-chatbots/local-llm-models/ollama.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
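The question must survive URL encoding, so spaces and punctuation should be percent-encoded. A small sketch of building such a query URL (the `ask_url` helper is illustrative; only the base URL and the `ask` parameter come from this page):

```python
from urllib.parse import urlencode

BASE = "https://docs.aichatone.com/product-guides/custom-chatbots/local-llm-models/ollama.md"

def ask_url(question: str) -> str:
    """Build a documentation-query URL with a URL-encoded `ask` parameter."""
    return f"{BASE}?{urlencode({'ask': question})}"

print(ask_url("Which environment variables does Ollama need for CORS?"))
```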
