Local AI

Set up local models with LocalAI (LLaMA, GPT4All, Vicuna, Falcon, etc.)

Preparation

Go to https://github.com/go-skynet/LocalAI and follow their instructions to run a model on your device.

For example, here is the command to set up LocalAI with Docker:

docker run -p 8080:8080 -ti --rm -v /Users/tonydinh/Desktop/models:/app/models quay.io/go-skynet/local-ai:latest --models-path /app/models --context-size 700 --threads 4 --cors true

Note that we added the --cors true parameter to the command so the local server is accessible from the browser, since AIChatOne sends requests to the local model directly from the browser.
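Once the container is running, you can check that the server is reachable before configuring AIChatOne. LocalAI exposes an OpenAI-compatible API; this is a minimal sketch, assuming the port mapping from the Docker command above (8080 on localhost):

```python
import json
import urllib.request

# Base URL assumes the Docker command above, which maps port 8080.
BASE_URL = "http://localhost:8080"

def list_models_request(base_url: str) -> urllib.request.Request:
    """Build a GET request for the OpenAI-compatible /v1/models endpoint."""
    return urllib.request.Request(f"{base_url}/v1/models", method="GET")

req = list_models_request(BASE_URL)
print(req.full_url)  # http://localhost:8080/v1/models

# Uncomment once the container is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

If the request returns a JSON list of models, the server is up and CORS is enabled for browser access.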

Configuration

We're using the gemma:2b model in this example.

API Key is required but ignored, so you can input any string.
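Because the key is ignored, any placeholder string works. As a hedged sketch, a chat completion request against the local server's OpenAI-compatible endpoint might look like this (the gemma:2b model name and localhost URL are assumptions taken from this example, not requirements):

```python
import json
import urllib.request

# Assumed values for this example: any string works as the API key,
# and the base URL matches the Docker port mapping above.
API_KEY = "not-used"
BASE_URL = "http://localhost:8080"

payload = {
    "model": "gemma:2b",  # model name used in this example
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # required by the API shape but ignored
    },
    method="POST",
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

AIChatOne builds an equivalent request from its settings, so the key field only needs to be non-empty.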
