Local AI


Set up local models with LocalAI (Llama, GPT4All, Vicuna, Falcon, etc.)

Preparation

Go to LocalAI (https://github.com/go-skynet/LocalAI) and follow their instructions to run a model on your device.

For example, here is the command to set up LocalAI with Docker:

docker run -p 8080:8080 -ti --rm -v /Users/tonydinh/Desktop/models:/app/models quay.io/go-skynet/local-ai:latest --models-path /app/models --context-size 700 --threads 4 --cors true

Note that we added the --cors true parameter to the command to make sure the local server is accessible from the browser. AIChatOne will send requests to the local model directly from the browser.
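
Before configuring AIChatOne, you can confirm the server is reachable by querying LocalAI's OpenAI-compatible model listing endpoint. This is a minimal sketch, assuming the default port 8080 from the command above:

# A JSON list of loaded models confirms the server is up
curl http://localhost:8080/v1/models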

Configuration

We're using the gemma:2b model in this example.

An API key is required but ignored, so you can input any string.
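
As a sanity check, you can reproduce the request AIChatOne sends from the browser with a direct call to the chat completions endpoint. This is a sketch, assuming the default port 8080 and the gemma:2b model from this example; the placeholder key sk-anything illustrates that any string is accepted:

# LocalAI ignores the Authorization header, so any bearer token works
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-anything" \
  -d '{"model": "gemma:2b", "messages": [{"role": "user", "content": "Hello"}]}'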

Issue: CORS-related issues
Resolution: Make sure your server configuration allows the endpoint to be accessible from the browser. Open the Network tab in the browser console to see more details (see the preflight sketch after this list).

Issue: Long waiting time
Resolution: On the first request, your model can take a long time to respond. Check the terminal log of the Docker process to see if anything goes wrong.

Issue: API key missing
Resolution: AIChatOne does not support API key authentication for custom models yet. Please reconfigure your custom model to remove the API key requirement.
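
If you suspect a CORS problem, you can simulate the browser's preflight request from the terminal. This is a sketch under the assumption that --cors true makes LocalAI answer OPTIONS preflights with permissive headers; the Origin value is arbitrary:

# Look for Access-Control-Allow-Origin in the response headers
curl -i -X OPTIONS http://localhost:8080/v1/chat/completions \
  -H "Origin: http://localhost" \
  -H "Access-Control-Request-Method: POST"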
