Context length limit

LLMs evolve rapidly; always check official sources for the latest models and their current limits.

Each chat model has a different context length limit:

  • GPT-3.5: 4,096 tokens
  • GPT-3.5-16K: 16,385 tokens
  • GPT-4: 8,192 tokens
  • GPT-4-32K: 32,768 tokens
  • Claude: 100,000 tokens
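
If you want to check whether a prompt will fit before sending it, you can count its tokens yourself. Below is a minimal sketch (not part of AIChatOne) that uses the tiktoken library to count tokens for OpenAI models; the limits dictionary simply mirrors the figures listed above, and the function name is illustrative.

```python
import tiktoken

# Context window sizes from the list above (illustrative; check official docs).
CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16385,
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
}

def fits_in_context(prompt: str, model: str = "gpt-3.5-turbo") -> bool:
    """Return True if the prompt's token count is within the model's limit."""
    enc = tiktoken.encoding_for_model(model)  # tokenizer matching the model
    token_count = len(enc.encode(prompt))
    return token_count <= CONTEXT_LIMITS[model]

if __name__ == "__main__":
    text = "Hello, how long is this prompt in tokens?"
    print(fits_in_context(text, "gpt-3.5-turbo"))
```

Keep in mind that the limit covers both your prompt (including conversation history) and the model's reply, so leave headroom for the response when composing long chats.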
