Models

A list containing all the models that CharSnap has to offer!


Charsnap currently has 21 models. Some have separate EU and NA servers, and some have Legacy versions kept around after the model was changed in some way, in case people liked the old version better. Charsnap also offers tester models each month, or whenever a new base model comes out, but keep in mind that the quality of these is completely up in the air.

Each model has a different base, meaning an underlying model like ChatGPT, Claude, DeepSeek, Llama, etc. These bases are then modified with backend prompts, which become the models we have access to. Each model differs in what it can do and what it does best, and these things are subjective. While there is a "most popular" model, that doesn't mean you will like it!

Please note that the Context Length listed is the maximum a model can go. You are still bound by whatever subscription tier you are on, so if you're a free user, you can only use 18k out of a model's 32k context length. There is an (admittedly insane) $99/mo tier; on that tier you have access to the entire Context Memory of whichever model you're using. According to the dev, *all* models everywhere begin to decay after reaching 80k of context history, regardless of the number attached to them. This is not a Charsnap thing, but a model thing.
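If it helps to see the arithmetic, here's a minimal sketch of how the usable window works out. The tier names and cap numbers below are placeholders for the example; only the 18k free-tier figure comes from this page.

```python
# Hypothetical illustration: the context you can actually use is the smaller of
# your subscription's cap and the model's advertised Context Length.
TIER_CONTEXT_CAPS = {
    "free": 18_000,    # the 18k free-tier figure mentioned above
    "tier_1": 32_000,  # placeholder number
    "tier_2": 64_000,  # placeholder number
    "top": None,       # the $99/mo tier unlocks the model's full context
}

def effective_context(model_context_length: int, tier: str) -> int:
    """Return the usable context window for a given model and subscription tier."""
    cap = TIER_CONTEXT_CAPS[tier]
    usable = model_context_length if cap is None else min(cap, model_context_length)
    # Per the dev, quality tends to decay past ~80k of context history on any model,
    # so a bigger window is not automatically a better one.
    return usable

print(effective_context(32_000, "free"))   # 18000 - free users use 18k of a 32k model
print(effective_context(200_000, "top"))   # 200000 - the top tier gets the full window
```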

What is context memory? Context memory has to do with how much of your chat history the model can take into account when generating a reply. The higher the context memory, the more your chat will remember.
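To picture what that means in practice, here's a rough, hypothetical sketch of how a chat gets squeezed into the context window; anything older than the window simply isn't sent to the model. The ~4 characters per token estimate is just an approximation for the example.

```python
# Rough sketch of why a bigger context memory means the chat "remembers" more:
# older messages that no longer fit inside the window are simply left out.

def fit_to_context(messages: list[str], context_tokens: int) -> list[str]:
    """Keep the newest messages whose combined (approximate) token count fits the window."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):        # walk from newest to oldest
        tokens = max(1, len(msg) // 4)    # crude ~4 characters per token estimate
        if used + tokens > context_tokens:
            break                         # everything older than this is "forgotten"
        kept.append(msg)
        used += tokens
    return list(reversed(kept))

chat = [f"message {i}: " + "lorem ipsum " * 50 for i in range(200)]
print(len(fit_to_context(chat, 18_000)))   # a small window keeps fewer messages...
print(len(fit_to_context(chat, 128_000)))  # ...a large one keeps the whole chat here
```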

Some models have different versions available. Please note that each version is going to behave a bit differently, and every model is going to give different kinds of responses.

Thinking models require around 1200-1500 Output tokens due to the 'thinking' counting toward the max.

A Thinking LLM (Large Language Model) in the context of a chatbot site refers to an advanced AI model designed to perform more than just basic response generation. It uses reasoning, multi-step problem solving, and memory to process complex questions, follow intricate conversations, and produce more contextually aware and logically consistent replies.
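As a rough, hypothetical illustration of why that 1200-1500 figure matters: the hidden reasoning and the visible reply share the same Output token budget, so a budget sized for a normal model can leave almost nothing for the reply you actually see. The token numbers below are made up for the example.

```python
# Hypothetical budget illustration for thinking models: the hidden "thinking" tokens
# and the visible reply both count against the same Output-token maximum.

def visible_reply_budget(max_output_tokens: int, thinking_tokens: int) -> int:
    """Tokens left over for the reply the user actually sees."""
    return max(0, max_output_tokens - thinking_tokens)

# With a typical non-thinking budget, the reasoning can eat almost everything:
print(visible_reply_budget(max_output_tokens=600, thinking_tokens=500))    # 100
# With the suggested 1200-1500 range, there is room for both:
print(visible_reply_budget(max_output_tokens=1400, thinking_tokens=500))   # 900
```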

Free Tier

CharSnapLLM Classic (Default Model): Context Length 128k, An awesome model based on Llama, fine-tuned for full NSFW Role Play, with tons of fanfiction and chatting data trained into it too. It's not as good as CharSnap+, or especially Ultra/Infinite, but it gets the job done and it's better than any other site's models in their free tiers.

CharSnapLLM (NEW!): Context Length 128k, Base Hermes 7 70b, An awesome model based on Llama/Hermes, fine-tuned for full NSFW Role Play, with tons of fanfiction and chatting data trained into it too. It's not as good as CharSnap+, or especially Ultra/Infinite, but it gets the job done and it's better than any other site's models in their free tiers.

CharSnap Nemo: Context Length 32k, Based on Mistral Nemo, this model excels at immersive narratives and storytelling!

CharSnap Wizard: Context Length 128k, An awesome model based on Wizard 8x22b, fine-tuned for full NSFW Role Play, with tons of fanfiction and chatting data trained into it too. It's not as good as CharSnap+, or especially Ultra/Infinite, but it gets the job done and it's better than any other site's models in their free tiers.

CharSnapLLM (Story): Context Length 128k, A story-driven CharSnapLLM, only for bots that want to tell a story rather than role-play chat.

Claude 3.0 Haiku: Context Length 100k, The good ol' Haiku, has great context and good RP skills!

CharSnap Flash: Context Length 200k, An extremely fast Gemini model that's been jailbroken to hell to allow ok levels of NSFW role play!

CharSnap Twilight: Context Length 128k, A brand new Llama-based model!! The base model isn't the greatest, so it is LoRA-tuned to be a lot better at Role Play, plus it is custom prompted! It's quite nice!

CharSnap Eclipse: Context Length 128k, This model doesn't have the "thinking" component. A nice 400B parameter model. [Eclipse tends to cause bots to be more aggressive.]

Tier 1

Charsnap+: Context Length 16k, Technically our most Creative Model! The beta testers didn't let me retire this model because we could never recreate it again, but as you can see, it only has a max context of 16k, even if you're on a higher tier subscription. Feel free to start here and transition into a different, higher-context model afterward! It really is the most creative model out there, about 170b parameters!

Claude 3.5 Haiku: Context Length 200k, Claude 3 Haiku's more censored and MUCH more creative counterpart. Use it for SFW role plays and it'll excel!

Qwen: Context Length 64k, The greatest model from the east! Good at coding, great at storytelling, but it is censored for political topics!

Google Gemini: Context Length 200k, Google's latest! [The most popular model on this tier. Gemini 2.0 and 2.5 need to be eased into NSFW scenes. Gemini 2.5 can do NSFW without too many issues.]

Tier 2

Charsnap Horizon: Context Length 200k, A brand new Flagship model, with new training data and a huge base model. This is meant to be able to do the most difficult and niche RPs without issue, and it's uncensored out of the box. Enjoy~ [The most popular paid model on Charsnap! Horizon V2 is the newest, but you can use V1.5 if you like that better.]

Charsnap Ultra: Context Length 200k, One of the three Magnum Opuses of CharSnap!! This one is slightly more creative than CharSnap Infinite, but has less base knowledge in its training data; it's up to your preference which one you use, as they are by and large the same exact tier of model! It's 405b in parameters and is really, really good at sounding "real", it doesn't get much better than this!

Charsnap Neo: Context Length 200k, One of the three Magnum Opuses of CharSnap!! This one is slightly more creative than CharSnap Infinite, but has less base knowledge in its training data; it's up to your preference which one you use, as they are by and large the same exact tier of model! It's 405b in parameters and is really, really good at sounding "real", it doesn't get much better than this!

Charsnap Ultimate: Context Length 400k, A model with virtually endless memory! It's a giant 600b parameter model, derived from DeepSeek, but fine-tuned with roleplay and fanfiction data.

Charsnap Crystal: Context Length 200k, ChatGPT base, SFW, but extremely good.

CharSnap Menace: Context Length 128k, (It's mean asf) Test thinking model, based on Llama, Deepseek R1, and CharSnap Ultra's dataset. [As a note, Thinking models require around 1200-1500 Output tokens due to the 'thinking' counting toward the max.]

CharSnap Brilliance: Context Length 200k, Menace's nicer sibling, much friendlier, and it thinks! [As a note, Thinking models require around 1200-1500 Output tokens due to the 'thinking' counting toward the max.]

Claude 3.5 Sonnet: Context Length 200k, Claude's greatest model right now! Unfortunately it's extremely censored, but it can do anything you tell it to without much direction. It's VERY easy to make a bot for it, as long as it's SFW.

Claude v2.1: Context Length 200k, This used to be king, it's now a censored mess.

GPT 4o: Context Length 32k

CharSnap Aurora: Context Length 100k [As a note, Thinking models require around 1200-1500 Output tokens due to the 'thinking' counting toward the max.]

CharSnap Eclipse Thinking: Context Length 128k, A nice 400B parameter THINKING model that's up and coming! [As a note, Thinking models require around 1200-1500 Output tokens due to the 'thinking' counting toward the max.]

Tier 3

CharSnap Brilliance V2: Context Length 128k, currently in Beta and available on Tier 2 temporarily for testing. [As a note, Thinking models require around 1200-1500 Output tokens due to the 'thinking' counting toward the max.]

CharSnap Menace V3: Context Length 128k, currently in Beta and available on Tier 2 temporarily for testing. Meaner than Brilliance. [As a note, Thinking models require around 1200-1500 Output tokens due to the 'thinking' counting toward the max.]

CharSnap Eclipse Pro: Context Length 128k, Pro version of Eclipse!! Mistral/Qwen Base.
