Models
A list containing all the models that CharSnap has to offer!
CharSnap currently has 21 models, some of which have separate servers for EU and NA, and some of which have Legacy versions kept around after a model was changed in some way, in case people liked the old version better. CharSnap also offers tester models each month, or whenever a new base model comes out, but keep in mind that the quality of these is completely up in the air.
Each model has a different base, meaning an underlying model like ChatGPT, Claude, DeepSeek, Llama, etc. These bases are then modified using backend prompts, which become the models we have access to. Each model differs in what it's able to do and what it does best, and these things are subjective. While there is a "most popular" model, that doesn't mean you will like it!
Please note that the Context Length listed is the maximum that model supports. You are still bound by whatever subscription tier you are on, so if you're a free user, you can only use 18k of a model's 32k context length.
Some models have different versions available, please note that each version is going to behave a bit differently.
Note that there is an insane $99/mo tier. If you are on this tier, you have access to the entire Context Memory of the model you're using.
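Put simply, the context you can actually use is the smaller of the model's maximum and your tier's cap. A minimal sketch of that rule (the function name is illustrative, not an official API; the caps come from the examples in the notes above):

```python
def effective_context(model_max_k: int, tier_cap_k: int) -> int:
    """Usable context (in thousands of tokens): the smaller of the
    model's maximum and your subscription tier's cap."""
    return min(model_max_k, tier_cap_k)

# The free-tier example above: an 18k cap on a 32k model
print(effective_context(32, 18))   # 18
# CharSnap+ maxes out at 16k even on a tier with a larger allowance
print(effective_context(16, 200))  # 16
```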
As a note, Thinking models need around 1200-1500 Output tokens of headroom, because the 'thinking' counts toward the output maximum.
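In other words, a thinking model's visible reply is roughly its output-token limit minus the tokens spent on hidden 'thinking'. A rough sketch of that budget (the function name is illustrative, and the 1500-token default is just the high end of the estimate above):

```python
def visible_reply_budget(max_output_tokens: int, thinking_tokens: int = 1500) -> int:
    """Tokens left for the visible reply on a thinking model, after the
    hidden 'thinking' (est. 1200-1500 tokens) is deducted from the
    output limit. Never returns a negative budget."""
    return max(0, max_output_tokens - thinking_tokens)

# With a 2000-token output limit, roughly 500 tokens remain visible
print(visible_reply_budget(2000))  # 500
# An output limit at or below the thinking overhead leaves nothing
print(visible_reply_budget(1200))  # 0
```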
Free Tier
CharSnapLLM v1 (Default Model): Context Length 128k, An awesome model based on Llama, fine-tuned for full NSFW role play, with tons of fanfiction and chatting data trained in too. It's not as good as CharSnap+, or especially Ultra/Infinite, but it gets the job done, and it's better than any other site's free-tier models.
CharSnap Nemo: Context Length 32k, Based on Mistral Nemo, this model excels at immersive narratives and storytelling!
CharSnap Wizard: Context Length 128k, An awesome model based on Wizard 8x22b, fine-tuned for full NSFW role play, with tons of fanfiction and chatting data trained in too. It's not as good as CharSnap+, or especially Ultra/Infinite, but it gets the job done, and it's better than any other site's free-tier models.
CharSnapLLM (Story): Context Length 128k, A story-driven version of CharSnapLLM, meant for bots that tell a story rather than role-play chat.
CharSnap Flash: Context Length 200k, Gemini base, mostly SFW but has a jailbreak on it.
Tier 1
CharSnap+: Context Length 16k, Technically our most creative model! The beta testers didn't let me retire this model because we could never recreate it again, but as you can see, it has a maximum context of only 16k, even if you're on a higher-tier subscription. Feel free to start here and transition into a different, higher-context model afterward! It really is the most creative model out there, at about 170b parameters!
Claude 3.0 Haiku: Context Length 100k, The good ol' Haiku; it has great context and good RP skills!
Claude 3.5 Haiku: Context Length 200k, Claude 3 Haiku's more censored but MUCH more creative counterpart; use it for SFW role plays and it'll excel!
Qwen: Context Length 64k, The greatest model from the east! Good at coding and great at storytelling, but censored on political topics!
Google Gemini: Context Length 200k, Google's latest! [The most popular model on this tier. Gemini 2.0 and 2.5 need to be eased into NSFW scenes. Gemini 2.5 can do NSFW without too many issues.]
Tier 2
CharSnap Horizon: Context Length 200k, A brand new flagship model, with new training data and a huge base model. It's meant to handle the most difficult and niche RPs without issue, and it's uncensored out of the box. Enjoy~ [The most popular paid model on CharSnap! Horizon V2 is the newest, but you can use V1.5 if you like that better.]
CharSnap Ultra: Context Length 200k, One of the three magnum opuses of CharSnap!! This one is slightly more creative than CharSnap Infinite, but has less base knowledge in its training data; which one you use is up to your preference, as they are by and large the same tier of model. It's still over 200b parameters and really, really good at sounding "real"; it doesn't get much better than this!
CharSnap Neo: Context Length 200k, One of the three magnum opuses of CharSnap!! This one is slightly more creative than CharSnap Infinite, but has less base knowledge in its training data; which one you use is up to your preference, as they are by and large the same tier of model. It's still over 200b parameters and really, really good at sounding "real"; it doesn't get much better than this!
CharSnap Ultimate: Context Length 400k, A model with virtually endless memory! It's a giant 600b-parameter model derived from DeepSeek, but fine-tuned on roleplay and fanfiction data.
CharSnap Crystal: Context Length 200k, ChatGPT base, SFW, but extremely good.
CharSnap Menace: Context Length 128k, (It's mean asf) A test thinking model, based on Llama, DeepSeek R1, and CharSnap Ultra's dataset. [Thinking model; the output-token note above applies.]
CharSnap Brilliance: Context Length 200k, Menace's nicer sibling; it's much nicer, and it thinks! [Thinking model; the output-token note above applies.]
Claude 3.5 Sonnet: Context Length 200k, Claude's greatest model right now! Unfortunately it's extremely censored, but it can do anything you tell it to without much direction. It's VERY easy to make a bot for it, as long as it's SFW.
Claude v2.1: Context Length 200k, This used to be king, it's now a censored mess.
GPT 4o: Context Length 32k
CharSnap Aurora: Context Length 100k [Thinking model; the output-token note above applies.]
Tier 3
CharSnap Brilliance V2: Context Length 128k, Currently in beta and temporarily available on Tier 2 for testing. [Thinking model; the output-token note above applies.]
CharSnap Menace V3: Context Length 128k, Currently in beta and temporarily available on Tier 2 for testing. Meaner than Brilliance. [Thinking model; the output-token note above applies.]