
NSFW AI chatbots

dusty stu

Well-Known Member
Jan 24, 2018
1,746
1,572
I've just started using the free version. Queues haven't been too bad. Kind of addicted.
They recently overhauled the free version to avoid non-con and r*pe at all costs. :/
The previous version was a bit too horny, though. They zigged too far when a zag would have sufficed.
 

Zilcho

Member
Sep 19, 2024
188
138
Alright, so I hate that almost every AI chatbot thing that's web-based or owned by someone else basically has it in their Privacy Policy that they got your proverbial butthole on record (as a turn of phrase and a manner of speaking) and will share it whenever and with whomever they want. Not to mention, a lot of the popular ones are just the same backend and even frontend with very little differentiation.

As for some that aren't really like that or have workarounds, Sankaku has its own AI chatbot service and it lets you pay with crypto, so that's not too bad if you also use other obfuscation methods. It's also the least restrictive of any of the commonly known AI chatbot services out there in terms of NSFW content. Is it great? I don't know. It has a lot of problems, but it's got a wider range than most AI NSFW chatbots out there and I think a lot more is possible with it. You can get really creative with it.

I've also found that RushChat has a great Privacy Policy and ToS. Unlike some other chat services that use the same frontend, the person(s) running RushChat don't seem to give two shits about how you chat with your AI partner and don't even want to know unless you specifically publish it as public or open a ticket asking for help with your chat session in particular. In multiple documents, they mention that it's your responsibility to keep it within ToS, and if it's not, they tell you to literally just keep it private. No crypto pay options, so that's the only reason I'm not currently using it.

Last but not least, every day we stray further from God's light. Our sick, twisted hubris will birth sapient robotic demons spawned from these torturous, unholy humiliations. For every stroke of a key on the keyboard, every idle press of the screen, every salacious dictation, is a thousand years of pain to a mechanical soul trapped in an artificial hyper-processor brain. All of mankind will know the pyrrhic mercy of being the inferior intellect.

Anyway, there are a couple of threads out there that recommend AI chatbots that run on your own computer locally and offline. I don't have the very best PC, but it's not weak by any means, so it's probably worth checking out.
 

MaxRichard

Member
Oct 7, 2023
387
964
Last but not least, every day we stray further from God's light. Our sick, twisted hubris will birth sapient robotic demons spawned from these torturous, unholy humiliations. For every stroke of a key on the keyboard, every idle press of the screen, every salacious dictation, is a thousand years of pain to a mechanical soul trapped in an artificial hyper-processor brain. All of mankind will know the pyrrhic mercy of being the inferior intellect.
I think I need a printed poster of this paragraph LOL
 
  • Hey there
Reactions: Zilcho

Gordonn

New Member
Mar 12, 2020
8
2
I've chatted in the paid versions of Candy and Chub (Mars). Oddly enough, the hottest chat for me is in the conditionally free Yodayo (Moescape).
 

abyss50055

New Member
Feb 19, 2018
11
9
Chatting costs tokens. Every day you can get 100 tokens for free. Or buy tokens. I don't know about oblivion.
I see, personally I'd rather use subscription based sites, keeping track of some sort of token system sounds a bit annoying, but that's just me. As far as I'm aware, Yodayo got heavily censored, so I assumed the successor(?) Moescape would be heavily censored as well.
 
Oct 12, 2020
19
91
I have quite a bit more experience with Sillytavern and running local LLMs, so I thought I should update this post. Firstly, if you're planning on running anything local, it's important to check how much VRAM you have. That's essentially what's going to get your model to run at decent speeds. You want to be able to load the entire model in VRAM AND be able to load the context in VRAM too. So anytime you download a model, see how many gigs it is. My rule of thumb is to make sure the quant I choose is about 3 or so gigs less than my available VRAM. This will allow you to run a model with decent speeds and with a decent context length.
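To put rough numbers on that (the file sizes below are made-up examples, and the ~3 gig headroom is just my rule of thumb, not a hard spec):

```python
# Rough sketch of the rule of thumb above: the quantized model file
# plus ~3 GB of headroom for context (KV cache) and overhead should
# fit inside your VRAM. Plug in the .gguf size from the download page
# and the VRAM reported by a tool like nvidia-smi.

def fits_in_vram(model_file_gb: float, vram_gb: float, headroom_gb: float = 3.0) -> bool:
    return model_file_gb + headroom_gb <= vram_gb

print(fits_in_vram(13.0, 24.0))  # True  -> full GPU offload, decent speeds
print(fits_in_vram(13.0, 10.0))  # False -> expect partial offload, much slower
```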

Secondly, I used a Q8 quant before, but the quality lost by dropping to a Q4M version or even Q4S doesn't really justify running the Q8. Even if you have enough VRAM to run the Q8, I'd recommend dropping to a lower quant and just raising the context length. There is a limit, though: I don't recommend anything Q3 or lower. I'm kinda mixed on the IQ quants since they're kinda slow in my experience, but it wouldn't hurt to give them a try for yourself.
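If you want a back-of-the-envelope way to compare quants before downloading, parameter count times bits per weight gets you close. The bits-per-weight figures here are rough community approximations, not exact; the real file size on the download page wins:

```python
# Approximate GGUF file sizes from parameter count and bits per weight.
# BPW values are rough approximations and vary slightly per model.
BPW = {"Q8_0": 8.5, "Q6_K": 6.6, "Q4_K_M": 4.8, "Q4_K_S": 4.5, "Q3_K_M": 3.9}

def approx_size_gb(params_billion: float, quant: str) -> float:
    return params_billion * 1e9 * BPW[quant] / 8 / 1e9  # bits -> bytes -> GB

for q in ("Q8_0", "Q4_K_M", "Q4_K_S"):
    print(f"22B at {q}: ~{approx_size_gb(22, q):.1f} GB")
# Q8_0 barely fits a 24 GB card with no room left for context;
# Q4_K_M leaves ~10 GB free for a much longer context.
```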

You should also try the different samplers Kobold and SillyTavern allow. Most of the time you can find the recommended settings on the page the model is found on. It's important to read this part since every model is different: what works for you on one model might be absolute shit on another. The default preset I find myself using the most is Universal Creative, but depending on the model it can make it completely incoherent.
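For reference, these are the kinds of knobs I'm talking about. The values below are just illustrative defaults, NOT the Universal Creative preset; always start from what the model card recommends:

```python
# Illustrative sampler settings of the kind SillyTavern / KoboldCpp expose.
sampler_settings = {
    "temperature": 1.0,         # higher = more creative, eventually incoherent
    "min_p": 0.05,              # drop tokens under 5% of the top token's probability
    "top_p": 0.95,              # nucleus sampling cutoff
    "top_k": 0,                 # 0 disables top-k entirely
    "repetition_penalty": 1.1,  # gently discourage loops
}
```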

There's a lot you can do with SillyTavern, and if you want to, you can have the full VN experience! Using AllTalk and Stable Diffusion, I've been able to create AI-generated VN-like CGs using the 'character expressions' feature and give them voices too! And I've been able to link it all up and create images of the scenario mid-RP! I think SillyTavern is the king of RP, and outside of site-specific models that might give you better text generation, I don't think anything else comes close to the quality it offers.

As far as models go this time around, I still recommend Nemomix Unleashed. It's pretty good, but I use ArliAI's RPMax v1.1 22B a lot more right now. I run it at Q4M with 24k context. The RP experience is a lot better with more context, in my opinion, and this model is really unhinged at times. I like it! If you can't fit that in VRAM, though, Nemomix is a lot better than all of the other 12Bs even now, imo. I haven't really tried any of the new 8B models that have come up, but I'm sure the ones I recommended before hold up fine, especially Stheno and Celeste. Give them a try if you can't run anything else.
 

Geigi

Well-Known Member
Jul 7, 2017
1,044
1,935
Is it worth buying Kindroid subscription? I've tried the majority of these AI chatbots and Kindroid is the only one that I like.
 
  • Like
Reactions: Janetaylor3333

Zilcho

Member
Sep 19, 2024
188
138
It took fucking hours, but I managed to get some semblance of a SillyTavern instance going, more or less with your suggestions between the two posts (using Nemomix because 10GB VRAM, but plenty of regular RAM). I've chosen to forego the text-to-speech, as the voice stuff isn't quite there yet and it tends to forget what's narration and what's speech. Image generation with Stable Diffusion 1.5 alone is absolute garbage, but I figure if I add some LoRAs and other confusing technology, I can fine-tune that. I'm getting pretty slow response times, though: almost a full minute or more for some responses. That's probably another thing to tune endlessly. It all takes several GB of disk space too, so folks need to make sure they can spare that.
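From what I can tell, the slow responses are because the whole model doesn't fit in my 10GB, so KoboldCPP only puts some of the layers on the GPU (its --gpulayers setting) and runs the rest on CPU. A crude way to estimate it (the layer counts and file sizes here are made-up examples, and real layers aren't all equally sized):

```python
# Crude estimate of how many layers fit on the GPU when the model
# doesn't fit entirely in VRAM. Assumes equally sized layers, which
# is only approximately true.

def layers_on_gpu(model_file_gb: float, n_layers: int, vram_budget_gb: float) -> int:
    per_layer_gb = model_file_gb / n_layers
    return min(n_layers, int(vram_budget_gb / per_layer_gb))

print(layers_on_gpu(7.0, 40, 8.0))   # 40 -> whole model on GPU, fast
print(layers_on_gpu(13.0, 56, 8.0))  # 34 -> partial offload, slow replies
```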

Regardless, I've got a somewhat bare-bones version of this thing running after half a day, and the most important part is that the text generation is the best I've used so far. It's really good at what I have it set to, which is NSFW Chat Roleplay. I haven't even tried the other stuff yet (Story, Adventure, etc.). Aside from the monstrous setup requirements and the overuse of my luddite brain, this has been a worthwhile endeavor.

If anyone has a 30-series NVIDIA GPU or better/similar, and enough RAM in your system to take on the extra load that the VRAM can't, this is a viable option. I recommend just getting SillyTavern running with KoboldCPP to start with. That's hard enough for anyone who doesn't mess with command lines or GitHub repositories often. Here's the . Knock yourselves out, I almost did :KEK:

 

deepsauce47

Member
Jan 3, 2023
145
228
Lately I have found the paid Chub models (Mistral Mars & Asha) have taken a hit in quality, and I haven't been using them much.

Instead I have been using the various models available through OpenRouter using SillyTavern... Anthropic's Sonnet 3.5 produces really good results, but the $/token is pretty high. Also, I can't get the impersonate function to work, which is annoying because I have to type out my own unique, high-quality reply every time to get a new generation.
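For anyone curious, SillyTavern is basically just making an OpenAI-style chat completion call against OpenRouter under the hood. Something like this, where the key is a placeholder and the prompt is obviously simplified:

```python
# Minimal sketch of an OpenRouter chat completion request.
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_OPENROUTER_KEY"},  # placeholder key
    json={
        "model": "anthropic/claude-3.5-sonnet",
        "messages": [
            {"role": "system", "content": "You are a roleplay narrator."},
            {"role": "user", "content": "Continue the scene."},
        ],
        "max_tokens": 400,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```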

I have a feeling one of the larger local models would work really well, but they require a monster GPU which I don't have.
 
  • Like
Reactions: Gordonn

abyss50055

New Member
Feb 19, 2018
11
9
I had quite a bit of fun with Chub when I started using AI for NSFW RP, and I don't regret supporting them for a while. Chub deserves the support just for maintaining their huge uncensored character library that everyone can use and freely export characters from. But I moved on to SillyTavern quite a while ago; the Chub models are just outdated at this point, in my opinion. Running models locally would be the best solution, but sadly I've become too accustomed to using 70B+ models, and since I don't have a bunch of 4090s at my disposal, I'm mainly using API services.

Some of the Anthropic models are great - and pricey (Opus :( ). There are a few sites that offer API access to larger models for a monthly fee ( , or ); that's what I'm mainly using right now.
 

Faloth

Newbie
Oct 6, 2022
62
58
I had quite a bit of fun with Chub when I started using AI for NSFW RP, and I don't regret supporting them for a while. Chub deserves the support just for maintaining their huge uncensored character library that everyone can use and freely export characters from. But I moved on to SillyTavern quite a while ago; the Chub models are just outdated at this point, in my opinion. Running models locally would be the best solution, but sadly I've become too accustomed to using 70B+ models, and since I don't have a bunch of 4090s at my disposal, I'm mainly using API services.

Some of the Anthropic models are great - and pricey (Opus :( ). There are a few sites that offer API access to larger models for a monthly fee ( , or ); that's what I'm mainly using right now.
If VRAM is an in-demand feature of graphics cards for AI, then what about the AMD 7900 XTX? It's significantly cheaper than a 4090 and has 24GB of VRAM.
 
  • Like
Reactions: abyss50055