
NSFW AI chatbots

Oct 12, 2020
19
91
It took fucking hours, but I managed to get some semblance of a SillyTavern instance going more or less with your suggestions between the two posts (using Nemomix because of 10GB VRAM, but plenty of regular RAM). I've chosen to forego the Text-to-Speech, as the voice stuff isn't quite there yet and it tends to forget what's narration and what's speech. Image generation with Stable Diffusion 1.5 alone is absolute garbage, but I figure if I add some LoRAs and other confusing technology, I can fine-tune that. I'm getting pretty slow response times, though: almost a full minute or more for some responses. That's probably another thing to tune endlessly. It all takes several GB of disk space, so folks need to make sure they can spare that.

Regardless, I've got a somewhat bare-bones version of this thing running after half a day, and the most important part is that the text generation is the best I've used so far. It's really good at what I have it set to, which is NSFW Chat Roleplay. I haven't even tried the other stuff yet (Story, Adventure, etc.). Aside from the monstrous setup requirements and the overuse of my luddite brain, this has been a worthwhile endeavor.

If anyone has a 30-series NVIDIA GPU or better/similar, and enough RAM in your system to take on the extra load that the VRAM can't, this is a viable option. I recommend just getting SillyTavern running with KoboldCPP to start with. That's hard enough for anyone who doesn't mess with command lines or GitHub repositories often. Here's the . Knock yourselves out, I almost did :KEK:

Glad I could point you in the right direction! If you're having issues with generation speed and you're running Nemomix at Q4_K_M with 10GB VRAM, you should have more than enough VRAM to run it much faster. When you open KoboldCpp, try adjusting the "GPU Layers" setting to the max it'll go (it'll say something like 21/46 on the side of the box; just change the -1 in the box to 46 in this instance to run the whole model on GPU). See if that helps with the speed any. I always run my models at max for ~10 t/s or so, depending on what model I'm using.
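The layer-offload math above can be sketched as a quick back-of-the-envelope check. Everything here is an illustrative assumption, not a measured value (a Q4_K_M quant of a 12B model like Nemomix is on the order of 7.5 GB, and `layers_to_offload` is a hypothetical helper, not part of KoboldCpp):

```python
# Rough sketch: estimate how many transformer layers of a quantized GGUF
# model should fit on the GPU. Sizes are illustrative assumptions.

def layers_to_offload(model_gb: float, vram_gb: float, total_layers: int,
                      overhead_gb: float = 1.5) -> int:
    """Return how many of the model's layers likely fit in VRAM.

    overhead_gb reserves room for the KV cache and CUDA context.
    """
    usable = vram_gb - overhead_gb
    if usable <= 0:
        return 0  # no headroom at all: keep everything in system RAM
    fraction = min(1.0, usable / model_gb)
    return int(total_layers * fraction)

# e.g. a ~7.5 GB Q4 quant with 46 layers on a 10 GB card:
print(layers_to_offload(7.5, 10.0, 46))  # 46 (the whole model fits on GPU)
```

This is why setting GPU Layers to the maximum works here: the whole quant fits in 10 GB with headroom to spare, so nothing needs to spill to system RAM.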

And if you want to use TTS and have it understand narration, go to Extensions > TTS and scroll down to the AllTalk settings, where you can adjust the "Text Not Inside * or " is" setting and set it to character or narrator. Some character cards use white text instead of grey text to describe actions, so you might have to adjust that often. But yeah, the TTS options right now are no ElevenLabs, but after training, AllTalk is much better than you'd expect.
 
  • Yay, update!
Reactions: Zilcho

abyss50055

New Member
Feb 19, 2018
11
9
If VRAM is an in-demand feature of graphics cards for AI, then what about the AMD 7900 XTX? Significantly cheaper than a 4090 and has 24GB of VRAM.
Due to NVIDIA's stranglehold over AI, their cards are always the first that come to my mind, but AMD GPUs are a good suggestion. More affordable than NVIDIA, but still expensive, since multiple of these high-end GPUs are needed to run large models at decent speeds. For now I'll keep using APIs, but I'll look into buying some GPU(s) next year.
 
Oct 12, 2020
19
91
If VRAM is an in-demand feature of graphics cards for AI, then what about the AMD 7900 XTX? Significantly cheaper than a 4090 and has 24GB of VRAM.
VRAM is the most important thing, but it'd be better to just go for a 3090 instead. Same 24GB of VRAM, and it only costs about $500 used or refurbished.
 
  • Like
Reactions: Zilcho

Zilcho

Member
Sep 19, 2024
188
138
Glad I could point you in the right direction! If you're having issues with generation speed and you're running Nemomix at Q4_K_M with 10GB VRAM, you should have more than enough VRAM to run it much faster. When you open KoboldCpp, try adjusting the "GPU Layers" setting to the max it'll go (it'll say something like 21/46 on the side of the box; just change the -1 in the box to 46 in this instance to run the whole model on GPU). See if that helps with the speed any. I always run my models at max for ~10 t/s or so, depending on what model I'm using.

And if you want to use TTS and have it understand narration, go to Extensions > TTS and scroll down to the AllTalk settings, where you can adjust the "Text Not Inside * or " is" setting and set it to character or narrator. Some character cards use white text instead of grey text to describe actions, so you might have to adjust that often. But yeah, the TTS options right now are no ElevenLabs, but after training, AllTalk is much better than you'd expect.
VRAM is king, as you say. I found a decent image model that can kinda sorta generate somewhat acceptable images some of the time, but it eats up all my VRAM without even doing anything. That, in turn, leaves a lot less for Kobold/SillyTavern to work with. Shutting down Stable Diffusion gets me much more acceptable response times (~4-5 tokens per second or higher, as opposed to less than 1 token).

I'm having fun messing with and fine-tuning prompts for Stable Diffusion in its own webui instead of integrating it with SillyTavern, so not having in-line image generation doesn't bother me. As I mentioned previously, it took a lot of work to find something that was only sometimes okay. Now, if there were a way for the Chat Completion API to get priority on the VRAM, or for the Stable Diffusion Python environment to not fucking take up half my GPU memory while idling, that would be ideal. The job is never finished, lads.
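The contention described above is simple arithmetic. Here's a hedged sketch of the VRAM budget; the ~4 GB figure for an idle SD 1.5 session and the helper name are assumptions for illustration only:

```python
# Back-of-the-envelope VRAM budget when Stable Diffusion and an LLM
# share one GPU. All sizes are rough, illustrative assumptions.

def llm_vram_left(total_gb: float, sd_resident_gb: float,
                  overhead_gb: float = 1.0) -> float:
    """VRAM left for the LLM after Stable Diffusion claims its share.

    overhead_gb covers the CUDA context and display output.
    """
    return max(0.0, total_gb - sd_resident_gb - overhead_gb)

# 10 GB card with an idle SD 1.5 session holding roughly 4 GB:
print(llm_vram_left(10.0, 4.0))  # 5.0 GB: too little for a ~7 GB Q4 quant,
                                 # so layers spill to system RAM and t/s tanks
print(llm_vram_left(10.0, 0.0))  # 9.0 GB once SD is shut down
```

This matches the observed behavior: with SD idle in VRAM, the LLM drops below 1 t/s; shut SD down and the whole quant fits again, restoring 4-5 t/s or better.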
 

Yurei

Newbie
Aug 6, 2017
57
96
Like some others here, I started trying these out, and I must say some are pretty impressive.

The one that impressed me most was GPTGirlfriend, but it's expensive; I still hesitate to subscribe.
I tried SpicyChat, but the free version gives me strange things sometimes (or rather, forgets things).

I tried Nomi AI on the free tier, but I don't know, I get the feeling something is missing.
Kindroid isn't too bad; I had a normal conversation about a few things and it was funny.
I haven't tried Nectar AI, but it seems like a good AI chatbot and not too expensive.

If you have anything to say or suggest, feel free.
 

SweatyDevil

Member
Jan 8, 2022
480
1,199
Like some others here, I started trying these out, and I must say some are pretty impressive.

The one that impressed me most was GPTGirlfriend, but it's expensive; I still hesitate to subscribe.
I tried SpicyChat, but the free version gives me strange things sometimes (or rather, forgets things).

I tried Nomi AI on the free tier, but I don't know, I get the feeling something is missing.
Kindroid isn't too bad; I had a normal conversation about a few things and it was funny.
I haven't tried Nectar AI, but it seems like a good AI chatbot and not too expensive.

If you have anything to say or suggest, feel free.
Janitor AI is hands down the best one, in my opinion, and it's fully free. I set it up with the CosmosRP proxy and it's great.
 
  • Heart
  • Thinking Face
Reactions: Zilcho and D0v4hk1n

SweatyDevil

Member
Jan 8, 2022
480
1,199
What is CosmosRP? And is there a way to pay for more tokens? I want MOAR loool
You don't have to pay for CosmosRP; it's fully free and really good. It's one of the models used for chatting with those bots. Just head over to its Discord, where there's everything from tutorials to anything else you need.
 
  • Like
Reactions: D0v4hk1n

Yurei

Newbie
Aug 6, 2017
57
96
I looked a little at Janitor, but I don't know if it's moderated; I saw some things that shouldn't be allowed. It made me a little uncomfortable.

"Your guardian angel who behaves like a ch**d" with a loli picture :s
I came across something else yesterday, but I don't remember which one; it was much the same kind of thing. And I was just browsing trending, if I remember right, or the girls option.
 
  • Haha
Reactions: D0v4hk1n

SweatyDevil

Member
Jan 8, 2022
480
1,199
I looked a little at Janitor, but I don't know if it's moderated; I saw some things that shouldn't be allowed. It made me a little uncomfortable.

"Your guardian angel who behaves like a ch**d" with a loli picture :s
I came across something else yesterday, but I don't remember which one; it was much the same kind of thing. And I was just browsing trending, if I remember right, or the girls option.
That stuff isn't allowed there, if I recall correctly. The mods are just slow at deleting those bots sometimes.
 

D0v4hk1n

Member
Oct 4, 2017
423
592
I looked a little at Janitor, but I don't know if it's moderated; I saw some things that shouldn't be allowed. It made me a little uncomfortable.

"Your guardian angel who behaves like a ch**d" with a loli picture :s
I came across something else yesterday, but I don't remember which one; it was much the same kind of thing. And I was just browsing trending, if I remember right, or the girls option.
I'm against those things as well, but dude, so what? If you start censoring things you don't find comfortable, you'll end up with a platform like Spicy, which doesn't even allow the word "stepsister", if I recall correctly.
 
  • Haha
Reactions: Zilcho and Nadekai

Yurei

Newbie
Aug 6, 2017
57
96
No, they allow it. And I have nothing against taboo things, etc. (even bestiality, even if I find it disgusting).

But if you're okay with child content, good for you; it's not for me. It's a hard limit.
 
  • Wow
Reactions: Zilcho

D0v4hk1n

Member
Oct 4, 2017
423
592
No, they allow it. And I have nothing against taboo things, etc. (even bestiality, even if I find it disgusting).

But if you're okay with child content, good for you; it's not for me. It's a hard limit.
I didn't say I'm okay with it. It's fucked up and completely disgusting, but I've learned that these things are a slippery slope. I'm scared the censorship train wouldn't end there.
 
  • Like
Reactions: Zilcho

Geigi

Well-Known Member
Jul 7, 2017
1,044
1,935
Kindroid is back again with 50 free messages! I've tried the majority of NSFW AI chatbots, and only Kindroid is to my liking.
 
  • Thinking Face
Reactions: Zilcho