
NSFW AI chatbots

SweatyDevil

Active Member
Jan 8, 2022
808
2,266
367
It seems that JanitorAI is over. A lot of Dead Dove creators' bots have been banned, and some of them are deleting their accounts and leaving. The owner and mods are silent about it and people want answers. Based on what I read, a new ToS is out and JanitorAI succumbed to payment processors. More here I'm so sad because Janitor is the best!
From what I've heard and seen myself, the site is definitely approaching its final days. The level of censorship on the site seems quite harsh to me, to the extent that even a lot of SFW content is not allowed. If your bot's profile picture shows cleavage, it will be taken down. If the image displays any sign of clothed boobs or ass, it will be taken down. If you mention your sister, daughter, or any family member, even if the bot is just for fluff, it will be taken down. I've also heard that the censorship of 'words' is quite severe, making it pretty difficult to even create a standard bot description sometimes. Additionally, the default Janitor chat AI is censored as well.

If I remember correctly, the censorship started at first only because they wanted to "please" some investors, and it got stricter later on because they wanted to add a premium tier to the site and make sure they wouldn't get into trouble with payment processors.
 

D0v4hk1n

Active Member
Oct 4, 2017
883
1,243
246
It seems that JanitorAI is over. A lot of Dead Dove creators' bots have been banned, and some of them are deleting their accounts and leaving. The owner and mods are silent about it and people want answers. Based on what I read, a new ToS is out and JanitorAI succumbed to payment processors. More here I'm so sad because Janitor is the best!
Noooo!! That's sad, but worry not, my friend. Chub.ai is a good replacement now and it doesn't have any BS restrictions, for better or worse. Reading this news, I'm so happy I moved to Chub when JAI banned wincest. Banning wincest is the first sign that they're going down the good old slippery slope.

I came here to share something I really loved with you guys but this is really sad news.

Was gonna talk about Expression Cards and animated GIF portraits on chub. Are you guys seeing this shit?

- CG like RPGM for the character THAT CHANGES DEPENDING ON THE SCENE!

- ANIMATED GIF profile.

The future is here. This is amazing.
 

ThanefromKaos

Active Member
Jan 30, 2023
581
867
226
Noooo!! That's sad, but worry not, my friend. Chub.ai is a good replacement now and it doesn't have any BS restrictions, for better or worse. Reading this news, I'm so happy I moved to Chub when JAI banned wincest. Banning wincest is the first sign that they're going down the good old slippery slope.

I came here to share something I really loved with you guys but this is really sad news.

Was gonna talk about Expression Cards and animated GIF portraits on chub. Are you guys seeing this shit?

- CG like RPGM for the character THAT CHANGES DEPENDING ON THE SCENE!

- ANIMATED GIF profile.

The future is here. This is amazing.
Looks promising, gonna check it out
 
  • Jizzed my pants
Reactions: D0v4hk1n

PrivateEyes

Member
May 26, 2017
242
412
285
I'm glad I took the leap and spent an (admittedly) long time learning how to run a local model through KoboldCpp + SillyTavern. Granted, responses can take minutes with my garbage amount of VRAM for a 12B model, but it's free and I found an LLM that writes pretty well.

Another plus side is that I don't have to worry about a service getting censored. After what happened to Yodayo, I realized this would keep happening until someone reins in the payment processors.
 
  • Like
  • Yay, update!
Reactions: Zilcho and D0v4hk1n

fbass

Active Member
May 18, 2017
726
1,073
388
I'm glad I took the leap and spent an (admittedly) long time learning how to run a local model through KoboldCpp + SillyTavern. Granted, responses can take minutes with my garbage amount of VRAM for a 12B model, but it's free and I found an LLM that writes pretty well.

Another plus side is that I don't have to worry about a service getting censored. After what happened to Yodayo, I realized this would keep happening until someone reins in the payment processors.
What LLM are you using?
 

D0v4hk1n

Active Member
Oct 4, 2017
883
1,243
246
I'm glad I took the leap and spent an (admittedly) long time learning how to run a local model through KoboldCpp + SillyTavern. Granted, responses can take minutes with my garbage amount of VRAM for a 12B model, but it's free and I found an LLM that writes pretty well.

Another plus side is that I don't have to worry about a service getting censored. After what happened to Yodayo, I realized this would keep happening until someone reins in the payment processors.
Glad this works for you man. I am personally using Chub with proxy
 

Zilcho

Member
Sep 19, 2024
300
391
97
I'm glad I took the leap and spent an (admittedly) long time learning how to run a local model through KoboldCpp + SillyTavern. Granted, responses can take minutes with my garbage amount of VRAM for a 12B model, but it's free and I found an LLM that writes pretty well.
Welcome to the LocalHeads, brother. My experience was similar to yours, but I tried it late last year thanks to the suggestion from another user. If you can contain all of your model + context space in your VRAM, responses will take seconds and not minutes. It's when it spills over into System RAM that everything grinds down to 1st gear. Turns out running both the text model and an image model at the same time was too much :KEK: I've since obtained much more VRAM, and it's a blast.
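The "fits in VRAM vs spills into system RAM" point above can be sanity-checked with some back-of-envelope arithmetic. This is a rough sketch with assumed constants (roughly 0.58 bytes per parameter for a Q4-style quant, an assumed KV-cache cost per 1k tokens of context, and a fixed overhead figure); real numbers vary a lot by model and backend.

```python
# Rough estimate of whether a quantized model + context fits in VRAM.
# All constants here are ballpark assumptions, not measured values;
# treat this as a sanity check, not gospel.

def fits_in_vram(params_billion, vram_gb, ctx_tokens,
                 bytes_per_param=0.58, kv_gb_per_1k_ctx=0.12,
                 overhead_gb=0.8):
    """Return (fits, needed_gb) for a GGUF-style quantized model."""
    weights_gb = params_billion * 1e9 * bytes_per_param / 1024**3
    kv_gb = ctx_tokens / 1000 * kv_gb_per_1k_ctx
    needed = weights_gb + kv_gb + overhead_gb
    return needed <= vram_gb, round(needed, 1)

# A 12B model at 8k context on a 6 GB laptop GPU: nowhere near fitting,
# so layers spill into system RAM and generation crawls.
print(fits_in_vram(12, 6, 8192))
# The same model on a 16 GB card fits entirely, so it stays fast.
print(fits_in_vram(12, 16, 8192))
```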
 

PrivateEyes

Member
May 26, 2017
242
412
285
Welcome to the LocalHeads, brother. My experience was similar to yours, but I tried it late last year thanks to the suggestion from another user. If you can contain all of your model + context space in your VRAM, responses will take seconds and not minutes. It's when it spills over into System RAM that everything grinds down to 1st gear. Turns out running both the text model and an image model at the same time was too much :KEK: I've since obtained much more VRAM, and it's a blast.
Thanks, friend. My laptop has 6 GB of VRAM, so being able to run a 12B model at all is a miracle :HideThePain:
It's the main reason why I run KoboldCpp. It's the only backend I've used that allows offloading.
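For anyone wondering what "offloading" means in practice: KoboldCpp's `--gpulayers` flag puts the first N transformer layers on the GPU and runs the rest on the CPU from system RAM. A rough way to pick N is sketched below; the even-split assumption and the reserve figure are mine, and the example file size and layer count are hypothetical.

```python
# Hypothetical helper for picking a --gpulayers value for KoboldCpp.
# Assumes the model file's size is spread evenly across its layers and
# reserves some VRAM for context/scratch buffers -- both assumptions.

def suggest_gpulayers(model_file_gb, n_layers, vram_gb, reserve_gb=1.5):
    per_layer_gb = model_file_gb / n_layers
    usable = max(vram_gb - reserve_gb, 0)
    return min(n_layers, int(usable / per_layer_gb))

# e.g. a ~7 GB quantized 12B GGUF with 40 layers on a 6 GB laptop GPU:
# only part of the model fits on the GPU, the rest runs on the CPU.
print(suggest_gpulayers(7.0, 40, 6))   # -> 25
# With 16 GB of VRAM, every layer fits.
print(suggest_gpulayers(7.0, 40, 16))  # -> 40
```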
 
  • Sad
Reactions: Zilcho

tretch95

Well-Known Member
Nov 5, 2022
1,427
2,699
387
Quick update on the perchance.org LLM update disaster: the owner/dev actually posted that he's aware of the issues and wants to thank everyone for the complaints and reports.



Apparently the new model is indeed DeepSeek, as another user posted evidence in one of the replies in that thread (the AI not only answers what model it is but gives a whole commercial spiel).

Well, that would explain why it often sounds more like a machine translation from Chinese, with sentences missing all kinds of required syntax elements.



I also gave the perchance AI-RPG another shot today, and the result:

> MC 18yo has to spend time during summer break with his grounded sister 16yo (given scenario)
> MC helpfully wipes some jam off his sister's face during breakfast
> (awkward moment)
> Sister: "You... you had jam on your face..."

So this DeepSeek model isn't even able to maintain the most basic logical context. In this example, it completely switched the actor roles from what it wrote three lines earlier: it wasn't the MC who had jam on his face, but the sister.

And this happens in about every third paragraph if you just let the AI write the story without giving exact instructions for the NPCs (even though the input is supposed to be only for player actions).

Thus I can't really understand what the dev means by "more intelligent", because this model is so much more stupid than the old Llama model ever could be.
 

Kraos

Member
Jan 8, 2018
215
330
183
Apparently you can get "free" Claude (Sonnet & Opus), Gemini, and GPT with . By "free" I mean you get a lot of credits, and they give you ways to earn more. The only money you might need to pay is if you want to upgrade to pro/dev, and that's optional. I donated $5; don't know if you can donate less to get it.
Right now, signing up works best with Google, so use an alt account whether you trust this or not.

Not sure how long this will last, but you can try it while it's still available.
 
  • Like
Reactions: fbass

D0v4hk1n

Active Member
Oct 4, 2017
883
1,243
246
Apparently you can get "free" Claude (Sonnet & Opus), Gemini, and GPT with . By "free" I mean you get a lot of credits, and they give you ways to earn more. The only money you might need to pay is if you want to upgrade to pro/dev, and that's optional. I donated $5; don't know if you can donate less to get it.
Right now, signing up works best with Google, so use an alt account whether you trust this or not.

Not sure how long this will last, but you can try it while it's still available.
Important to note there's a risk they will read your messages. Nothing is free in this world.
 
  • Like
Reactions: Zilcho and fbass

desmosome

Conversation Conqueror
Sep 5, 2018
6,827
15,435
864
I gave up chatbots after Google banned my throwaway accounts. Curious about Gemini 3. Anyone tried it for gooning?
 

fbass

Active Member
May 18, 2017
726
1,073
388
I gave up chatbots after Google banned my throwaway accounts. Curious about Gemini 3. Anyone tried it for gooning?
I haven't been able to get it to work yet, at least not the free version. I've heard it's the best though.
 

80773

Newbie
Apr 25, 2019
49
58
78
I use LLMs (chatbots) locally with LM Studio. With that, I use the 'system prompt' to tell the AI how I want it to behave.
So it doesn't matter which AI model I use; they act as told. No limits, no censorship, only your own rules. But it works better with an uncensored version that has actual training on lewd stuff; I'm tired of ChatGPT wanting me to fuck their nose or other things that don't make sense. Another trick is to use "role-play as" followed by what you want it to do; that seems to do the trick for some AIs but not all of them.
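LM Studio's local server speaks the OpenAI-style chat API (on localhost port 1234 by default), so the system-prompt trick boils down to putting your rules in the first message of every request. A minimal sketch; the model name and the rule text here are placeholders, not recommendations.

```python
import json

# Sketch of the LM Studio approach: the system message carries your
# rules, and every chat request sends it before the user's message.
# "local-model" and the rules below are placeholder values.

def build_chat_request(system_rules, user_message,
                       model="local-model", temperature=0.8):
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_rules},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request(
    "Role-play as the character described below. Stay in character "
    "at all times and follow only these rules.",
    "Hello there.",
)
# POST this to http://localhost:1234/v1/chat/completions with any HTTP
# client (requests, curl, or the openai package pointed at the local
# base_url) -- the system message always rides along first.
print(json.dumps(payload, indent=2))
```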

I know almost nothing about online AI chatbot services, other than that you might find free and open AI on, for example, " " to try out, but they often seem to be limited.
 

voxsunderland

Newbie
Feb 22, 2021
76
158
157
I was only recently able to get into this, because I had to wait for ROCm to mature enough for my 9070 XT to show its power.

I had a very poor experience previously with my RTX 3070, so I recently revisited this tech with low expectations. I'm talking about responses to a moderate greeting under context prompts that took MINUTES to output.

Now most responses come out almost INSTANTLY. Like, what the hell, this is possible on my LOCAL machine with a MODERATE rig? Granted, 9070 XT prices have ballooned recently too, but I bought mine when no one cared about it.

Thanks to ROCm having sufficiently matured, I have gone from being an AMD GPU defeatist to setting up LLMs on my main rig to use with my work laptop for vibe coding, & getting railed to the point of being broken in my latest roleplay. lol.