Princess Groundhog
Well-Known Member
> It seems that JanitorAI is over. A lot of Dead Dove creators' bots are banned, and some of them are deleting their accounts and leaving. The owner and mods are silent about it, and people want answers. Based on what I read, a new ToS is out and JanitorAI succumbed to the payment processors. More here: [link]. I'm so sad because Janitor is the best!

From what I've heard and seen myself, the site is definitely approaching its final days. The level of censorship on the site seems quite harsh to me, to the extent that even a lot of SFW content is not allowed. If your bot's profile picture shows cleavage, it will be taken down. If the image displays any sign of clothed boobs or ass, it will be taken down. If you mention your sister, daughter, or any family member, even if the bot is just for fluff, it will be taken down. I've also heard that the word filtering is quite severe, making it pretty difficult to even write a standard bot description sometimes. On top of that, the default Janitor chat AI is censored as well.
> It seems that JanitorAI is over. A lot of Dead Dove creators' bots are banned …

Noooo!! That's sad, but worry not, my friend. Chub.ai is a good replacement now, and it doesn't have any BS restrictions, for better or worse. Reading this news, I'm so happy I moved to Chub when JAI banned wincest. Banning wincest was the first sign that they were heading down the slope, the good old slippery slope.
> Noooo!! That's sad, but worry not, my friend. Chub.ai is a good replacement now …

Looks promising, gonna check it out.
I came here to share something I really loved with you guys but this is really sad news.
Was gonna talk about Expression Cards and animated GIF portraits on chub. Are you guys seeing this shit?
[link] - CG, like in an RPGM game, for the character, THAT CHANGES DEPENDING ON THE SCENE!
[link] - an ANIMATED GIF profile.
The future is here. This is amazing.
> Looks promising, gonna check it out.

I just finished watching Infinity War lmao
> I'm glad I took the leap and spent an (admittedly) long time learning how to run a local model through KoboldCpp + SillyTavern. Granted, responses can take minutes with my garbage amount of VRAM for a 12B model, but it's free and I found an LLM that writes pretty well.
> Another plus side is that I don't have to worry about a service getting censored. After what happened to Yodayo, I realized this would keep happening until someone reins in the payment processors.

What LLM are you using?
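For anyone tempted to try the same setup: KoboldCpp serves a GGUF model over a local API that SillyTavern can point at. Below is a minimal launch sketch in Python; the model path is a placeholder and the flag values are illustrative assumptions, so check them against your KoboldCpp version.

    # Minimal KoboldCpp launcher sketch; model path and flag values are placeholders.
    import subprocess

    subprocess.run([
        "python", "koboldcpp.py",
        "--model", "models/your-12b-model.Q4_K_M.gguf",  # placeholder path
        "--contextsize", "8192",   # prompt + chat history budget
        "--gpulayers", "24",       # layers offloaded to VRAM; lower this if you run out
        "--port", "5001",          # SillyTavern then connects to http://localhost:5001
    ])

Once it's running, SillyTavern's API connection screen just takes that local address.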
> I'm glad I took the leap and spent an (admittedly) long time learning how to run a local model through KoboldCpp + SillyTavern …

Glad this works for you, man. I'm personally using Chub with a proxy.
> I'm glad I took the leap and spent an (admittedly) long time learning how to run a local model through KoboldCpp + SillyTavern …

Welcome to the LocalHeads, brother. My experience was similar to yours, but I tried it late last year thanks to a suggestion from another user. If you can fit the whole model plus its context space in your VRAM, responses will take seconds, not minutes. It's when it spills over into system RAM that everything grinds down to first gear. Turns out running both the text model and an image model at the same time was too much. I've since obtained much more VRAM, and it's a blast.
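The spillover point is easy to sanity-check with napkin math. A rough sketch follows; every number in it is an illustrative assumption (Q4-ish quantization, a plausible 12B-class shape), not a measurement:

    # Napkin math: do the weights + KV cache fit in VRAM?
    params = 12e9                # 12B parameters
    bytes_per_weight = 0.5       # ~Q4 quantization, roughly half a byte per weight
    weights_gb = params * bytes_per_weight / 1e9

    # KV cache = 2 (K and V) * layers * context * kv_dim * 2 bytes (fp16).
    # kv_dim = kv_heads * head_dim; 1024 assumes grouped-query attention.
    layers, context, kv_dim = 40, 8192, 1024
    kv_cache_gb = 2 * layers * context * kv_dim * 2 / 1e9

    vram_gb = 6.0
    total_gb = weights_gb + kv_cache_gb
    print(f"~{weights_gb:.1f} GB weights + ~{kv_cache_gb:.1f} GB KV cache = ~{total_gb:.1f} GB")
    print("fits in VRAM" if total_gb <= vram_gb else "spills into system RAM -> slow")

On these assumptions a 12B model alone already overflows a 6 GB card, which is exactly the minutes-per-response situation described above.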
> Welcome to the LocalHeads, brother. My experience was similar to yours …

Thanks, friend. My laptop has 6 GB of VRAM, so being able to run a 12B model at all is a miracle.
> Apparently you can get "free" Claude (Sonnet & Opus), Gemini, and GPT with [link]. By "free" I mean you get a lot of credits, and they give you ways to earn more. The only money you might need to pay is if you want to upgrade to pro/dev, and that's optional. I donated $5; don't know if you can donate less to get it.
> Right now, signing up works best with Google, so use an alt account whether you trust this or not.
> Not sure how long this will last, but you can try while it's still available.

Important to note: there's a risk they will read your messages. Nothing is free in this world.
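If you want to script against a credit-based service like that, they usually expose an OpenAI-compatible endpoint. A minimal sketch with the openai Python package; the base URL, API key, and model name below are placeholders, not the actual service's values:

    # pip install openai  (placeholders throughout, not the real service's values)
    from openai import OpenAI

    client = OpenAI(
        base_url="https://example-proxy.invalid/v1",  # hypothetical endpoint
        api_key="YOUR_SERVICE_KEY",                   # whatever key the service issues
    )

    resp = client.chat.completions.create(
        model="claude-sonnet",  # model names vary per service; check their docs
        messages=[{"role": "user", "content": "Hello there."}],
    )
    print(resp.choices[0].message.content)

And the warning above stands: anything you send through a third-party endpoint can be logged, so keep that in mind.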