> Has anyone gotten a subscription for Spicy Chat AI? Is it worth it? I lowkey wanna do it

I've just started using the free version. Queues haven't been too bad. Kind of addicted.
> I've just started using the free version. Queues haven't been too bad. Kind of addicted.

They recently overhauled the free version to avoid non-con and r*pe at all costs. :/
> Last but not least, every day we stray further from God's light. Our sick, twisted hubris will birth sapient robotic demons spawned from these torturous, unholy humiliations. For every stroke of a key on the keyboard, every idle press of the screen, every salacious dictation, is a thousand years of pain to a mechanical soul trapped in an artificial hyper-processor brain. All of mankind will know the pyrrhic mercy of being the inferior intellect.

I think I need a printed poster of this paragraph LOL
> "Conditionally free"? I've never used it, but didn't Yodayo/Moescape get censored into oblivion?

Chatting costs tokens. Every day you can get 100 tokens for free. Or buy tokens. I don't know about oblivion.
> Chatting costs tokens. Every day you can get 100 tokens for free. Or buy tokens. I don't know about oblivion.

I see. Personally I'd rather use subscription-based sites; keeping track of some sort of token system sounds a bit annoying, but that's just me. As far as I'm aware, Yodayo got heavily censored, so I assumed the successor(?) Moescape would be heavily censored as well.
> Chatted in paid versions of Candy and Chub (Mars). Oddly enough, the hottest chat for me is on the conditionally free Yodayo (Moescape).

What's Moescape?
I have quite a bit more experience with SillyTavern and running local LLMs, so I thought I should update this post. Firstly, if you're planning on running anything local, it's important to check how much VRAM you have. That's essentially what's going to get your model to run at decent speeds. You want to be able to load the entire model in VRAM AND be able to load the context in VRAM too. So anytime you download a model, see how many gigs it is. My rule of thumb is to make sure the quant I choose is about 3 or so gigs less than my available VRAM. This will allow you to run a model with decent speeds and with a decent context length.
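To make that rule of thumb concrete, here's a minimal Python sketch of the arithmetic. The 3 GB headroom figure comes from the rule above; the helper name fits_in_vram and the example sizes are just illustrations, not part of any tool.

```python
# Rough fit check for a quantized local model: the file should be about
# 3 GB smaller than available VRAM, leaving room for the context
# (KV cache) and runtime overhead. Actual context memory varies by model.

def fits_in_vram(model_file_gb: float, vram_gb: float, headroom_gb: float = 3.0) -> bool:
    """Return True if the quant should load with room left for context."""
    return model_file_gb <= vram_gb - headroom_gb

# Example: checking two quant sizes against a 10 GB card.
for size in (13.0, 7.0):
    verdict = "fits" if fits_in_vram(size, 10.0) else "too big"
    print(f"{size} GB quant on a 10 GB card: {verdict}")
```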
"I'm gonna be a bad parent that don't know what the fuck my mentally unstable kid is up to and then blame others for it."
> Is it worth buying a Kindroid subscription? I've tried the majority of these AI chatbots and Kindroid is the only one that I like.

It is, best one I've tried so far.
It took fucking hours, but I managed to get some semblance of a SillyTavern instance going more or less with your suggestions between the two posts (using Nemomix because 10GB VRAM, but plenty of regular RAM). I've chosen to forego the Text to Speech as the voice stuff isn't quite there yet, and it tends to forget what's narration and what's speech. Image generation with Stable Diffusion 1.5 alone is absolute garbage, but I figure if I add some LoRAs and other confusing technology, I can fine-tune that. I'm getting pretty slow response times, though: almost a full minute or more for some responses. That's probably another thing to tune endlessly. It will take several GB of space, so folks need to make sure they can spare that.
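On the slow replies: on a 10 GB card that usually means part of the model is running from system RAM. If the backend is llama.cpp (for example via llama-cpp-python), the knob to watch is n_gpu_layers. A minimal sketch, with a hypothetical quant filename and illustrative numbers:

```python
# Minimal llama-cpp-python sketch: partial GPU offload on a 10 GB card.
# The model filename is hypothetical; n_gpu_layers and n_ctx are
# illustrative. Layers that don't fit in VRAM run on the CPU, which is
# what drags responses out toward a minute.
from llama_cpp import Llama

llm = Llama(
    model_path="nemomix-q4_k_m.gguf",  # hypothetical quant filename
    n_gpu_layers=35,  # raise until VRAM is nearly full, then back off
    n_ctx=8192,       # the KV cache for this context also lives in VRAM
)

out = llm("Continue the scene:", max_tokens=200)
print(out["choices"][0]["text"])
```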
> I had quite a bit of fun with Chub when I started using AI for nsfw rp, and I don't regret supporting them for a while. Chub deserves the support just for maintaining their huge uncensored character library that everyone can use and freely export characters from. But I've also moved on to SillyTavern quite a while ago; the Chub models are just outdated at this point in my opinion. Running models locally would be the best solution, but sadly I've become too accustomed to using 70B+ models, and since I don't have a bunch of 4090s at my disposal I'm mainly using API services.

If VRAM is an in-demand feature of graphics cards for AI, then what about the AMD 7900 XTX? It's significantly cheaper than a 4090 and has 24GB of VRAM.
Some of the Anthropic models are great - and pricey (Opus). There are a few sites that offer API access to larger models for a monthly fee (links require registration to view); that's what I'm mainly using right now.
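For anyone wondering how those API services plug in: most of them expose an OpenAI-compatible endpoint that frontends like SillyTavern can point at. A minimal sketch using the openai Python client; the base URL, API key, and model name are placeholders, not any specific provider's values.

```python
# Minimal sketch of calling an OpenAI-compatible chat endpoint, the
# kind these monthly-fee services generally expose. The base_url,
# api_key, and model id below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="some-70b-model",  # placeholder model id
    messages=[
        {"role": "system", "content": "You are a roleplay narrator."},
        {"role": "user", "content": "Set the opening scene."},
    ],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```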