
NSFW AI chatbots

desmosome

Conversation Conqueror
Sep 5, 2018
6,486
14,807
There's nothing piracy-related about Colab. It's 100% a legit service offered by Google and used to be the meta for using AI for free. However, it's sort of an outdated option, though not necessarily a horrible one. The main problem is that local LLMs will always be weaker than paid options (or those free through OpenRouter). The main benefits of local models are no censorship, total privacy, and no cost.

However, when you use Colab, you're losing some of those benefits. The way Colab works is that you're borrowing a GPU from a Google server. There are actually a few services like this out there, but Google is the only one that offers limited free use. That's where the first problem comes in: Colab isn't totally free, there are daily usage caps. Those caps are fairly decent and can be extended by using lower context sizes and less powerful models, but by doing that you're further limiting the already under-powered local models.
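
If you're curious what you've actually been handed, you can check from a Colab cell. Something like this works (assuming the standard Colab runtime, which ships with PyTorch; the free tier usually hands out a T4):

# Quick check of the GPU Colab has lent you.
# Assumes the standard Colab runtime with PyTorch preinstalled.
import torch

if torch.cuda.is_available():
    gpu = torch.cuda.get_device_properties(0)
    print("GPU:", gpu.name)
    print("VRAM:", round(gpu.total_memory / 1024**3, 1), "GiB")
else:
    print("No GPU assigned - change the runtime type to GPU first.")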

The second significant drawback is privacy. Since this is a Google service, there's no telling what sort of information they're collecting from you. Running a service like this for free would be extremely expensive; they must be making their money back somehow.

As long as you're aware of the drawbacks to using local LLMs and Colab, it's 100% an okay option.
What's the deal with OpenRouter anyways? I'm trying to use free models, but it says I lack credits.
 

PrivateEyes

Member
May 26, 2017
168
289
Okay, probably a really stupid question, but I'm curious anyways.

Anyone know if running a local LLM on a PC uses a lot of power (SillyTavern in my case)? The last thing I want to see is a sudden spike in my electricity bill. Maybe it's just me incorrectly assuming it's the same as crypto mining lol
 

chainedpanda

Active Member
Jun 26, 2017
659
1,182
What's the deal with OpenRouter anyways? I'm trying to use free models, but it says I lack credits.
Depends on what front-end you're using. Some are easier to set up than others. It would likely be best to look it up on Reddit. But if you're using ST:

- Click the plug icon (top bar, second option)
- Under API, select either Chat Completion or Text Completion, then locate the OpenRouter option
- Under OpenRouter Model, find a free option (they're marked as free, or you can look on the OR website)
- Click "Authorize"; it'll open the OR website, just accept. Since it's free, don't bother setting a limit.

Wait for the plug to stop blinking and you're done. You'll see a green circle at the bottom of the plug tab saying "Valid".

If you're using Janitor AI: I did it once, but I don't remember all the steps. I believe you have to go to your OpenRouter settings, select a default model, create an API key, then go to Janitor AI and fill it out there. I don't use Janitor AI in general, so I don't know the best practices. When I tried it, I had to find a card that allowed proxies, then browse through the settings on the card itself to set the model using the API key I created. It was a hassle tbh.

I have 0 credits, and I'm currently using DeepSeek through OpenRouter on SillyTavern right now.
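
If you want to sanity-check the key outside of any front-end, OpenRouter exposes an OpenAI-style API you can hit directly. Rough sketch (the model ID below is just an example of a free listing, swap in whichever free model you actually picked):

# Minimal check that an OpenRouter key works with a free model.
# Requires the "requests" package; the model ID below is only an example.
import requests

API_KEY = "sk-or-..."  # your OpenRouter API key

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "deepseek/deepseek-chat:free",  # example free model ID
        "messages": [{"role": "user", "content": "Say hi in one short sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])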
 

desmosome

Conversation Conqueror
Sep 5, 2018
6,486
14,807
Depends on what front-end you're using. Some are easier to set up than others. It would likely be best to look it up on Reddit. But if you're using ST:

- Click the plug icon (top bar, second option)
- Under API, select either Chat Completion or Text Completion, then locate the OpenRouter option
- Under OpenRouter Model, find a free option (they're marked as free, or you can look on the OR website)
- Click "Authorize"; it'll open the OR website, just accept. Since it's free, don't bother setting a limit.

Wait for the plug to stop blinking and you're done. You'll see a green circle at the bottom of the plug tab saying "Valid".

If you're using Janitor AI: I did it once, but I don't remember all the steps. I believe you have to go to your OpenRouter settings, select a default model, create an API key, then go to Janitor AI and fill it out there. I don't use Janitor AI in general, so I don't know the best practices. When I tried it, I had to find a card that allowed proxies, then browse through the settings on the card itself to set the model using the API key I created. It was a hassle tbh.

I have 0 credits, and I'm currently using DeepSeek through OpenRouter on SillyTavern right now.
I'm using it on Chub. I generated the key, and it says the key is valid, but it also says I lack credits. Why does it need credits if it's a free model?
 

vidzero

Newbie
Jul 9, 2017
15
8
Depends on what front-end you're using. Some are easier to set up than others. It would likely be best to look it up on Reddit. But if you're using ST:

- Click the plug icon (top bar, second option)
- Under API, select either Chat Completion or Text Completion, then locate the OpenRouter option
- Under OpenRouter Model, find a free option (they're marked as free, or you can look on the OR website)
- Click "Authorize"; it'll open the OR website, just accept. Since it's free, don't bother setting a limit.

Wait for the plug to stop blinking and you're done. You'll see a green circle at the bottom of the plug tab saying "Valid".

If you're using Janitor AI: I did it once, but I don't remember all the steps. I believe you have to go to your OpenRouter settings, select a default model, create an API key, then go to Janitor AI and fill it out there. I don't use Janitor AI in general, so I don't know the best practices. When I tried it, I had to find a card that allowed proxies, then browse through the settings on the card itself to set the model using the API key I created. It was a hassle tbh.

I have 0 credits, and I'm currently using DeepSeek through OpenRouter on SillyTavern right now.

I just got DeepSeek to work through OpenRouter, that was a fun process lol.

Do you have any Chat Completion presets for your setup? Any profile you can share?
 

desmosome

Conversation Conqueror
Sep 5, 2018
6,486
14,807
JAI is outdated in many ways and lacking any V2 features, but the most annoying thing about using this site is that it has no option to delete truncated sentences, nor does it have a continue button that finishes off a truncated sentence. You have to fucking delete that partial sentence manually every single message, regardless of what settings you use for "max new tokens".
 

vidzero

Newbie
Jul 9, 2017
15
8
JAI is outdated in many ways and lacking any V2 features, but the most annoying thing about using this site is that it has no option to delete truncated sentences, nor does it have a continue button that finishes off a truncated sentence. You have to fucking delete that partial sentence manually every single message, regardless of what settings you use for "max new tokens".
I noticed that as well, it is rather annoying. SpicyChat seems to automatically handle truncations from what I can see.
 

chainedpanda

Active Member
Jun 26, 2017
659
1,182
I'm using it on Chub. I generated the key, and it says the key is valid, but it also says I lack credits. Why does it need credits if it's a free model?
Absolutely no clue about Chub, I've never used it. OpenRouter clearly works, though; not only do I use it, there are numerous people making guides on Reddit. You may need to ask on Reddit, but there's a good chance the process is a bit convoluted, since it's pretty convoluted on JanitorAI as well.

I just got DeepSeek to work through OpenRouter, that was a fun process lol.

Do you have any Chat Completion presets for your setup? Any profile you can share?
That's not an easy answer. First, it depends on what kind of preset you want, and even then it's annoying.

If you're looking for setting presets, then no, I don't have any. If you switch to Text Completion there are options, including built-in presets. I don't fully understand the differences between Chat and Text Completion, something to do with the intended purpose? Either way, both work fine regardless of use case, and until literally earlier today I was just using Text Completion. Additionally, if you switch to Text Completion, the options are much fewer and less complex. The setup process is identical.
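
As far as I can tell, the practical difference is just the shape of the request: Chat Completion sends a list of role-tagged messages and the provider applies its own chat template, while Text Completion sends one flat prompt string that the front-end has to format itself. A rough sketch of the two payload shapes (the model ID is only a placeholder):

# Same model, two request shapes.

# Chat Completion: structured, role-tagged turns sent to /api/v1/chat/completions.
chat_payload = {
    "model": "some-provider/some-model",
    "messages": [
        {"role": "system", "content": "You are the character. Stay in character."},
        {"role": "user", "content": "Hey, how's it going?"},
    ],
}

# Text Completion: one flat prompt string sent to /api/v1/completions;
# the front-end (ST) stitches the card, history, and instruct template into it.
text_payload = {
    "model": "some-provider/some-model",
    "prompt": "You are the character. Stay in character.\nUser: Hey, how's it going?\nCharacter:",
}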

That said, some paid models restrict which settings can be adjusted, no idea why. If you view your model's page on OpenRouter and expand the provider section, you can see which settings the providers allow you to alter. Additionally, if you're on the overview tab and scroll down, you'll find a "Recommended parameters" section; that's what I used. (For the record, I still refer to the free options on OpenRouter as paid, there just isn't a better term for them yet.)
(Tip: Generally the most important option is temperature, which basically adjusts how random the AI is. Most people set it somewhere in the 0.5-1.2 range for most models.)
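
If you're setting these through an API call rather than the ST sliders, they're just extra fields in the request. The numbers below are only the commonly suggested middle-of-the-road values, not anything a provider mandates; check the model's "Recommended parameters" section:

# Common sampler fields added straight into a completion request payload.
sampler_settings = {
    "temperature": 0.8,   # higher = more random/creative, lower = more predictable
    "top_p": 0.95,        # nucleus sampling cutoff
    "max_tokens": 512,    # cap on how long the reply can get
}

payload = {
    "model": "some-provider/some-model",  # placeholder
    "messages": [{"role": "user", "content": "Write a short greeting."}],
    **sampler_settings,
}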

If you're looking for prompt presets, the answer is both yes and no. There are plenty of presets out there, but most of them are for local LLMs. Although there are local variants of DeepSeek that should/will work with those available on OpenRouter, DeepSeek is way too new for there to be anything useful. Even the available option in ST is outdated and based on an old version. For now, I would recommend either sticking with the 2.5, leaving it as default, or using the simple_proxy_for_tavern option, at least until ST updates to have the updated V3 or R1 variants.
 

vidzero

Newbie
Jul 9, 2017
15
8
Absolutely no clue about Chub, I've never used it. OpenRouter clearly works, though; not only do I use it, there are numerous people making guides on Reddit. You may need to ask on Reddit, but there's a good chance the process is a bit convoluted, since it's pretty convoluted on JanitorAI as well.

That's not an easy answer. First, it depends on what kind of preset you want, and even then it's annoying.

If you're looking for setting presets, then no, I don't have any. If you switch to Text Completion there are options, including built-in presets. I don't fully understand the differences between Chat and Text Completion, something to do with the intended purpose? Either way, both work fine regardless of use case, and until literally earlier today I was just using Text Completion. Additionally, if you switch to Text Completion, the options are much fewer and less complex. The setup process is identical.

That said, some paid models restrict which settings can be adjusted, no idea why. If you view your model's page on OpenRouter and expand the provider section, you can see which settings the providers allow you to alter. Additionally, if you're on the overview tab and scroll down, you'll find a "Recommended parameters" section; that's what I used. (For the record, I still refer to the free options on OpenRouter as paid, there just isn't a better term for them yet.)
(Tip: Generally the most important option is temperature, which basically adjusts how random the AI is. Most people set it somewhere in the 0.5-1.2 range for most models.)

If you're looking for prompt presets, the answer is both yes and no. There are plenty of presets out there, but most of them are for local LLMs. Although there are local variants of DeepSeek that should/will work with those available on OpenRouter, DeepSeek is way too new for there to be anything useful. Even the available option in ST is outdated and based on an old version. For now, I would recommend either sticking with the 2.5, leaving it as default, or using the simple_proxy_for_tavern option, at least until ST updates to have the updated V3 or R1 variants.
Interesting, a lot to parse there. I just wish DeepSeek had better servers; the congestion around night time is pretty insane lol. C'mon China, you can afford more servers...
 

desmosome

Conversation Conqueror
Sep 5, 2018
6,486
14,807
Absolutely no clue about Chub, I've never used it. OpenRouter clearly works, though; not only do I use it, there are numerous people making guides on Reddit. You may need to ask on Reddit, but there's a good chance the process is a bit convoluted, since it's pretty convoluted on JanitorAI as well.
The only thing I can find on Reddit is that you need to have some credits on your account even for the free versions (it won't use the credits, but you need to have them). I won't be doing that.
 

Mr_Ainz

Member
Oct 26, 2017
342
572
Man, I really appreciate the help people provide on this subject, here, on Reddit, etc. It's just that I get all hyped about trying it for myself and then get discouraged by the current limitations of the tech.

It's good, but not good enough to merit the costs (personal opinion)
 

chainedpanda

Active Member
Jun 26, 2017
659
1,182
Man, I really appreciate the help people provide on this subject, here, on Reddit, etc. It's just that I get all hyped about trying it for myself and then get discouraged by the current limitations of the tech.

It's good, but not good enough to merit the costs (personal opinion)
I kind of agree tbh. However, AI is improving daily, and for the most part, I think it's already powerful enough for the vast majority of use cases. The problem, IMO, is that the tech is advancing so fast that the consumer side can't keep up. You basically need a bachelor's degree just to optimize it, and honestly, instead of getting easier, it feels like it's becoming increasingly complex.

As for pricing, I agree the prices are hard to cope with, especially depending on how you use AI. However, we're at a very interesting point in AI development. DeepSeek has completely shaken the industry. Not only is their paid model on par with top models, they also offer it for free. If that wasn't enough, they also released several open-source variants for public use that surpass most available local LLMs.

There really isn't any way to tell how the industry is going to develop from this point onwards. And this isn't even counting the current American political climate. Many tech companies are changing their policies with the new president. Plus, several major AI companies have already begun seeking deals with the new administration, which is likely to shake the industry as well, given time.

Are there any risks for your account when using Deepseek for NSFW AI roleplay?
I haven't experienced any thus far. I've talked about some pretty unethical stuff, yet I've had no issues. Even on the DeepSeek website, I've received no warnings thus far, just the bot's response getting deleted and replaced with a message claiming it can't discuss something.

When using a 3rd-party API, I've not had any issues either. In fact, I didn't even need to sign into my DeepSeek account to access it.
 

nudepx

New Member
Feb 19, 2025
6
67
How would you go about running a local LLM in the first place? Any tips on how or where to get started?
Ollama for downloading and running models, OpenChat for a GUI similar to ChatGPT.
Very easy to set up, and Ollama has some uncensored LLMs in its library.
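
Once you've pulled a model (ollama pull whatever-model from a terminal), Ollama also runs a local HTTP API you can script against. Rough example (the model name is just a placeholder for whatever you actually pulled):

# Talk to a locally running Ollama server (default port 11434).
# Assumes you've already run `ollama pull <model>`.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "some-local-model",  # placeholder
        "messages": [{"role": "user", "content": "Introduce yourself in one line."}],
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])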
 

vidzero

Newbie
Jul 9, 2017
15
8
Are there any risks for your account when using Deepseek for NSFW AI roleplay?
I noticed that on the main DeepSeek website, once it deletes one of your posts for lewdness, you suddenly start getting "server busy" messages. I think it dumps ERPers into a "pervert queue", for lack of a better term.
 

desmosome

Conversation Conqueror
Sep 5, 2018
6,486
14,807
I actually got OpenRouter to work on JAI. I'm trying the free models, but not all of them are tuned for roleplay, and some return gibberish. MythoMax (free) is also complete garbage for me, not sure why, since it's supposed to be the best model for roleplay. The various free versions of Gemini are really good, but they're censored to hell. A jailbreak sometimes works, but it still always cuts off the message when it realizes it's writing NSFW shit.

I'm sure spending money on it could lead to good results, but for free proxies, the Colab link I listed gave me a better experience so far. I really like Violet Twilight (12B) and Mag-Mell (12B) from there. I want to try the 22B models from there, but they never seem to work for me.