Seeking AI girlfriend chat similar to crushon.ai but free and with no message limits

vagex

New Member
Feb 25, 2018
7
4
hello everyone, is there a site similar to crushon.ai where you can chat with different AI characters, one that allows 18+ NSFW content (sexting, horny roleplay, photos, etc.), but with no message limits and for free?
 

Blakeness8

Newbie
Apr 20, 2021
69
57
The one I've been using is . But AFAIK there isn't a photos option, just chatting. The roleplay is decent, but you can only get one response at a time, so no dual chatting with different bots simultaneously, and the wait time for a message depends on the traffic on the page: sometimes the reply is instant, sometimes it takes up to 5 minutes, though it's really unusual for it to take 5 minutes. Also, the search feature is really, REALLY bad, so more often than not you'll be searching every single page until you find what you want. I'll give it a 7/10 just because it's free, and depending on the bot you can get a nice storyline going. (For NSFW chats, make sure to toggle "all". And make sure to select JanitorLLM in the chat's API settings if you don't have an OpenAI key or a KoboldAI.)

I'm also searching for alternatives, so if anyone has more free sites, feel free to post them.

Edit: I forgot to mention, right now the page is having problems, so it's probably not the best time to check it out. lol
 
Last edited:

vagex

New Member
Feb 25, 2018
7
4
what setting is the best for generating horny and realistic replies from bots?
 
Sep 12, 2021
261
397
You can try - it's free to check out, has options for multiple chats, and you can customize the AI in what kind of responses/roleplay it gives, for example more shy, bubbly, or even yandere. It's really decent in my opinion, BUT some of the lovelier features, like really long-term memory, are paid-exclusive.
 

Blakeness8

Newbie
Apr 20, 2021
69
57
what setting is the best for generating horny and realistic replies from bots?
I'm guessing you mean the "Create a Character" section? In which case I really don't know. I'm not a bot creator; I just use already-created bots or import ones from another site (even though is a paid site, you can view bots' settings freely).
 

Fluffywolf

Newbie
Jul 4, 2017
22
28
What is your video card?

I am running a chat AI locally on my 4090. It's quite awesome, and I find it works even better than crushon.ai, at least once you get the hang of it. But you do need a decent video card. I run a 20B text model, which is a great model, but a 4090 is just enough for it IMO. If you have a lower-end video card you might have to use a 13B text model, or maybe even a 7B. Generally, the lower you go, the worse they get; and if you go higher than what your video card can run, the slower they get.

But in any case, it is free and it is private.

Anyway, here is the video I stumbled on that helped me set it up:



A few things I learned that I would add to the video guide concerning roleplay chats (this is for after you've followed the video and installed the web UI):

1.
There are many models ranging from 7B to 70B. If you go for AWQ models (which are the fastest if you have the VRAM for them), it basically means a 7B needs 7 GB VRAM, a 70B needs 70 GB VRAM, etc. So while I can run a 33B model on my 4090, it would come at the cost of speed. 20B seems just right for a 4090.
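The sizing rule above can be written as a tiny helper. Note the one-GB-per-billion-parameters figure is just this thread's rough estimate for AWQ models (weights plus overhead), not an exact formula, and the 0.85 headroom factor below is my own illustrative assumption:

```python
def awq_vram_estimate_gb(params_billion: float) -> float:
    """Rough VRAM needed for an AWQ model, per the thread's rule of
    thumb: about 1 GB per billion parameters (weights + overhead)."""
    return params_billion * 1.0

def largest_comfortable_model(vram_gb: float) -> float:
    """Largest model size (billions of params) that fits with a little
    headroom left for context; the 0.85 factor is a guess."""
    return vram_gb * 0.85

# On a 24 GB RTX 4090 this lands around 20B, matching the post:
# a 20B model fits comfortably, a 33B does not.
```

This also explains the speed comment: once the model spills past VRAM, layers get offloaded and generation slows down sharply.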

There are also CPU models you could try, but I haven't played around much with those.

For my model I use TheBloke_Emerhyst-20B-AWQ
- I found it to be the best-working one of the 20 or so I tried out. I load it with the AutoAWQ loader with max_seq_len 4096.
But this is with a 4090, which has 24 GB VRAM. It is not as fast as I can read, but its speed is just about tolerable for me; any slower and I would lose immersion. But perhaps you don't mind a bigger delay if it means better results, so just try it out, I guess.

The best 13B model that I found for roleplay is this one:
TheBloke/MythoMax-L2-Kimiko-v2-13B-AWQ (AutoAWQ loader)

As for 7B models, I found most of them just not intelligent enough for my roleplays, but here are two you could try:
TheBloke_llama2_7b_chat_uncensored-AWQ (ExLlamav2_HF loader)
TheBloke_Wizard-Vicuna-7B-Uncensored-AWQ (AutoAWQ loader)

The max sequence length can be found on the model pages.

Note that "uncensored" models aren't necessarily the only ones capable of NSFW, and they aren't necessarily good at NSFW either. But you could probably do wackier stuff with them. For my needs, the Emerhyst one is plenty NSFW. ;)


2. Extensions: I also enable the long_replies extension in the Session tab. It seems to work well if I set it to, say, 500 for my roleplays. I also have an automatic1111 web UI for AI art and tried the sd_api_pictures extension, which can be fun, but I found it interferes with my AI's replies too much. Depending on the character you want to roleplay with, though, it could be a nice addition. But you would also need to install automatic1111 and run it with --api on the command line (edit the automatic1111 start batch file and set "set COMMANDLINE_ARGS=--api").
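For reference, the batch-file edit described above looks something like this (assuming the stock automatic1111 webui-user.bat layout; your file may have different lines around it):

```
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--api
call webui.bat
```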

3. Parameter preset: Midnight Enigma works best for me. I tend to choose that, then play around with the temperature if I get weird results. If I get really stuck in a convo on something strange, I sometimes play around with other presets, but Midnight Enigma seems to give me the best replies.
- I also set max_new_tokens to 2024 (my model is a 4096-context model, and for this model in particular this seems to give me the best results, but you may need to set this differently depending on your model). What I have read is that you want it as high as possible until the model starts producing too many strange outputs; then you've set it too high.
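To make that trade-off concrete: max_new_tokens is reserved out of the model's total context window, so whatever you give the reply is taken away from the character card and chat history. A quick sketch, using the 4096/2024 numbers from this post:

```python
def prompt_budget(max_seq_len: int, max_new_tokens: int) -> int:
    """Tokens left for the character card + chat history after
    reserving room for the model's reply."""
    return max_seq_len - max_new_tokens

# With the settings above, only ~2072 tokens remain for history,
# which is one reason a bloated character template costs the bot memory.
print(prompt_budget(4096, 2024))  # 2072
```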

4. Creating a character: I find it is best to keep it as short as possible. What I do is describe the character's and player's core personas with a fairly long description, then add a list of traits below. For example:

{{char}}'s persona: {{char}} is a blah blah blah and a blah blah blah, etc.
{{user}}'s persona: {{user}} is a blah blah blah and a blah blah blah, etc.

{{char}} likes xxxx
{{user}} is a xxxx

etc etc
In the persona I put everything that I don't intend to change throughout the roleplay; these are core traits, likes, and dislikes that won't change. The list of traits, on the other hand, contains facts and details I do expect to change, so as the roleplay develops I can remove/add those as I see fit.

Do note that the larger you make the character template, the more it eats into the bot's memory. So try not to be too verbose or repetitive, and keep it simple.

5. Chat mode: in the chat window settings (below the bot), choose the chat-instruct mode for best results.






That should be enough to get you going. I am still figuring out a lot of stuff myself; I've only been playing with this for two days now.

Note that you can still get some weird results. For example, I had a character that kept trying to set me up with a guy, and even though I added "{{user}} is not gay" to the template, she kept doing it regardless (probably because I left it in the message history a few times by not immediately regenerating the message she output, but instead responding to it, thus leaving the tokens in her history). But I think once you get the hang of it, you start to learn which messages to regenerate and what kind of messages to type to get your desired results.
 
Last edited:

beonk

New Member
Nov 9, 2022
8
0
Chai is pretty good and has a 70-message limit every 3 hours. HiWaifu has an energy system, but it's easy enough to get unlimited messages for free by watching reward ads.
 

pythia23

Member
Aug 2, 2017
108
40
I found some enjoyment with Joyland AI, but eventually hit a wall with the number of regens/suggestions per day. JanitorAI has none of that, and you can have multiple chats open at once (I'm doing all these chats just by going to their websites in a Chrome incognito window, btw), though you do sometimes have to be patient and keep hitting regenerate, because sometimes a message will fail to generate.

Like I said, though, I'm only doing this in a browser. I see people talking about VRAM and the importance of powerful video cards, and I have no idea about that whole side of this. So any advice on that front would be appreciated, lol.
 

Hardcore1234

Newbie
Mar 8, 2017
97
179
I would also recommend "L3-8B-Stheno-v3.2-Q8_0-imat.gguf" for people with low VRAM. You should also use SillyTavern for the maximum VN experience.
 

pythia23

Member
Aug 2, 2017
108
40
Yodayo seems to be going down the shitter. Anyone got an equivalent or better website to recommend?
 
Aug 10, 2022
52
20
Could you explain a little more about what that is, for those of us who don't know? And does it have any additional features like image or voice?
So SillyTavern only provides an interface for interacting with an AI. You need to either run an AI locally or connect to an API. For installation, simply search "How to install SillyTavern" and there will be some video guides to help you out. Once you set SillyTavern up you have the following options: run an AI model locally, use Mancer AI's free model, or use Kobold Horde.

Running an AI locally is recommended if you have a powerful PC. There are also AI models that can run on a CPU. You can search online for how to run AI models locally using KoboldAI. Most of the best AI models are free on Hugging Face, and if you have the specs, this is the best and most reliable option.

As for , the free model is very outdated but still extremely good, though you can only squeeze out about 30-40 messages before the AI starts forgetting too much.

My personal favorite and recommended method is the Kobold Horde option. It is built into SillyTavern: when selecting an API in Tavern, choose "KoboldAI Horde". Basically, very kind people host AI models that you can use without limits, and there are many amazing models to choose from. You can access top-of-the-line models and many other good contenders whenever they're hosted. The major downside is that the wait time to generate messages can be extremely long, depending on traffic. Some days you get instant replies, and others you have to wait around 400 s; also, the people hosting the models can stop at any time. Sometimes they host a model for months, but I've had experiences where it was only a couple of days before they stopped and I had to choose a new model, which was not great.
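As a rough illustration of what the Horde option does under the hood: SillyTavern submits an asynchronous generation request on your behalf and then polls for the result. The sketch below only builds the request; the endpoint path, field names, and the anonymous "0000000000" key are my assumptions about the public AI Horde API, not something from this thread:

```python
def build_horde_payload(prompt: str, models: list, api_key: str):
    """Assemble headers and JSON body for an async text-generation
    request (endpoint assumed: POST /api/v2/generate/text/async)."""
    headers = {"apikey": api_key}  # anonymous key assumed to be "0000000000"
    body = {
        "prompt": prompt,
        "params": {"max_length": 120, "max_context_length": 2048},
        "models": models,  # empty list = accept any hosted model
    }
    return headers, body
```

The polling step is where the long waits described above come from: your job sits in a queue until a volunteer worker running your chosen model picks it up.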
 
May 7, 2023
295
115
Thanks for the detailed answer. I think I'm starting to understand. One question I do have, though: what do you consider, spec-wise, to be a "powerful PC", assuming I wanted to go the local AI route?
 
Aug 10, 2022
52
20
Thanks for the detailed answer. I think I'm starting to understand. One question I do have though, is what do you consider, spec wise, to be a "powerful pc" assuming I wanted to go the local ai route?
Basically a high-end gaming PC to run the good models. You would need a graphics card with 16 GB of VRAM to run the decent ones, and if you are running on CPU, you would need at least 16 GB of RAM. Each model has its own requirements, though. Search the KoboldAI subreddit for more information. is also pretty helpful.