
ShinUchihaAnkit

New Member
Nov 17, 2021
1
0
Yo boss, awesome game. I was wondering, how do I get a 5+/5 rating after sex? I've easily achieved it with some characters, but with this one character I've let her take control or taken control myself and done everything, yet she still only gives a 2-3 rating.
 

ThanefromKaos

Formerly 'KrisfromKaos'
Jan 30, 2023
271
244
Wow, that's a lot of good new stuff, I actually love it.

Now all I want is for men to be able to wear female clothing, and challenge minigames (like quizzes, luck games, or fishing).
 
  • Like
Reactions: Ssato243

Overlord070

Well-Known Member
Jan 21, 2021
1,311
2,514
Wow, that's a lot of good new stuff, I actually love it.

Now all I want is for men to be able to wear female clothing, and challenge minigames (like quizzes, luck games, or fishing).
I would love for the trans characters to have the male sex positions included; it's kind of weird that they don't have access to some of them.
 

kkai

Member
Dec 21, 2016
181
315
Oh, I'm excited about the local LLM support! So many people assume nobody is willing to run local generative AI for stuff like this, but I love it. I already use KoboldCPP (and TabbyAPI) for SillyTavern.
 

PrinceCydon

Member
Apr 16, 2020
124
356
Can someone explain how the AI works? Specifically, what do the options on the preferences screen mean? If I just leave them at the defaults, will I still get AI chat in the game?
 
  • Like
Reactions: Dnds

XziebertX

Newbie
Jan 24, 2018
79
65
To be honest, this game needs a walkthrough for every sex animation in-game (the walkthrough is only for teacher jobs, right?).

Also, do you guys have any non-con content towards the MC, or for the MC to do?
 

Aquilez

Member
Jun 26, 2019
287
367
I thought I'd explain how the AI chatting works. The recommended LLM is an 11B, which can be run easily on 16 GB of VRAM. I'm using Kobold.

It seems the initial prompt is around 1,000 tokens long. I suspect running the LLM at a context limit of ~8,000 is fine. The prompt includes all the stats of the characters. You can type whatever you want when communicating with the characters in game, and the LLM will generate the response. Obviously an 11B is not SUPER smart, but it's good enough.
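
If you want to poke at the backend yourself, here's roughly what a generate request to a local KoboldCPP instance looks like. The endpoint and port are KoboldCPP's defaults; the prompt text and sampler settings below are placeholders I made up, not what the game actually sends.

Python:
import requests

# Minimal sketch: query a local KoboldCPP instance directly.
# Port 5001 is KoboldCPP's default; the prompt is a stand-in for the
# game's ~1,000-token block of character stats.
payload = {
    "prompt": "Character stats and scene setup go here...\nPlayer: Hi!\nCharacter:",
    "max_context_length": 8192,  # ~8k context seems to be plenty
    "max_length": 150,           # tokens to generate for one reply
    "temperature": 0.7,          # placeholder sampler setting
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=120)
print(r.json()["results"][0]["text"])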


[Attached: two in-game screenshots]

Sometimes it works fine and other times it crashes the game.

Whatever you type into the chatbox is weighed against the character's likes/dislikes and adds to or takes away from the friendship/love points you have with that character.
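
This is purely my guess at how that scoring works — none of these names come from the game's code — but it's probably simple keyword matching along these lines:

Python:
# Hypothetical sketch of likes/dislikes scoring; the function name,
# weights, and data shapes are all invented for illustration.
def score_message(message: str, likes: set[str], dislikes: set[str]) -> int:
    text = message.lower()
    delta = 0
    for topic in likes:
        if topic in text:
            delta += 1  # mentioning a liked topic gains a point
    for topic in dislikes:
        if topic in text:
            delta -= 1  # mentioning a disliked topic loses a point
    return delta

# e.g. score_message("want to go fishing?", {"fishing"}, {"smoking"}) -> 1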

They used the LLM in clever ways, so all the information you'd get in regular game chat can also be gained in AI chat. If you ask for a phone number and the AI responds with a yes, that character gets added to your in-game contacts list, plus you get a friendship point boost. The same happens when you ask about interests. However, it's not a back-and-forth conversation; you're always the one asking the questions, due to how the AI works.
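
Again, just speculation on my part, but the trigger probably amounts to scanning the AI's reply for an affirmative after a recognized request, something like:

Python:
# Hypothetical sketch of an AI reply driving a game event; the names
# and game_state layout are invented, not taken from the game.
AFFIRMATIVES = ("yes", "sure", "of course", "okay")

def handle_phone_number_request(ai_reply: str, game_state: dict, char_id: str) -> None:
    """If the AI agreed, add the character to contacts and boost friendship."""
    if any(word in ai_reply.lower() for word in AFFIRMATIVES):
        game_state.setdefault("contacts", set()).add(char_id)
        friendship = game_state.setdefault("friendship", {})
        friendship[char_id] = friendship.get(char_id, 0) + 1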

I'm a little too lazy to organize this prompt, but this is the information that's sent to the AI so it can generate a response.

[Attached: screenshot of the full prompt sent to the AI]

You can describe your character's actions, and the character you're talking to will respond with described actions of their own.
 
Last edited:

Dnds

Out to Play
Donor
Jul 5, 2017
319
406
I thought I'd explain how the AI chatting works. The recommended LLM is an 11B, which can be run easily on 16 GB of VRAM. I'm using Kobold.

It seems the initial prompt is around 1,000 tokens long. I suspect running the LLM at a context limit of ~8,000 is fine. The prompt includes all the stats of the characters. You can type whatever you want when communicating with the characters in game, and the LLM will generate the response. Obviously an 11B is not SUPER smart, but it's good enough.


[Attached: two in-game screenshots]

Sometimes it works fine and other times it crashes the game.
How do you configure it to make it work, though? I'm getting a "something went wrong generating the answer" error.
 

Aquilez

Member
Jun 26, 2019
287
367
Was trying AI Horde since it's apparently the free one, even selected all models. All the other ones require a base IP and port.
Yeah, it doesn't look like the Horde is working for me either. I'm not getting any sort of error log to hint at what's wrong.
 
  • Like
Reactions: Dnds

kkai

Member
Dec 21, 2016
181
315
Local LLMs are free too, aside from the VRAM requirements and your power bill. On the bright side, if it's winter where you are, your PC can double as an effective space heater!

Several backends, including KoboldCPP (which I recommend), can split layers between GPU and CPU, making the VRAM requirements a little easier to meet, but generation will be a good bit slower if you do split. Usually it's best to pick an LLM with a parameter count and quant that lets you fit all layers on your GPU, or at least most of them. For KCPP you'll want a GGUF quant, which is nice and easy since it's just one file.

It's worth learning how to use a VRAM calculator to pick an appropriate LLM quant.
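
If you'd rather sanity-check the calculator, the back-of-the-envelope math is easy to do yourself. Rough rule of thumb only — real usage varies with backend, context size, and KV cache settings — and the overhead figure here is my own assumption:

Python:
# Rough VRAM estimate for a GGUF quant: weights plus a fudge factor.
def est_vram_gb(params_billion: float, bits_per_weight: float,
                overhead_gb: float = 1.5) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # Q4_K_M is ~4.5 bpw
    return weights_gb + overhead_gb  # overhead ~ KV cache + buffers (assumed)

# An 11B at ~4.5 bpw comes out around 7.7 GB, so it fits in 16 GB with
# plenty of room for context.
print(f"{est_vram_gb(11, 4.5):.1f} GB")

If the estimate comes out over your VRAM, that's when the GPU/CPU layer split I mentioned above becomes worth the speed hit.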
 