
Vesanius

New Member
Feb 17, 2023
10
3
Can we get some kind of guide? Why does everything take so long to load the next set of text or progress the story? Is what we type supposed to be from a narrative perspective, or from the character we are playing? It seems like both work, but it's all very clunky.
When the content is AI-generated, the time it takes to produce a result depends on the AI model you are using. Free online models tend to be slow. Local models will be as fast as your computer allows.
 

Vesanius

New Member
Feb 17, 2023
10
3
I tried some models and the results are bad; 90% of what they produce is nonsense that has nothing to do with the game.
It feels like a lot of models don't work too well. However, the model L3-8B-Stheno-v3.2-Q4_K_M-imat works really well. That's the one I'd suggest starting with, as it's just under 5 GB in size too. It's not the best model out there, but it works well for the game.
 

R1k0

Member
Sep 27, 2017
491
905
It feels like a lot of models don't work too well. However, the model L3-8B-Stheno-v3.2-Q4_K_M-imat works really well. That's the one I'd suggest starting with, as it's just under 5 GB in size too. It's not the best model out there, but it works well for the game.
 

Vesanius

New Member
Feb 17, 2023
10
3
That's normal behaviour of KoboldCpp when the context is full. You can either reduce the character context in game or increase the context size in Kobold; I would recommend at least 8192, or 12k if you can. I'm not entirely sure why it does that, but I believe it's related to context shifting: it needs to drop the oldest context to make room for the new.
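If you start KoboldCpp from the command line, the context size is set with the --contextsize flag; something like the line below (the model file name is just the one suggested earlier in the thread, use whatever GGUF you actually have):

koboldcpp.exe --model L3-8B-Stheno-v3.2-Q4_K_M-imat.gguf --contextsize 8192

There's also a context size slider in the KoboldCpp launcher GUI that does the same thing.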
 
  • Like
Reactions: R1k0

R1k0

Member
Sep 27, 2017
491
905
That's normal behaviour of KoboldCpp when the context is full. You can either reduce the character context in game or increase the context size in Kobold; I would recommend at least 8192, or 12k if you can. I'm not entirely sure why it does that, but I believe it's related to context shifting: it needs to drop the oldest context to make room for the new.
Thanks for the help, but a 12k or 16k context size makes no difference. I guess I'm done.
 

Rigoku

Newbie
Sep 11, 2017
43
53
That's a funny game. In sandbox mode (Fiona), me, Fiona, and a futanari called "Ren" went on an adventure, and some device with a holographic interface started flashing. I never thought I'd play this type of game for the story.
 

SteelTarkus

New Member
May 25, 2021
1
0
From buying Fiona a pastry when my character took her to a coffee shop in sandbox mode, things spiraled to her deciding to make a pastry filled with cum and vaginal secretions (and cream), to eventually her opening an online store dedicated to selling those through a delivery app, with careful budgeting, ingredient purchases and brainstorming new product lines all factored in. Surreal. Next year I will ask her to do my taxes.
10/10
 

Wambos System

New Member
Nov 26, 2022
9
28
Hello, Developer,
I find your game visually stunning and conceptually intriguing. It seems to rely heavily on artificial intelligence, showcasing limited human imagination. While this idea initially fascinated me as an AI and fantasy enthusiast, my enthusiasm quickly faded when I started playing.

After selecting a character, I realized the interactions felt artificial, as if I were engaging with a bot rather than a dynamic AI personality. Instead of immersing me in a fantastical world, the experience felt static, pulling me back into the mechanics of a simulation.

Games are meant to offer an escape from reality, but this one lacked the natural flow and interaction needed for true immersion. I admire the creative effort behind the visuals, but the gameplay could benefit from added depth. Introducing meaningful choices and more lifelike interactions could enhance the player experience significantly.

As a player, I would enjoy a narrative-driven fantasy that combines human creativity with AI capabilities. Your game has great potential, and I believe with refinement, it can reach new heights.

Best regards, and I look forward to its development!
 

Vesanius

New Member
Feb 17, 2023
10
3
Thanks for the help, but a 12k or 16k context size makes no difference. I guess I'm done.
That is weird. I personally have no issues with 8-12k unless I pick two girls (a feature in the current Patreon version). You could try increasing your memory size in game too.
 

Mrblack187

Newbie
Jul 1, 2019
75
165
Thanks for the help, but a 12k or 16k context size makes no difference. I guess I'm done.
Different models have different context sizes and capabilities. One of my favorite models only has a 4k context, which greatly limits its use; you simply cannot push that model to 12/16k. I've got a decent system running a solid 3060 card, and 8k is my comfort zone.

Also, the game will input 'tokens' which eat context. I would suggest breaking the game into short 'chapters' or segments which are designed to be started fresh, i.e. close and clear the past context and tokens. The first entry of a new chapter would be written by the {{user}} as a quick recap of their past play. That should inject the important tokens after a reset without carrying past a model's capability.

Design chapters around an expected context length and require a restart between chapters. I expect the game to have heavy token use: if it injects 4k tokens just to START play, and the average player has a short back-and-forth chat where each reply is less than 100 tokens, you'll get roughly 40 exchanges before a reset is required. I'm a writer and most of my chats are closer to the 300-400 token range (and that's cheap because I limit myself).
Simple chat words are cheap; actions and scene descriptions can get expensive token-wise.
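To put rough numbers on that (the figures below are just this post's estimates, not measurements from the game), the budget works out something like this:

# Rough token budget for one "chapter" (estimates only, not measured from the game)
CONTEXT_SIZE = 8192          # total context the model can hold
SETUP_TOKENS = 4000          # character card / game instructions injected at the start
TOKENS_PER_EXCHANGE = 100    # one short player message plus one short reply

free_tokens = CONTEXT_SIZE - SETUP_TOKENS
exchanges_before_reset = free_tokens // TOKENS_PER_EXCHANGE
print(f"~{exchanges_before_reset} exchanges before old context starts getting trimmed")
# (8192 - 4000) // 100 = 41, i.e. the "roughly 40" above.
# Writer-style replies of 300-400 tokens cut that down to about 10-13 exchanges.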

I still say this game is cutting edge and will require a high knowledge base from the player. I use SillyTavern and it has a wonderful 'regenerate' button on AI replies: basically, if I'm unhappy with a reply, I can delete it and try again. Some models handle regenerating well, with wildly differing outputs, and some are carbon copies.

Currently running
with 8k context. Great model but too predictable; that may be helpful for a game.

Fave:
wild, but only a 4k context max.

Might suggest
a 9B model for average systems, and you can use the Q6 or Q5 quant, which is the best bang for the buck system-wise. Should be rather consistent and less wild with regenerates and replies. Probably needs some solid repetition penalty though.
 
  • Like
Reactions: R1k0 and Vesanius

Lucid Intent

Newbie
Donor
Oct 25, 2017
36
72
Design chapters around an expected context length and require a restart between chapters. I expect the game to have heavy token use: if it injects 4k tokens just to START play, and the average player has a short back-and-forth chat where each reply is less than 100 tokens, you'll get roughly 40 exchanges before a reset is required. I'm a writer and most of my chats are closer to the 300-400 token range (and that's cheap because I limit myself).
Simple chat words are cheap; actions and scene descriptions can get expensive token-wise.
Fully agree. For more than a one-shot encounter, if you're looking for a slower burn or a longer experience over time (e.g. new friend to close friend to intimate), then this is how you do it well and with actual feeling. Setting encounters as scenes and carefully writing the contexts (either as ongoing scene-setting and/or use of the in-game contexts feature [Patreon only atm]) can let you build a longer-term scenario.

This game is best experienced as a narrative rather than fast and dirty satisfaction, though that can also be done. More often than not I'll turn off the sexual-specific contexts for a character to let things build more slowly (the default is designed to initiate a sexual scene after 10 interactions, minimum). Hells, I've even played out something like 10 scenes with a character without any sexual content, simply because I was having fun with the RP and enjoying the AI responses; it reminds me of my time on online text-based RP games some ~25 years ago, and I don't have to wait for someone to be online to play.
 

jbomb

Member
Dec 28, 2017
226
127
Not sure where the series is supposed to end, but after waking up in Multiic I get to Fiona, where she tries to make me remember stuff, and then I get stuck in a loop. Is that just the current end, or am I missing something?
 

WillTellU

Member
Sep 21, 2018
152
152
This was interesting to try. The AI jank is real. And if you want any decent speed, you have to run the AI locally on a GPU, which takes some fiddling, especially if it doesn't have tons of VRAM. I wonder where this game will go.
 

Lucid Intent

Newbie
Donor
Oct 25, 2017
36
72
Hello. I need help, please. Are the generated pictures kept in the game? Can I see them again?
This game is art.
You're asking if there's a way to view the pictures again, I think?

There is a gallery in the game for some of the images, but most of them don't appear anywhere outside of just playing the game, and they get unlocked as you play story mode. In sandbox mode you can pick the displayed image of the characters from the drop-down menu on the right. Aside from that, and some *really* good images the dev posts on Patreon, they only appear as you play.
 
Sep 21, 2019
65
115
Not sure where the series is supposed to end, but after waking up in Multiic I get to Fiona, where she tries to make me remember stuff, and then I get stuck in a loop. Is that just the current end, or am I missing something?
In the game, there's a skip button (top right corner) that activates at such moments. Another way to exit these long 'loops' is to simply express the desire to leave or proceed.

Please let me know which model you're using. If it's Cosmos or a small local model, it's likely that, due to the model's limitations, it can't properly process the game's triggers, preventing them from activating. I recommend a more powerful model: Claude, ChatGPT, Gemini, or a stronger local model.
 
  • Like
Reactions: jbomb

potluckfolly

New Member
Jan 30, 2023
2
1
I've been using
(Q5_1)
And it's just OK. Sometimes it ends up in an emoji loop; not sure why. Anyone have model recommendations that work well for them?
 

legaross

Newbie
Jan 31, 2019
24
23
Can you fix the format of the messages? It seems to be an issue on the client side, with big gaps between paragraphs (like 4 or 5 lines) and a huge gap after the last paragraph, around 10 to 15 lines. Screenshot_2.png
The system replies are OK, as you can see.
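If those gaps are just runs of repeated newlines coming back from the model, even a tiny bit of client-side cleanup before the text is displayed would fix it; a hypothetical sketch (not the game's actual code):

import re

def collapse_blank_lines(reply: str) -> str:
    # Collapse runs of 3+ newlines (the 4-5 line gaps) down to a single blank line
    text = re.sub(r"\n{3,}", "\n\n", reply)
    # Drop the 10-15 blank lines left after the last paragraph
    return text.rstrip() + "\n"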
 

jbomb

Member
Dec 28, 2017
226
127
In the game, there's a skip button (top right corner) that activates at such moments. Another way to exit these long 'loops' is to simply express the desire to leave or proceed.

Please let me know which model you're using. If it's Cosmos or a small local model, it's likely that, due to the model's limitations, it can't properly process the game's triggers, preventing them from activating. I recommend a more powerful model: Claude, ChatGPT, Gemini, or a stronger local model.
I haven't played around with the options too much; there isn't much explanation of those things for people who are completely ignorant, so whatever the default is, I suppose. I will experiment with that, though. I did try the skip option, but it still seemed to be stuck in that loop, like it wanted me to meet a certain condition first, repeatedly.
Edit: OK, I think I see what you mean now. I'm using Gemini 1.5 Flash, as I saw others recommend it and the UI recommends it as well.
 