0tt0von

Member
Dec 19, 2022
228
412
It's good that the dev is constantly working on the game and that it gets regular updates, but it would be nice if some of those updates moved the main story forward instead of only improving the sandbox.
 
Jun 3, 2023
29
9
You're the only one I've seen post an example of a model that's actually been used. I tried Moistral-11B-v3-f16 and it runs very slowly, 2-3 words per minute. Does it need a very powerful computer, or what? Everything else I tried with this Kobold didn't work at all. It's not clear which models are used for this or how to use them.
How do you hook up koboldcpp? I tried and I can't get it to work.
 

myxlmynx

New Member
Oct 26, 2017
4
5
A few local models I've tried:
  • (Q3_K_S) worked very well in story mode, but tended to be a bit liberal with the emojis in sandbox mode;
  • (Q4_K_S) and variants seemed to have better overall consistency;
  • (Q4_K_M) had some of the best writing in a few scenes, but it pretty much avoided using emojis.
For anyone who is struggling to make local models work: you need a GGUF small enough to leave some free space for the context and general video usage (there are online calculators for this). For instance, I have 8 GB of VRAM, so I only try GGUF files under 6 GB, and most under 5 GB.
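The sizing rule above can be sketched as a quick check. A minimal sketch; the 2 GB headroom default is my assumption for context plus desktop video usage, not the poster's exact number:

```python
def fits_in_vram(gguf_size_gb: float, vram_gb: float, headroom_gb: float = 2.0) -> bool:
    """Rough check: the GGUF file must leave headroom for the context
    (KV cache) and general desktop video usage. Headroom is a guess."""
    return gguf_size_gb <= vram_gb - headroom_gb

# With 8 GB VRAM, a 5 GB quant fits comfortably; a 7 GB one does not
# and will likely spill to CPU/RAM, which is much slower.
print(fits_in_vram(5.0, 8.0))  # True
print(fits_in_vram(7.0, 8.0))  # False
```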

I suggest first trying to make koboldcpp itself work with a very small model; is one of the smallest I know that I can still have fun roleplaying with. Load that one in Koboldcpp, open the web interface, click Scenarios, and chat a little bit with Tiff or Nail. CPU usage should be fairly low.
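Once koboldcpp is running, its web interface talks to a local KoboldAI-compatible API (by default on port 5001). A minimal sketch of calling that endpoint from Python, assuming the default port and a server already running with a model loaded; the payload fields shown are common KoboldAI generation parameters, so check your version's API docs for the full list:

```python
import json
import urllib.request

def build_payload(prompt: str, max_length: int = 80, temperature: float = 0.7) -> dict:
    # Common KoboldAI-API generation parameters (assumed subset).
    return {"prompt": prompt, "max_length": max_length, "temperature": temperature}

def generate(prompt: str, base_url: str = "http://localhost:5001") -> str:
    """POST to koboldcpp's KoboldAI-compatible endpoint; needs a running server."""
    req = urllib.request.Request(
        base_url + "/api/v1/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

# Example (only works with koboldcpp running locally):
#   generate("You meet Tiff at the tavern.")
```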

You're the only one I've seen post an example of a model that's actually been used. I tried Moistral-11B-v3-f16 and it runs very slowly, 2-3 words per minute. Does it need a very powerful computer, or what? Everything else I tried with this Kobold didn't work at all. It's not clear which models are used for this or how to use them.
That particular quant is very big; it does need either a powerful GPU or CPU, and plenty of RAM. I'd guess it's falling back to CPU.

As a reference, with a similar 11B model I have here on a smaller quant (Fimbulvetr-11B-v2.1-16K.i1-Q3_K_S.gguf), I get the first Laura reply after ~75 s, and ~1 word per second during generation. And I have a five-year-old AMD iGPU (Vulkan backend).
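Taking those numbers at face value (~75 s to the first reply, then about one word per second), a back-of-the-envelope estimate for a reply of a given length looks like this; the linearity assumption is mine:

```python
def estimated_reply_seconds(words: int,
                            first_reply_s: float = 75.0,
                            words_per_s: float = 1.0) -> float:
    # Prompt processing dominates the wait for the first reply; after that,
    # generation time is assumed roughly linear in output length.
    return first_reply_s + words / words_per_s

print(estimated_reply_seconds(60))  # 135.0 seconds for a 60-word first reply
```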

What's the best free API to use with this?
gemini-1.5-pro works best, but for me it stops working after a few minutes. In that case I switch either to gemini-1.5-flash or to a local model.

How do you hook up koboldcpp? I tried and I can't get it to work.
Have you got koboldcpp itself working as described in ?
 

youraccount69

Engaged Member
Donor
Dec 30, 2020
3,691
1,584
Multiic-0.3.67
 
Jun 3, 2023
29
9
Have you got koboldcpp itself working as described in ?
I have used koboldcpp and it still never worked. It would install, and I could start Kobold, but localhost would not load, even when I used a GGUF that was around 1 gigabyte.
 

roland

Newbie
Jul 1, 2017
33
9
Got a nice run with Laura in the bath :)
But at the end it triggers the end of the prologue,
so the Josie, Dina, and Fiona events are cut out.
Intended, or a bug?

2nd:
I got around 10 (not sure, but more than 5) of these multiple-choice thingies to choose from;
these choices take around 2-3 minutes each to pop up.
 

slavegal

Member
Apr 17, 2020
444
549
What an innovative game, but the experience is somewhat frustrating.
It is super laggy; I need to wait minutes for the AI's reply, the conversation is confusing, the pronouns are messy (due to the AI again, I guess), and it is challenging to tell who says what in a multi-character scenario. Despite all that, the story is intriguing, even though it is more like a premise than a story.
 

Denkader

Member
Aug 13, 2024
200
310
Aeneasc

Based on your review, I'm guessing you never reached the poetry event?

I recommend you keep going. You'll find that those sudden changes of scenery you mention are not as arbitrary as they seem ;)
 

Varggoth

New Member
Jan 18, 2018
2
0
I guess that once you have enough experience running LLMs on your own GPU, chatting with different bots, and learning how Kobold and SillyTavern work, you begin to understand a lot about this game and how it works. And a few words for the developer: Glory to Ukraine, dear sir!
 

xbeo

Newbie
May 10, 2017
66
63
What exactly triggers the girls to undress (change image) from a technical standpoint? This seems to work well with the Gemini models, but not at all with other models.
 

Thelec63

New Member
Jun 3, 2022
2
0
I'm using a local model, and it works well with text, etc.
But with a local model, can we get newly AI-generated images? Like infinite possibilities? Because I can only ever trigger the same images of the character.
 

Krama

Newbie
Oct 19, 2019
19
9
At the beginning of day 1 I get a nuclear explosion (after I refuse training, or immediately after training with Laura) and the game starts all over again.
 

roland

Newbie
Jul 1, 2017
33
9
At the beginning of day 1 I get a nuclear explosion (after I refuse training, or immediately after training with Laura) and the game starts all over again.
Play a bit longer after the boom; somewhere before you get to the Fiona bath scene you can choose to play day 0 again or go on to the next chapter.
 

dea667a

New Member
May 3, 2021
14
9
The regenerate button doesn't exist, or is it supporters-only? Sometimes the model goes crazy =)

Anyway, I think it would be good to have an icon showing "waiting for the model" or something. When I'm using my own model I can see it is generating, but people using the interwebz have no clue. I will definitely keep an eye on this game in the future; it looks great.
 