lynn24

New Member
Aug 20, 2024
6
1
thanks man, i hope he gets a bigger, better mac laptop so he can enjoy this game too
I don't think it's about the quality of the built-in GPU. All newer MacBook Pros rely on integrated GPUs, even the most recent ones.
But since this is a very Mac-specific solution, it might very well not be supported. The MacBook Pro in question is from 2021.
 

Cryptist

Member
Aug 20, 2020
428
630
10 GB is just too big. Many, if not most of us, won't waste the download time and disk space on more than 2 or 3 GB, especially for a title we're just trying out.

I have almost 4 terabytes of games on my hard drive. Some I have been following through years of updates, and many of my favourites, with excellent art, smooth animation, and absorbing story lines, are still under 2 GB.
 

reidanota

Active Member
Nov 1, 2021
625
524
10 GB is just too big. Many, if not most of us, won't waste the download time and disk space on more than 2 or 3 GB, especially for a title we're just trying out.

I have almost 4 terabytes of games on my hard drive. Some I have been following through years of updates, and many of my favourites, with excellent art, smooth animation, and absorbing story lines, are still under 2 GB.
The game itself is 1.8GB; it's only the local version that's nearly 10GB, because koboldcpp and the large language model are bundled in. Configuring a local LLM can be a tedious task, and it feels good, for a change, to have everything pre-packaged and done for you.

If the author split the downloads (requiring users to set up koboldcpp themselves and fetch a GGUF model from Hugging Face), it would be impossible to ensure, on their side, that everything worked for every user. Most applications that use koboldcpp to run a local LLM require you to correctly configure both kobold and the front-end. I understand the concern: doing it this way keeps potential supporters from giving up at the tech wall.
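For anyone curious what "everything pre-packaged" actually buys you: a minimal sketch of what a front-end like this typically does, assuming it talks to koboldcpp's standard KoboldAI-compatible HTTP API on the default port (5001). The prompt text and generation settings below are made-up placeholders, not the game's actual code.

```python
# Minimal sketch of how a front-end typically queries a local koboldcpp
# instance. Assumes koboldcpp is already running on its default port (5001)
# with the KoboldAI-compatible API; prompt and settings are placeholders.
import requests

KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # koboldcpp default endpoint

payload = {
    "prompt": "You are a character in a visual novel. Player says: hello.",
    "max_length": 120,            # tokens to generate per reply
    "max_context_length": 4096,   # must fit within the model's context window
    "temperature": 0.7,
}

resp = requests.post(KOBOLD_URL, json=payload, timeout=120)
resp.raise_for_status()
# KoboldAI-style responses come back as {"results": [{"text": "..."}]}
print(resp.json()["results"][0]["text"])
```

The bundled version presumably just ships this server plus the model file and launches both for you, which is why the download balloons to nearly 10GB.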
 

Mister_M

Engaged Member
Apr 2, 2018
2,590
5,356
The game itself is 1.8GB; it's only the local version that's nearly 10GB, because koboldcpp and the large language model are bundled in. Configuring a local LLM can be a tedious task, and it feels good, for a change, to have everything pre-packaged and done for you.
Does that mean I can use any GGUF model I want with the game?
 

reidanota

Active Member
Nov 1, 2021
625
524
Does that mean I can use any GGUF model I want with the game?
I wouldn't say otherwise, but I don't have the game installed at the moment to confirm it. It could be that the game launches koboldcpp by script and expects a specific model file in the installation folder, but you might still be able to trick it by installing a different one and renaming it? Really can't say without looking at it. If the game's settings expose the model and let you choose it, you could probably just point it at a different GGUF. Perhaps if you download the "online" version, and you have your own koboldcpp installed elsewhere, you can still use the game settings to launch against your existing setup, saving the need to download a new model? Something like the sketch below.
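If anyone wants to try that swap, here's roughly what it would look like, assuming koboldcpp's stock command-line flags (--model, --port, --contextsize). All paths, the port, and the model name are placeholder assumptions; whether the game accepts an external instance depends entirely on how it's wired up.

```python
# Rough sketch: launch your own koboldcpp with a different GGUF, then point
# the game (or any front-end) at it. The flags are koboldcpp's standard CLI
# options; every path and port here is a placeholder assumption.
import subprocess

MODEL_PATH = "/models/my-lighter-7b.gguf"   # hypothetical replacement model
KOBOLDCPP = "/opt/koboldcpp/koboldcpp.py"   # wherever your install lives

subprocess.Popen([
    "python", KOBOLDCPP,
    "--model", MODEL_PATH,
    "--port", "5001",          # keep whatever port the front-end expects
    "--contextsize", "4096",   # match or exceed what the game needs
])
# With the server up, the game's settings (if exposed) would just need to
# point at http://localhost:5001 instead of launching the bundled copy.
```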
 

francisdrake

Member
Feb 21, 2019
106
101
Been trying to get this to run with a local 20B model, and I keep getting a "Wake Up/Stay" prompt during the first conversation. What's up with that?
 

Riski23

Newbie
Jul 5, 2018
17
1
OK, this is maybe my 3rd or 4th time trying to play this. Last time I played, on my phone, there was no response on the first dialogue.
 

Ogre

Member
Mar 29, 2018
353
902
I'm trying to run it locally. I keep getting "error: no choices in response data" and blank lines to fill in, along with "error: kobold failed to fetch."

I also get something like "after 4 attempts no response was received. Please try another model." That's not the exact wording.

What am I doing wrong?
 

reidanota

Active Member
Nov 1, 2021
625
524
I'm trying to run it locally. I keep getting "error: no choices in response data" and blank lines to fill in, along with "error: kobold failed to fetch."

I also get something like "after 4 attempts no response was received. Please try another model." That's not the exact wording.

What am I doing wrong?
The game uses a 14B model. I'm not sure this is the cause, but my RTX 3090 sometimes chokes on 12B models, or even 7B, when the prompt gets too long. I don't know how this game handles context and prompting, but when I last played, a good few months back, it worked for a while and then, after a few interactions, the LLM stopped responding. I'm downloading it now to check; maybe you can set a different model in the settings and use something lighter, sacrificing complexity in responses for a faster and perhaps more streamlined experience?
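For what it's worth, those errors read like a front-end retrying a request that never comes back. A minimal sketch of that pattern, with the oldest history trimmed so the prompt stays inside the context window; everything here (the endpoint, the budgets, the four-attempt limit taken from the error text Ogre quoted) is an assumption, not the game's actual code.

```python
# Sketch of a retry loop with context trimming, assuming the front-end talks
# to koboldcpp's KoboldAI-compatible API. The 4-attempt limit mirrors the
# error text quoted above; all numbers and URLs are assumptions.
import requests

KOBOLD_URL = "http://localhost:5001/api/v1/generate"
MAX_ATTEMPTS = 4
CHAR_BUDGET = 6000  # crude stand-in for a real token budget

def generate(history: list[str]) -> str | None:
    for attempt in range(MAX_ATTEMPTS):
        # Drop the oldest exchanges until the prompt fits the budget; an
        # overlong prompt is a common way local models stop responding.
        prompt = "\n".join(history)
        while len(prompt) > CHAR_BUDGET and len(history) > 1:
            history.pop(0)
            prompt = "\n".join(history)
        try:
            resp = requests.post(
                KOBOLD_URL,
                json={"prompt": prompt, "max_length": 120},
                timeout=60,
            )
            resp.raise_for_status()
            results = resp.json().get("results")
            if results:  # empty results ~ "no choices in response data"
                return results[0]["text"]
        except requests.RequestException:
            pass  # server busy or unreachable ~ "kobold failed to fetch"
    return None  # ~ "after 4 attempts no response was received"
```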
 