> Fox's Infinite Worlds
The problem with that one is that it only runs on powerful online AIs, so while it reads better, you're gatekept by having to acquire credits.
> It sounds like the in-game memory limit was set too high; make sure the memory limit is set to what your PC can handle
How do I do that exactly? I can handle a lot, but if I need to trim some tokens to get a bit faster response time then I will, because I really have no clue what I'm doing in LM Studio other than more tokens = more story. So what exactly do GPU offload (out of 30) and CPU thread pool size mean?
I'm not experienced with LM Studio, only used it a few times. Hopefully you can get help with the LM Studio parameters on their Discord or forum.
Ah, it seemed easy following the guide someone posted.
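For reference, "GPU offload" and "CPU thread pool size" are the standard llama.cpp loader settings that LM Studio exposes in its UI. Here is a minimal sketch of the same two knobs through llama-cpp-python, which wraps the same engine; the model path and numbers are placeholder assumptions, not values from this thread:

```python
# Sketch of the two knobs LM Studio calls "GPU offload" and "CPU thread pool
# size", shown here via llama-cpp-python. Model path and values are examples.
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",  # hypothetical local model file
    n_gpu_layers=20,          # "GPU offload": how many of the model's layers
                              # (e.g. 20 out of 30) run on the GPU; the rest
                              # stay on the CPU. More layers on GPU = faster,
                              # but needs more VRAM.
    n_threads=6,              # "CPU thread pool size": how many CPU threads
                              # work on the layers left on the CPU.
    n_ctx=4096,               # context window ("memory"), in tokens
)

print(llm("You enter the tavern.", max_tokens=128)["choices"][0]["text"])
```

In short, layers offloaded to the GPU run faster but cost VRAM, and the thread count only affects whatever stays on the CPU.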
> A "continue" button would go a long way; sometimes it just stops where it'd be better to keep going, even with multiple paragraphs enabled.
You can type 'continue'.
> One thing fierylion should consider is to add a few small validations on game load, to make sure the save being loaded is from the active world.
That's strange, you don't see this warning?
Adding versioning to the world definition and validating the save's 'world version' should also be considered
Just a warning will suffice when detecting a mismatched world or version (at least for now)
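A rough sketch of what those load-time checks could look like; the save fields, world identifier, and function name are invented for illustration, not the game's actual code:

```python
# Hypothetical load-time checks: warn when a save comes from a different
# world, or from an older world version. All names here are made up.

WORLD_ID = "fox-tavern"   # identifier of the currently active world
WORLD_VERSION = 3         # bumped whenever the world definition changes

def validate_save(save: dict) -> list[str]:
    """Return a list of warnings; an empty list means the save looks compatible."""
    warnings = []
    if save.get("world_id") != WORLD_ID:
        warnings.append(
            f"This save is from world '{save.get('world_id')}', "
            f"but the active world is '{WORLD_ID}'."
        )
    if save.get("world_version", 0) != WORLD_VERSION:
        warnings.append(
            f"This save was made with world version {save.get('world_version')}, "
            f"but the active world is version {WORLD_VERSION}."
        )
    return warnings

# On load: show the warnings but still let the player continue.
for message in validate_save({"world_id": "old-world", "world_version": 1}):
    print("Warning:", message)
```

Keeping this as a warning rather than a hard block matches the "at least for now" suggestion above.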
> And the server went down again... Does anyone know or have a solution for an online server? My PC is as good as a microwave to run a local server
How old is your PC? It shouldn't take that much power to run it locally.
> It sounds like the in-game memory limit was set too high; make sure the memory limit is set to what your PC can handle
That's output tokens, correct? Because yeah, that other story takes a good chunk of time to come up with its responses, and I have no idea if I should mess with the world settings and reduce how many prompts it tries to give, because it gives like more than 5 auto responses.
And the server went down again... Does anyone know or have a solution for an online server? My PC is as good as a microwave to run a local server
> you can type 'continue'
What determines how long of a response the AI will write? Is that memory/token configs alone, or is there something in the world/game settings too?
It's not really possible to allow the AI to continue writing from the previous game text; most AI endpoints don't support this, unfortunately.
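In practice, a "continue" command usually gets implemented by resending the previous game text as context together with a new instruction, rather than resuming the unfinished output. A minimal sketch against an OpenAI-compatible endpoint (LM Studio serves one locally); the URL, model name, and text below are placeholder assumptions:

```python
# Sketch of a "continue" command when the endpoint cannot resume an unfinished
# completion: the previous game text is sent back as an assistant message,
# followed by a new "continue" instruction. Values here are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

previous_game_text = "The innkeeper slides a key across the counter and"

response = client.chat.completions.create(
    model="local-model",
    messages=[
        {"role": "system", "content": "You are the narrator of a text adventure."},
        {"role": "assistant", "content": previous_game_text},
        {"role": "user", "content": "continue"},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```

The trade-off is that the model writes a fresh reply informed by the old text rather than literally appending tokens to it, which is why the result can read slightly differently from a true continuation.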
> What determines how long of a response the AI will write? Is that memory/token configs alone, or is there something in the world/game settings too?
Max output tokens, in the settings.
I'm a little confused by that setting: what's the difference between "max memory" and "max output tokens"? It seems that max output tokens has to be smaller than max memory, so why have two separate settings?
However, by default the AI will only produce one paragraph of game text. You can disable this and allow longer responses by unchecking One Paragraph Responses in the settings.
> I'm a little confused by that setting: what's the difference between "max memory" and "max output tokens"? It seems that max output tokens has to be smaller than max memory, so why have two separate settings?
This is standard with LLM settings: max memory = the total amount of memory the model can have.
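To spell out the usual relationship (this is the general convention for LLM settings, not a statement about this game's exact internals): max memory is the model's total context window, shared between the prompt (world info, memory entries, recent story) and the reply, while max output tokens reserves the share of that window the reply may use, which is why it has to be smaller. A back-of-the-envelope sketch with made-up numbers:

```python
# Why "max output tokens" has to fit inside "max memory" (the context window).
max_memory = 4096        # total tokens the model can hold at once
max_output_tokens = 400  # tokens reserved for the AI's reply

# Whatever is left over is the budget for the prompt: world description,
# memory entries, and recent story text.
prompt_budget = max_memory - max_output_tokens
print(f"Prompt can use up to {prompt_budget} tokens")  # 3696

# If the story history exceeds the budget, the oldest part has to be trimmed
# before the request is sent, or the model runs out of context.
story_tokens = 5000
if story_tokens > prompt_budget:
    print(f"Trim about {story_tokens - prompt_budget} tokens of old story text")
```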
> I can't get past the start game button, it just doesn't load anything. Is the game having an issue or did I do something bad? It was working just fine, then it just stopped working. I hit the start game button; it loads for a few seconds then stops without loading anything.
What do you mean it loads for a few seconds then stops? Do you see any text being printed? Is there an error message?