I'm getting frustrated and doubting myself.
I've been trying for days to make a connection between the game and KoboldAI.
But every time, all I get is this error message:
Error: API error: {"detail": {"stream": ["Unknown field."]}}
What am I doing wrong?
I have loaded a model and set it to chat mode, and it also recognizes the connection.
The description of the character is also recognized in Memory.
I would appreciate some advice.
Thank you very much for your help.

most likely, the context has been exceeded. try setting the system context trimming to the far left position in the settings
u need a model with a context window larger than 8k
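For anyone hitting the same error: the message itself says the server rejected an unrecognized stream key, since the classic KoboldAI /api/v1/generate endpoint validates its input strictly. Below is a minimal sketch of a request it should accept; the URL, prompt, and parameter values are placeholder assumptions, and the field names follow the standard KoboldAI generate schema.

```python
import requests

# placeholder endpoint; KoboldAI United listens on port 5000 by default
API_URL = "http://127.0.0.1:5000/api/v1/generate"

payload = {
    "prompt": "You are Laura. Greet the player.",  # placeholder prompt
    "max_context_length": 2048,  # history budget; keep within the model's window
    "max_length": 80,            # tokens to generate for the reply
    "temperature": 0.7,
}
# no "stream" key here: the classic endpoint rejects unknown keys with
# {"detail": {"stream": ["Unknown field."]}}

resp = requests.post(API_URL, json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```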
I am preparing the integration of the koboldcpp version for gguf models into the game; larger contexts seem to work better there. I'll release it next week in beta
Thank you very much for your help.
Now it works.
But is it normal that it takes about 10 minutes to get an answer?
My computer isn't weak; it runs the latest games at high graphics settings without any problems.
Have I perhaps forgotten another setting?
10 minutes, not seconds? and pls tell me what model?

Definitely minutes. I've had this problem as well. In sandbox mode with Laura in my case (haven't tried the other models yet) using Gemini.
This surprised me, as I've been saving my sandbox progress with Laura and so far she does seem to remember all that's happened before. Happy about that too, as it'd be annoying having to start over from scratch heh.

saving and loading only work in story mode and with hardcoded text; unfortunately, anything generated by the neural networks isn't saved yet.
for sandbox mode (character selection from the main screen) and long dialogue sessions within the story, this feature is still in development. the main challenge is to preserve the storyline within the context window without overloading it. this depends on the model: some have a very small context, which makes it hard to keep the whole story without summarizing or omitting details
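As a rough illustration of that challenge (not the game's actual code), here is a naive trimming sketch: drop the oldest chat turns until the prompt fits an assumed token budget. The 4-characters-per-token estimate and all names here are illustrative assumptions; a real client would use the model's tokenizer and, ideally, summarize instead of discarding.

```python
def estimate_tokens(text: str) -> int:
    """Very rough estimate (~4 characters per token); a real client
    should count with the model's tokenizer."""
    return len(text) // 4

def trim_history(system_prompt: str, turns: list[str], context_budget: int) -> str:
    """Drop the oldest turns until system prompt + history fit the budget.
    This loses early-story details, which is why summarizing is the
    harder but better fix."""
    kept = list(turns)
    while kept and estimate_tokens(system_prompt) + sum(map(estimate_tokens, kept)) > context_budget:
        kept.pop(0)  # discard the oldest exchange first
    return "\n".join([system_prompt, *kept])

# usage: a tiny budget to force trimming in this demo
prompt = trim_history(
    "You are Laura, a companion character.",
    ["Player: hi", "Laura: hello!", "Player: let's explore the cave"],
    context_budget=16,
)
print(prompt)  # keeps only the most recent turn(s) that still fit
```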
I have tried this model: Janeway FSD 6.7B
I killed every available enemy on the map and escorted the princess to the start position, but the game didn't end. How do you win this?

yes, there are issues with the minigame on android (
I'll try to add jumping, improve performance, fix the controls, and fix weapon selection within the next 1-2 updates.
the idea is to make the minigame somewhat souls-like; the reward is small, but giving it without completing the minigame would be unfair
pls let me know the specific moments where you couldn't progress. the skip dialog button currently only works in the long sexualized segments. I have already fixed the issue with the "⏩" trigger in an upcoming update and will double-check it
stream mode allows u to receive responses in real time: the bot's reply is transmitted in chunks of text as it is being generated, rather than all at once after the whole request has been processed
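For the curious, this is roughly what stream mode looks like on the client side. The sketch below assumes koboldcpp's server-sent-events endpoint (/api/extra/generate/stream) and its data: {"token": ...} frame format; verify both against the API docs of your koboldcpp version.

```python
import json
import requests

# assumed koboldcpp streaming endpoint (default port 5001); verify for your version
URL = "http://127.0.0.1:5001/api/extra/generate/stream"

payload = {"prompt": "Laura looks at the T-rex statue and says:", "max_length": 60}

with requests.post(URL, json=payload, stream=True, timeout=600) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames look like: data: {"token": "..."}
        if line and line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())
            print(event.get("token", ""), end="", flush=True)
print()
```

Each frame carries the next chunk of text, so the reply can be shown as it is generated instead of after the whole request finishes.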
thank u )
u need to approach the large trex statue
Really cool concept, works great. I would like to talk to you about the local LLM topic for a moment and try to make a case for it. What you have created here is a novel approach to a concept that I think is in very high demand right now: KoboldCPP, LlamaCPP, Ollama, TheBlokeAI, Silly Tavern, Chub.ai, and many others... all of these Discords are about locally run models, and they number in the tens of thousands of active members. Silly Tavern and Kobold are both highly focused on offering an experience similar to what you are offering, but given the curated nature of your game, I believe you are offering something superior to what they are building. Although the scope is narrow, if you can proof-of-concept a game like this and make it accessible to people who don't want to spend hundreds of dollars on generating tokens, I think you have something that will be very popular.

I understand that it isn't as simple as just adding support and that it could take some extra effort and programming, but if you intend to make anything along this same line in the future, then a local LLM will be super attractive to many of the enthusiasts out there. As for the kind of computer you need for a local LLM, I have a computer running a 980 Ti and I am able to get a 7B model working very well with koboldcpp. I would say the bottom threshold for a local LLM is around 4-6 GB of VRAM, although you could run it off the CPU, which is slower. The issue is not really power, it's time: with a low-powered system you must be patient, because responses can take a minute or two to generate depending on the context length. Just please give it some thought. I really think it will add tremendous value to your game, and it likely will not be as difficult to integrate as you might think.

thank u ) the local models with koboldcpp are under development. this week, I'll implement streaming output and proper chat mode behavior. I expect that in 2-3 weeks, the functionality for local models will approach that of the standard models in the game and move out of beta
That's great to hear! I think that's an amazing decision. Can't wait to try it out.