Sarta

New Member
Jan 25, 2020
13
2
Yes, I read about Gemini. It's easier for me to wait until you've dealt with the AI you were talking about, so I'll wait.
 

Darkwarriorcb

New Member
Dec 25, 2017
7
2
I'm getting frustrated and doubting myself.
I've been trying for days to make a connection between the game and KoboldAI,
but all I get is the same error message every time:
Error: API error: {"detail": {"stream": ["Unknown field."]}}
What am I doing wrong?
I have loaded a model, I have set it to chat mode, and it recognizes the connection.
The character description is also recognized in Memory.
I would appreciate some advice.
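For anyone hitting the same message: the error text names a `stream` field that this KoboldAI build's request schema doesn't recognize. Below is a minimal sketch of a generate call against KoboldAI's /api/v1/generate endpoint, assuming the default local address; the prompt text is made up, and field support can vary between KoboldAI versions:

```python
# Minimal sketch of a KoboldAI generate request (default local
# endpoint assumed; adjust host/port to match your setup).
import requests

KOBOLD_URL = "http://localhost:5000/api/v1/generate"  # assumption: default port

payload = {
    "prompt": "You are Fiona. Describe how your day is going.",  # made-up prompt
    "max_context_length": 2048,  # must not exceed the loaded model's window
    "max_length": 80,            # tokens to generate per reply
    # note: no "stream" field here -- KoboldAI builds that don't support it
    # reject unknown fields with {"detail": {"stream": ["Unknown field."]}}
}

resp = requests.post(KOBOLD_URL, json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```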
 
Sep 21, 2019
45
76
most likely, the context has been exceeded. try setting the system context trimming to the far left position in the settings
u need a model with a context window larger than 8k
I am preparing the integration of the koboldcpp ver for gguf models into the game, larger contexts seem to work better there; I'll release it next week in beta
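To illustrate the trimming the reply above describes, here is a rough sketch (not the game's actual code) of dropping the oldest chat turns so the prompt stays inside the model's context window. The 4-characters-per-token estimate is a crude heuristic, not real tokenization:

```python
# Rough sketch of client-side context trimming; the game's real
# implementation and token accounting are not public.

def estimate_tokens(text: str) -> int:
    # crude heuristic: roughly 4 characters per token for English text
    return max(1, len(text) // 4)

def trim_history(memory: str, turns: list[str],
                 context_limit: int = 8192, reply_budget: int = 200) -> str:
    """Keep the character memory plus as many recent turns as fit."""
    budget = context_limit - reply_budget - estimate_tokens(memory)
    kept: list[str] = []
    for turn in reversed(turns):  # walk from the newest turn backwards
        cost = estimate_tokens(turn)
        if cost > budget:
            break  # everything older than this turn gets dropped
        kept.append(turn)
        budget -= cost
    return memory + "\n" + "\n".join(reversed(kept))
```

Setting the trimming slider "to the far left" presumably makes this kind of cut as aggressive as possible, which is why it helps when the model's window is small.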
 

Darkwarriorcb

New Member
Dec 25, 2017
7
2
Thank you very much for your help.
Now it works!
But is it normal that it takes about 10 minutes for an answer?
My computer isn't weak; it runs the latest games at high graphics settings without any problems.
Have I perhaps missed another setting?
 
Sep 21, 2019
45
76
10 minutes, not seconds? and pls tell me what model?
 

Denkader

Member
Aug 13, 2024
117
188
Definitely minutes. I've had this problem as well. In sandbox mode with Laura in my case (haven't tried the other models yet) using Gemini.

What happens is that it starts fine, but the longer you play, the slower the responses get. I didn't actually time it, but I could easily get up, go pour myself a drink, maybe go to the restroom, and when I came back there still wouldn't be an answer.

Another thing that happens a lot is that you'll wait those ten minutes and then you get a blank answer. You click and it prompts you for your input, as if it totally ignored what you wrote. I usually copy-paste it back in. Often it'll do the same thing though, wait 6-10 minutes, empty response, ask for your input. After 2-3 times of this, I'll save, shut down the game, restart, load, and it'll work again fine until it starts getting stuck again. It's a bit of a pain.

And by the way...

saving and loading only work in story mode and with hardcoded text; unfortunately, anything generated by the neural networks isn't saved yet.
for sandbox mode (character selection from the main screen) and long dialogue sessions within the story, this feature is still in development. the main challenge is to preserve the storyline within the context window without overloading it; this depends on the model, as some have very small contexts, which makes it hard to keep the whole story without summarizing or omitting details
This surprised me as I've been saving my sandbox progress with Laura and so far she does seem to remember all that's happened before. Happy about that too as it'd be annoying having to start over from scratch heh.
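For what it's worth, the quoted explanation suggests the hard part is not writing the dialogue to disk but fitting it back into the context window on load. A hypothetical sketch of the storage half, assuming a simple JSON save file (the file name and structure here are invented, not the game's actual format):

```python
# Hypothetical sketch of persisting generated dialogue as JSON;
# the game's actual save format is not public.
import json
from pathlib import Path

SAVE_PATH = Path("sandbox_save.json")  # invented file name

def save_dialogue(memory: str, turns: list[str]) -> None:
    # store the character memory plus the full generated history
    SAVE_PATH.write_text(json.dumps({"memory": memory, "turns": turns}, indent=2))

def load_dialogue() -> tuple[str, list[str]]:
    data = json.loads(SAVE_PATH.read_text())
    # on load, long histories still have to be trimmed or summarized
    # to fit the model's context window (see the trimming sketch above)
    return data["memory"], data["turns"]
```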
 

Darkwarriorcb

New Member
Dec 25, 2017
7
2
I have tried this model: Janeway FSD 6.7B
I wish it had been seconds, but it was minutes, which is why I assumed a setting was still missing.
I will try again with the new beta later; I hope it works without problems for me.
Keep up your great work (y)


EDIT:
I have now tested the new beta and have to say that it works well and quickly so far,
but unfortunately the answers of the model I tested are not exactly accurate.
I asked how Fiona is doing and got the following answer:

Maria?
I want to watch TV!
But that tastes good.
I love pizza.

I think I'll try another model.
Unfortunately, mistral-7b-instruct-v0.1.Q5_K_M didn't work for me.
 
Last edited:

Klony

Newbie
Oct 8, 2019
42
37
y, there are issues with the minigame on android (
I'll try to add jumping, improve performance, fix the controls, and fix weapon selection within the next 1-2 updates.
the idea is to make the minigame somewhat souls-like; the reward is small, but giving it without completing the minigame would be unfair

pls let me know the specific moments where you couldn't progress. the skip dialog button currently only works in the sexualized long segments. I have already fixed the issue with the "⏩" trigger in an upcoming update and will double-check it

stream mode allows u to receive responses in real time: the bot's reply is transmitted in chunks of text as it is being generated, rather than all at once after the request processing is complete

thank u )
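For the stream mode described in the quote above, here is a sketch of reading a streamed reply from koboldcpp's server-sent-events endpoint. The /api/extra/generate/stream path and the token field follow koboldcpp's extra API as I understand it; verify both against your koboldcpp version:

```python
# Sketch of consuming a streamed koboldcpp reply chunk by chunk
# (endpoint path, port, and payload keys assumed from koboldcpp's
# "extra" API; check your build's documentation).
import json
import requests

URL = "http://localhost:5001/api/extra/generate/stream"  # default koboldcpp port

payload = {"prompt": "User: Hi!\nBot:", "max_length": 120}

with requests.post(URL, json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive as "data: {...}" lines; ignore everything else
        if line and line.startswith("data:"):
            chunk = json.loads(line[len("data:"):])
            print(chunk.get("token", ""), end="", flush=True)
```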
I killed every available enemy on the map and escorted the princess to the start position, but the game didn't end. How do you win this?
 

craigman1211

Newbie
Apr 7, 2018
17
61
Really cool concept, works great. I would like to talk to you about the local LLM topic for a moment and try to make a case for it. What you have created here is a novel approach to a concept that I think is in very high demand right now. KoboldCPP, LlamaCPP, Ollama, TheBlokeAI, Silly Tavern, Chub.ai, and many others... all of these Discords are about locally run models, and they number in the tens of thousands of active members. Silly Tavern and Kobold are both highly focused on offering an experience similar to what you are offering, but given the curated nature of your game, I believe you are offering something superior to what they are building.

Although the scope is narrow, if you can proof-of-concept a game like this and make it accessible to people who don't want to spend hundreds of dollars on generating tokens, I think you have something that will be very popular. I understand that it isn't as simple as just adding support and that it could take some extra effort and programming, but if you intend to make anything in the future along this same line, then a local LLM will be super attractive to many of the enthusiasts out there.

As for the kind of computer you need for a local LLM: I have a computer running a 980 Ti and I am able to get a 7B model working very well with KoboldCPP. I would say the bottom threshold for a local LLM is around 4-6 GB of VRAM, although you could run it off of the CPU, but that is slower. The issue is not really power, it's time; on a low-powered system you must be patient, because responses could take a minute or two to generate depending on the context length.

Just please give it some thought. I really think it will add tremendous value to your game, and it likely will not be as difficult to integrate as you might think.
 
Sep 21, 2019
45
76
thank u ) the local models with koboldcpp are under development. this week, I’ll implement streaming output and proper chat mode behavior. I expect that in 2–3 weeks, the functionality for local models will approach that of the standard models in the game and move out of beta
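On "proper chat mode behavior": the usual approach is a turn-formatted prompt plus stop sequences, so the model stops before it starts writing the player's next line. A hedged sketch against koboldcpp's generate endpoint (the stop_sequence field exists in koboldcpp's API, but exact field names and behavior may differ between versions; the User/Bot labels are illustrative):

```python
# Sketch of chat-style prompting with a stop sequence via koboldcpp
# (field names assumed from koboldcpp's generate API; verify locally).
import requests

def chat_turn(history: str, user_msg: str) -> str:
    prompt = f"{history}\nUser: {user_msg}\nBot:"
    payload = {
        "prompt": prompt,
        "max_length": 120,
        "stop_sequence": ["\nUser:"],  # stop before the model speaks for the player
    }
    resp = requests.post("http://localhost:5001/api/v1/generate",
                         json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["results"][0]["text"].strip()
```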
 
  • Like
Reactions: Darkwarriorcb

craigman1211

Newbie
Apr 7, 2018
17
61
That's great to hear! I think that's an amazing decision. Can't wait to try it out.
 
3.80 star(s) 4 Votes