BackFatMan
New Member
- Jun 25, 2020
> This is all I see when I press the start game button
can you screenshot your log tab? There was probably an error
"basd" instead of "based". Probably doesn't trip up the AI though.Note that stat changes are optional, only update stats that actually change basd on the game events.
fixed next updateNoticed a small typo in the Stat Updates prompt:
"basd" instead of "based". Probably doesn't trip up the AI though.
> can you screenshot your log tab? There was probably an error
here
> here
strange! Do you get the same bug if you play the web version?
> I'm also getting the same bug in the client and on the web; it only happens when I try to set up the AI locally using OpenRouter
what model did you use on openrouter?
> what model did you use on openrouter?
mistralai/mistral-7b-instruct:free
> mistralai/mistral-7b-instruct:free
can you screenshot your endpoint settings? Strange, it should work
> can you screenshot your endpoint settings? Strange, it should work
yeah, sure
This is standard with LLM settings: max memory is the total number of tokens the model can hold in its context (prompt plus reply), and max output is the number of tokens the model can generate in one reply.
For most models, the max output is only a small fraction of the max memory. For example, Mistral Nemo has a whopping 128K memory, but can only output 16K max.
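The relationship between the two limits can be sketched in code. This is a minimal illustration assuming an OpenAI-style request body of the kind OpenRouter accepts; the model name, field names, and token numbers here are illustrative, not the thread's exact settings:

```python
# Sketch: how "max memory" (context window) and "max output" interact
# when building a chat completion request. Numbers are illustrative.
import json

CONTEXT_WINDOW = 128_000   # total tokens the model can hold: prompt + reply
MAX_OUTPUT = 16_000        # most tokens the model may generate in one reply

def build_request(prompt_tokens: int) -> dict:
    """Build a request body, capping the reply so prompt + reply fit in context."""
    # The reply budget is whatever context remains after the prompt,
    # but never more than the model's own output ceiling.
    reply_budget = min(MAX_OUTPUT, CONTEXT_WINDOW - prompt_tokens)
    if reply_budget <= 0:
        raise ValueError("prompt alone exceeds the model's context window")
    return {
        "model": "mistralai/mistral-nemo",  # hypothetical endpoint name
        "max_tokens": reply_budget,
        "messages": [{"role": "user", "content": "..."}],
    }

# A 120K-token prompt leaves only 8K of context, so the reply is capped
# below the 16K output ceiling.
body = build_request(prompt_tokens=120_000)
print(json.dumps(body["max_tokens"]))  # 8000
```

So a short prompt gets the full 16K reply budget, while a prompt near the memory limit squeezes the reply down to whatever context is left.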
wait so mistral can only do 16k as an endpoint or was that just an example
> wait so mistral can only do 16k as an endpoint or was that just an example
mistral nemo has a maximum of 128K memory, but can only write up to 16k tokens in one go
> For Fantasy Futanari I get the error (Request failed (400). Either model name is wrong or memory limit exceeded model limit.) How do I fix it?
the world text is too long and exceeds the default AI memory limit. There's no way to work around this; you need to use your own AI model.
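That 400 can be anticipated before sending: estimate the prompt's token count and compare it against the endpoint's context limit. The sketch below uses the common rule of thumb of roughly four characters per token for English text (an approximation, not an exact count), and the ~4,096-token default mentioned in this thread; the function names are hypothetical:

```python
# Rough pre-flight check for the "memory limit exceeded" 400 error.
# Token counts are estimated at ~4 characters per token (rule of thumb).
DEFAULT_CONTEXT_LIMIT = 4_096   # default memory limit discussed in the thread

def estimate_tokens(text: str) -> int:
    """Crude token estimate: about 4 characters per token for English."""
    return max(1, len(text) // 4)

def fits_in_context(world_text: str, context_limit: int = DEFAULT_CONTEXT_LIMIT) -> bool:
    """True if the world text should fit inside the given context limit."""
    return estimate_tokens(world_text) <= context_limit

big_world = "x" * 40_000                    # ~10,000 estimated tokens
print(fits_in_context(big_world))           # False: would trip the 400 at the default
print(fits_in_context(big_world, 16_384))   # True: fits a 16K-context endpoint
```

This is only a heuristic; the real fix, as noted above, is pointing the game at a model whose context window is large enough for the world text.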
> yeah, sure
alright, i figured it out. had to create a new api key through the Mistral page. an idiot, i am
> alright, i figured it out. had to create a new api key through the Mistral page
never mind, it stopped working again
> the world text is too long and exceeds the default AI memory limit. There's no way to work around this; you need to use your own AI model.
for that scenario, can you look at the world rules and see if there's anything I could trim so it replies faster? For some reason that's the only scenario at the moment that's way too long to load replies. I would just gut what I can, but I have no clue how badly it would break it.
> never mind, it stopped working again
there's a daily usage limit for free openrouter models
> for that scenario, can you look at the world rules and see if there's anything I could trim so it replies faster?
That's my world, sorry. It needs well above ~4,096 context length to run. I usually run it at 16,384, but the longer the better, obviously. I'd like to use the Dictionary feature to cut down a bit, but I'm not quite sure how to best utilize it in its current state.