4.30 star(s) 12 Votes

BubblesTheAllMighty

Active Member
Sep 22, 2020
854
922
I think the server died again.
One moment the game's working fine for a few hours, now it fails to get past starting a game, with constant AI errors and "failed to process AI request" notifications.
 

tacosnap123

Active Member
Mar 3, 2018
802
221
So I guess it's possible for people's custom worlds to be so big they take up all the memory. I set my tokens to 30k, and even just trying to start, it says it exceeded the memory limit, even when setting it to single-paragraph events. OK, why is all the memory getting eaten by output tokens? This wasn't a problem last night on version 10.
 
Last edited:

Nouvi

Member
Jul 24, 2018
146
268
So I guess it's possible for people's custom worlds to be so big they take up all the memory. I set my tokens to 30k, and even just trying to start, it says it exceeded the memory limit, even when setting it to single-paragraph events. OK, why is all the memory getting eaten by output tokens? This wasn't a problem last night on version 10.
Sounds like you need to increase the context length in LMStudio, not change endpoint stuff in the game. If you're having those issues in the online version, that world is basically not supported there.
If you're running LMStudio, check the console log there to see if it says something like "Needed x tokens, but only y is allowed".
 

tacosnap123

Active Member
Mar 3, 2018
802
221
Sounds like you need to increase the context length in LMStudio, not change endpoint stuff in the game. If you're having those issues in the online version, that world is basically not supported there.
If you're running LMStudio, check the console log there to see if it says something like "Needed x tokens, but only y is allowed".
I was adjusting the context length, but even in the little world I made, it still seems like everything gets eaten by output tokens now. I edited both the context length in LMS and the endpoint in the settings menu.
 

tacosnap123

Active Member
Mar 3, 2018
802
221
Sounds like you need to increase the context length in LMStudio, not change endpoint stuff in the game. If you're having those issues in the online version, that world is basically not supported there.
If you're running LMStudio, check the console log there to see if it says something like "Needed x tokens, but only y is allowed".
Funny test: the world I wanted to try, I ran it online, and it said (with simple math, rounded up) 8k to start the game. In LMS on my own I gave myself 30k context length, more than triple what's needed, but on localhost that's not nearly enough; I think I had to give myself 75k just to get it to start, and even then that was barely enough and slowed down my machine.
Second note: in the online version, changing only the single-paragraphs setting in my world, my output token % is 26.3, but running it locally using LMS the output token % is 96%. Is there a setting I screwed up somewhere, or is it a module bug or an LMS bug? (lmstudio-community/Mistral-Nemo-Instruct-2407-GGUF is the model I use.)
 
Last edited:

Serpream

Newbie
Jun 7, 2018
74
27
How intensely could I abuse this dictionary?

For example: I've got custom stats and use those for the basics, but use the world rules to establish when and how each stat should be applied, and what for specifically. Can the dictionary handle that?

I am over 7,100 tokens, and it would be fantastic to offload these definitions/rules so they're used only when needed.
 

fierylion

Member
Game Developer
Jan 18, 2019
269
471
How intensely could I abuse this dictionary?

For example: I've got custom stats and use those for the basics, but use the world rules to establish when and how each stat should be applied, and what for specifically. Can the dictionary handle that?

I am over 7,100 tokens, and it would be fantastic to offload these definitions/rules so they're used only when needed.
For now the dictionary is just a simple dictionary: you define one or more trigger phrases, and if a phrase comes up in the game text or a player action, the dictionary will inject the value of that phrase into the AI prompt (specifically, after the location information).
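The mechanism described above can be sketched roughly like this. This is a hypothetical illustration, not the game's actual code: the function name, the `[LOCATION]` marker, and the sample entries are all made up for the example.

```python
# Hypothetical sketch of a trigger-phrase dictionary: scan the latest game
# text / player action for each trigger phrase, then inject the matching
# values into the prompt right after the location information.
def inject_dictionary(prompt: str, game_text: str, dictionary: dict,
                      location_marker: str = "[LOCATION]") -> str:
    """Insert dictionary values whose trigger phrases appear in game_text."""
    text = game_text.lower()
    hits = [value for phrase, value in dictionary.items()
            if phrase.lower() in text]
    if not hits:
        return prompt
    injection = "\n".join(hits)
    idx = prompt.find(location_marker)
    if idx == -1:
        # No location block found: append the lore at the end instead.
        return prompt + "\n" + injection
    end = idx + len(location_marker)
    return prompt[:end] + "\n" + injection + prompt[end:]

# Example: the phrase "iron sword" in the player's action triggers its entry.
dictionary = {"iron sword": "The iron sword is cursed and whispers at night."}
prompt = "You are the narrator.\n[LOCATION] A dark cave.\nContinue the story."
out = inject_dictionary(prompt, "The player draws the iron sword.", dictionary)
```

Since only matched entries are injected per turn, a large dictionary costs tokens only when its phrases actually come up, which is what makes it useful for offloading rules.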
 
  • Like
Reactions: Serpream

fierylion

Member
Game Developer
Jan 18, 2019
269
471
Funny test: the world I wanted to try, I ran it online, and it said (with simple math, rounded up) 8k to start the game. In LMS on my own I gave myself 30k context length, more than triple what's needed, but on localhost that's not nearly enough; I think I had to give myself 75k just to get it to start, and even then that was barely enough and slowed down my machine.
Second note: in the online version, changing only the single-paragraphs setting in my world, my output token % is 26.3, but running it locally using LMS the output token % is 96%. Is there a setting I screwed up somewhere, or is it a module bug or an LMS bug? (lmstudio-community/Mistral-Nemo-Instruct-2407-GGUF is the model I use.)
Did you accidentally set maxoutputtokens in the game settings to a very high number?
 

tacosnap123

Active Member
Mar 3, 2018
802
221
Did you accidentally set maxoutputtokens in the game settings to a very high number?
I might have. I tried to keep it a couple thousand under the max context length, mimicking how you had it originally, but with a lot more tokens. But that shouldn't be the issue, as that's how I normally played and it was fine. If someone can send me version 9, I can see how it runs there; and if I can get version 10 again, I can see how different the starting memory is from 10 to 11.
 

Breedking

Newbie
Jul 7, 2024
54
35
Neato little game, been playing it for like 3 days, and set up my own AI (using TheBloke/Mistral-7B-Instruct-v0.2-GGUF mostly, cuz potato PC) since the online version was kinda janky last I used it.

Had an idea for a feature request though, based on my failed attempts at getting the AI to remember stuff:

Allow the System Prompts to be customized, so you can have a step be specifically "record the player's equipment" instead of shoving it in with the narrator and hoping he doesn't start recording everything too (tell him to record equipment, and he'll replace or ignore the stat system).

Maybe also some JavaScript control over it too, like checking the player's stats, and if they don't have magic, don't ask the AI to consider magic at all; or if they DO have magic, have the AI tidy up a nice list of spells to cast and add them as options!

Or make a summarizer step, and feed the summary of the current scene to the other steps like Options, Stats, Location, etc. so they only eat like 100 tokens, instead of eating 1000 tokens, only to spit out "Stamina -5", and a whole paragraph explaining why adrenaline may or may not cause health issues.

(maybe let players set the AI up with note-taking abilities themselves, since I think that was planned)

(also the ability to save and load System Prompt presets, so if you have a good set of instructions, but wanna keep experimenting, you don't have to record them outside the game)
You can add specific things to the Notes tab so the AI gets that as a hidden part of the prompt each time you submit something. No idea what the limits are, but I've gotten it to remember a paragraph of stuff consistently on the abliterated nemo I'm running.
 

AndonTheBest

New Member
Dec 16, 2021
12
4
You can add specific things to the Notes tab so the AI gets that as a hidden part of the prompt each time you submit something. No idea what the limits are, but I've gotten it to remember a paragraph of stuff consistently on the abliterated nemo I'm running.
I know of the Notes tab, but I have a shitty PC and the narrator is stupid; I can't really run a better model without each scene taking 5 minutes to generate, so the narrator ignores shit sometimes.

My suggestion was control over the actual generation steps, so you can delete a step, like fully deleting the Options step (I know 'DISABLE' works there too, just making an example).

Or make a new step that makes a summary, and feeds it to the other steps, which could make the other steps take less time, because they only get a summary of the events, not the full thing.

Or a step dedicated to tracking a specific thing, like equipment, spells, known locations..

A way to optimize what the model is fed, so the model can do its job better, or faster, would be great for my potato PC.


.. and while I'm here, extra suggestion: more JavaScript stuff please, making the description of an entity or location vary based on stats or other factors would be great.

The narrator sees the player struggling and says YES SIR! and lets them win, when I WANT to lose! let me just fucking describe shit as "COMPLETELY INVINCIBLE" if the player has shitty stats.
 

tacosnap123

Active Member
Mar 3, 2018
802
221
Did you accidentally set maxoutputtokens in the game settings to a very high number?
OK, so maybe it was the output max tokens then. What's a good balance to put in the endpoints? I currently have 50k in max memory and context length in LMS, and for testing I set the endpoint tokens to 10k.
Second: what's a good set of settings to use to boost performance / make it spit out the responses faster?
 
Last edited:

fierylion

Member
Game Developer
Jan 18, 2019
269
471
OK, so maybe it was the output max tokens then. What's a good balance to put in the endpoints? I currently have 50k in max memory and context length in LMS, and for testing I set the endpoint tokens to 10k.
Second:
maxoutputtokens should still stay at 1024 unless you want the AI to write longer responses; 1024 is enough for a high-school essay.

what's a good set of settings to use to boost performance / make it spit out the responses faster?
The game settings can't affect your AI speed; either choose a smaller model (smaller models are faster) or a quantized version of your model.
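The advice above comes down to simple budget arithmetic: the model's context window has to hold both the prompt and the space reserved for the response, so a huge max output tokens setting starves the prompt side and inflates the output-token percentage. A rough sketch (the function name is made up, and this is an assumed model of the budget, not the game's exact formula):

```python
# Assumed token-budget model: the context window is split between the
# prompt and the reservation for the response (max output tokens).
def output_token_share(context_length: int, max_output_tokens: int) -> float:
    """Fraction of the context window reserved for the response."""
    return max_output_tokens / context_length

# With max_output_tokens left at the recommended 1024 on an 8k window,
# the output reservation is small:
print(round(output_token_share(8192, 1024) * 100, 1))    # 12.5 (%)

# Setting it close to the window size is what drives the share toward ~96%,
# leaving almost nothing for the world prompt:
print(round(output_token_share(30000, 28800) * 100, 1))  # 96.0 (%)
```

This would explain why raising the context length alone didn't help: if the endpoint's max output tokens scales up alongside it, the output reservation keeps eating the window.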
 

BlackNBlue

Newbie
Apr 24, 2023
23
12
fierylion : Do you plan to build a game editor?

Example:
Let's say you have a game-ready world.
NPCs.
Enemies.
Items.

All is set up, then you just inject AI to bring everything to life.
The AI does not need to know everything; it just needs to move the NPCs with like 50 tokens each.
It only needs to know what the NPC experienced, like when, where & what event occurred.

You could also set a timer for the NPCs to forget some events, like woodcutting or going to the toilet.
Or some details could get a memory-loss timer, like how the person I met on the street 5 minutes ago looked.

Would be pretty realistic already.

This AI could always be in a rollback status, ready to entertain the player based on the location, characters and items.
You could also enable a conversation function, where NPCs discuss events when they trust each other. And so on.

It all depends on when and how the AI is needed.
 
Last edited:

Moriko

New Member
Sep 18, 2017
8
19
I just wanted to say, I've been messing around with AITavern style projects for a few years now, and your game Formamorph and Fox's Infinite Worlds are the two I've been most impressed by.

Best of luck
 
  • Heart
Reactions: fierylion

tacosnap123

Active Member
Mar 3, 2018
802
221
Use a bigger AI model; small models tend to go into this weird loop. For a quick fix, you can just roll back to the previous page and submit the action again.
maxoutputtokens should still stay at 1024 unless you want the AI to write longer responses; 1024 is enough for a high-school essay.


The game settings can't affect your AI speed; either choose a smaller model (smaller models are faster) or a quantized version of your model.
So nothing in LM Studio needs to change? Cuz I did my own world prompt and was getting replies at a decent pace, but when I did the downloaded world, I think it was called Futa Fantasy, it took a good 5+ minutes to generate a response. No clue if it's cuz it has so many auto-generated responses or if it's the model. And third, since I followed the guide: if I did need a smaller model, which one would you recommend?
 