> tried to register an account and got this: Unexpected token '<', "<!DOCTYPE "... is not valid JSON. I'm using version 1.1.11 of the downloadable version with a local AI LLM.
Likely the server crashed and is serving an error page (HTML) instead of returning a proper JSON response.
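That error is exactly what you get when a client calls response.json() / JSON.parse on an HTML body: the first character of "<!DOCTYPE html>" is '<', which can never start valid JSON. A minimal sketch of a client-side guard, assuming a hypothetical /api/register endpoint (not the game's actual code):

// register.ts - hedged sketch; the endpoint name and payload shape are made up
async function register(email: string, password: string) {
  const res = await fetch("/api/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password }),
  });
  const contentType = res.headers.get("content-type") ?? "";
  if (!res.ok || !contentType.includes("application/json")) {
    // A crashed backend or a 502 from the gateway arrives as an HTML error page, not JSON
    throw new Error(`Server returned ${res.status} with a non-JSON body`);
  }
  return res.json(); // safe to parse now
}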
> theres tacosnap and taconsnap
ya I know, the latest one I used is tacosnap12, as I finally put my silly world up. no clue how to add tags lol
you prob forgot the password, want me to delete the account?
> Likely the server crashed and is serving an error page (HTML) instead of returning a proper JSON response.
there was a power outage, ya
Confirmed, server issue:
502 Bad Gateway
Unable to reach the origin service. The service may be down or it may not be responding to traffic from ***************
> Does size have to do with anything? That one is 40GB. I tried running [link], but every iteration took a lot of time and lagged my computer, and it's only 12GB.
> [link] runs smoothly, and it's 7GB. Is that the size on disk or the size it'll occupy in VRAM?
what's your PC specs?
> I have this bug that's recently started happening. It keeps repeating this over and over at some point after maybe 10 passages.
That happens with AI. I guess (only a guess!!!) that this happens when the context memory fills and old messages get cut from memory at some point. In your case the AI got a context that started or ended with <special_38> without any pointer to whose message is whose, so it concluded it had been continuously repeating this special_38 as its own messages and decided to just keep doing what it was doing. OR you simply had the TEMP value too low for the model you're running.
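If it really is the context window overflowing, the usual fix is to drop the oldest whole messages instead of letting the backend slice the history mid-message and leave orphaned special tokens behind. A rough sketch of that idea (not the game's actual code; the message shape and the token estimate are assumptions):

// Hypothetical message shape; the game's real data structure is unknown.
interface ChatMessage { role: "system" | "user" | "assistant"; content: string; }

// Very crude token estimate (roughly 4 characters per token).
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

// Keep the system prompt and drop the oldest turns until everything fits,
// always removing whole messages so no half-cut text or dangling special token remains.
function trimHistory(messages: ChatMessage[], maxTokens: number): ChatMessage[] {
  const system = messages.filter(m => m.role === "system");
  const rest = messages.filter(m => m.role !== "system");
  let total = [...system, ...rest].reduce((sum, m) => sum + estimateTokens(m.content), 0);
  while (rest.length > 1 && total > maxTokens) {
    total -= estimateTokens(rest[0].content);
    rest.shift();
  }
  return [...system, ...rest];
}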
Are there versions of the Angel Slayer that aren't max RPG? Having the AI decide player actions is pretty annoying, but I really liked the detail and how fast the response time was. Everything I searched was only showing the RPGMax versions though.
For some reason AngelSlayer has tremendous issues when generating choices, sometimes continuously generating text in the choice boxes, and after generating a huge wall of text the choices reduce to just 6 boxes that make little sense given the instructions. Other times it generates all sorts of extra weird words like |lim_"something something or other"|.
And yeah, of course size matters. Just look at those 8B-12B-22B in model names, or in the 'model size' part on download sources. For example, my PC has 8GB of VRAM and 16GB of RAM and can run an 8B model fairly smoothly (not GB but B, as in 8 billion parameters), and with a little more effort a 12B GGUF model; at 14B it starts to generate too slowly to be enjoyable, and larger models make my PC really struggle, generating something like 1 symbol per 5-10 seconds, which is VERY slow. GGUF models can run not only from your VRAM but also offload the excess onto your processor and system RAM, so be careful not to cook your PC when you try to run bigger models. If your PC is capable of smoothly running a 12B model (the Mistral Nemo Instruct from your link IS a 12B model), then if you really want some real horny adventures I can recommend [link] model.
ALSO pay attention to model names. If they contain 'instruct', those models were trained to follow exact instructions, which often goes along with... boring roleplay. 'RP' in a model's name means it is better suited for roleplay. Besides RP and INSTRUCT there are some more keywords in model names (like 'abliterated' or 'distilled'; never download those, really poor quality for roleplay).
Aside from the number of parameters, GGUF models come in QUANTISATIONS: Q_3, Q_4, Q_5, Q_5_S, Q_5_M, etc. I myself prefer to run 12B at Q_5_M (or higher if the model is smaller), since with lower quants models lose more data and have less quality.
Oh, and never forget that context length also eats resources, so try not to max it out when you launch the model; better to set it to a number of tokens more suited to your PC.
AND YES, when you run a model it fully loads into memory!!! So if your PC isn't equipped with a 40GB-VRAM video card, or the sum of your VRAM + RAM isn't more than 40GB, then better not to try running such big models.
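To put rough numbers on that: weight memory is about parameters x bits-per-weight / 8, plus a couple of GB for the KV cache and runtime overhead. A back-of-the-envelope sketch (the bits-per-weight values and the overhead constant are approximations, not exact figures for any specific model):

// Approximate bits per weight for common GGUF quant levels.
const QUANT_BITS = { Q4_K_M: 4.8, Q5_K_M: 5.7, Q8_0: 8.5, F16: 16 };

function estimateGiB(paramsBillion: number, quant: keyof typeof QUANT_BITS, overheadGiB = 2): number {
  const weightGiB = (paramsBillion * 1e9 * QUANT_BITS[quant]) / 8 / 1024 ** 3;
  return weightGiB + overheadGiB; // the KV cache part grows with context length
}

// A 12B model at Q5_K_M needs roughly 8 GiB for the weights alone (about 10 GiB with cache),
// so on an 8GB card part of it ends up offloaded to system RAM.
console.log(estimateGiB(12, "Q5_K_M").toFixed(1), "GiB");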
> For some reason AngelSlayer has tremendous issues when generating choices, sometimes continuously generating text in the choice boxes, and after generating a huge wall of text the choices reduce to just 6 boxes that make little sense given the instructions. Other times it generates all sorts of extra weird words like |lim_"something something or other"|.
Using it in SillyTavern I never had such issues with AngelSlayer. But I can guess what is going on! Different models need different settings! And I'm talking not only about those shady TOP-K/TOP-P settings, nor even about TEMP! Those "<|im_end|><|im_start|>" markers tell us what the issue is! In SillyTavern you can change context templates, and if the option is turned on it automatically switches to the one the model needs. In this game we don't have that function. Here is how it should look:
Edit: kept testing it for a bit and noticed it is very inconsistent with its generation speeds, and "<|im_end|><|im_end|><|im_end|>" keeps appearing in the game text and choices.
Edit 2: Yeah, no. It either takes a while to generate 2-3 paragraphs or instantly starts rolling out a huge wall of text with <|im_end|><|im_start|>user or <|im_start|>assistant sprinkled in, and it continues being very inconsistent in its generation. It's a shame, since the text itself looks fine at first glance. I'm guessing this model requires certain commands to be set in its own parameter language.
Tested with 1.11.1 and it kinda sorta maybe worked 25% of the time, but 1.20 was a flustercuck.
Honestly, I've found that only mistral nemo 2407 doesn't go completely ballistic with generation.
Thanks for the explanation. I should probably try using SillyTavern as well and see how that pans out.
> Using it in SillyTavern I never had such issues with AngelSlayer. But I can guess what is going on! Different models need different settings! And I'm talking not only about those shady TOP-K/TOP-P settings, nor even about TEMP! Those "<|im_end|><|im_start|>" markers tell us what the issue is! In SillyTavern you can change context templates, and if the option is turned on it automatically switches to the one the model needs. In this game we don't have that function. Here is how it should look:
<|im_start|>system
{{#if system}}{{system}}
{{/if}}{{#if wiBefore}}{{wiBefore}}
{{/if}}{{#if description}}{{description}}
{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}
{{/if}}{{#if scenario}}Scenario: {{scenario}}
{{/if}}{{#if wiAfter}}{{wiAfter}}
{{/if}}{{#if persona}}{{persona}}
{{/if}}{{trim}}<|im_end|>
For Mistral it slightly differs:
[SYSTEM_PROMPT] {{#if system}}{{system}}
{{/if}}{{#if wiBefore}}{{wiBefore}}
{{/if}}{{#if description}}{{description}}
{{/if}}{{#if personality}}{{personality}}
{{/if}}{{#if scenario}}{{scenario}}
{{/if}}{{#if wiAfter}}{{wiAfter}}
{{/if}}{{#if persona}}{{persona}}
{{/if}}{{trim}}[/SYSTEM_PROMPT]
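For reference, when the ChatML template is applied correctly the model only ever sees complete, paired markers, and a matching template is also why those tokens never show up in the visible output. A tiny illustration of what the assembled prompt string is supposed to look like (generic ChatML, not this game's exact wiring):

interface Turn { role: "system" | "user" | "assistant"; content: string; }

// Wrap each turn in ChatML markers; the trailing "<|im_start|>assistant\n"
// marks where the model's own reply is supposed to begin.
function toChatML(turns: Turn[]): string {
  const body = turns.map(t => `<|im_start|>${t.role}\n${t.content}<|im_end|>`).join("\n");
  return body + "\n<|im_start|>assistant\n";
}

console.log(toChatML([
  { role: "system", content: "You are the narrator of a dark fantasy adventure." },
  { role: "user", content: "I open the chapel door." },
]));
// A model fed with a mismatched template treats these markers as ordinary text
// and happily generates them inside the story and the choice boxes.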
And all models with an equal number of parameters generate responses at the same pace, sometimes a LITTLE slower or faster. It's just that, for example, if the AI uses those macros, like starting a paragraph with the name of {{char}}, the name gets generated first until it's fully finished, then it disappears! It's not a model issue; it's like typing certain symbols where you have to enter a few letters before whatever word processor you use recognises them and replaces them with the corresponding symbol. The slowdown is purely visual: if you look at the console of the running AI you will see that tokens are generated at the same pace.
As a summary I can say: the fact that you see those symbols simply means the game, it seems, isn't set up to play with the CHAT_ML template. That's disappointing, because instruct models ARE lacking in diversity of possible responses due to slop tokens (tokens the AI tends to use more often than others). Instruct models are built to complete tasks as precisely as possible. I really like the AngelSlayer model (more correctly, I like the combination of models used); it gives a very good experience and responds as if it actually manages to analyse the situation, instead of simply continuing already existing text.
> can someone give me a hand with dynamic stat code? I have a fertility stat that is supposed to be the base fertility value plus 25% of health. It is set fine at the start of the game, but after every prompt the calculation increases it again, topping out the stat within two prompts even though health is going down instead of up.
> [spoiler: code hidden]
> I have no idea if the problem is in the code, the AI, or the game doing something weird.
This has to do with how you've entered the code.
Let's look at an example.
[spoiler: example hidden]
As you can see, when doing the calculations fertility initially increases due to health, but as fertility increases it causes fertility to increase again in the next round of calculations, repeating in a cycle until fertility is maxed out.
I propose the following solutions to you:
Or [spoiler: code hidden]
I sincerely believe that what you are looking for is the second option, and for that you should not take the current fertility value but the base fertility value that you define yourself. [spoiler: code hidden]
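The actual dynamic-stat code is hidden in the spoilers above and I don't know the game's exact syntax, so here is the same idea in plain TypeScript just to show the difference between the two update rules; the variable names and numbers are made up:

let health = 80;
const baseFertility = 20;                     // the fixed value you define yourself
let fertility = baseFertility + 0.25 * health;

// Buggy update: feeds the current fertility back into itself,
// so it compounds every prompt even while health is dropping.
function updateBuggy() {
  fertility = fertility + 0.25 * health;
}

// Stable update: always recompute from the fixed base,
// so fertility follows health instead of running away to the cap.
function updateStable() {
  fertility = baseFertility + 0.25 * health;
}

for (let prompt = 1; prompt <= 3; prompt++) {
  health -= 10;
  updateStable();                             // swap in updateBuggy() to watch it explode
  console.log(`prompt ${prompt}: health=${health} fertility=${fertility}`);
}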
> thanks, would the second option work with a trait that changes fertility? One of the possible traits I made is 'fertile', which doubles the initial base value of the stat.
I honestly don't know if traits are supported, but it definitely has to be something like this.
no, it does not work with traits. I suspect it can be handled with an if/else in the code, but I have no idea how to write that; I can somewhat read code but I have no idea how to write it.
I have added this to the code, but as I said I have no idea how to write it; I suspect the trait part is wrongly written, but I'm not sure. [spoiler: code hidden]
> I honestly don't know if traits are supported, but it definitely has to be something like this.
> [spoiler: code hidden]
nope, when adding that to the code it breaks and falls back to the default setting, so either there is no way to link dynamic code to traits yet, or something is missing in that part of the code, so I'll wait for fierylion to confirm whether traits can be linked.
Honestly, it's possible that we don't even have access to the traits in the dynamic code, and if we do, I have no idea how the trait object's properties are structured.
Probably the only one who can tell you for sure is fierylion.
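For what it's worth, here is the if/else shape being described, purely as an illustration: whether the game exposes traits to dynamic code at all is exactly what's unconfirmed above, and the traits array below is a made-up structure, not the real one.

// Hypothetical: assume traits were exposed as a simple list of names.
const traits: string[] = ["fertile"];
const health = 80;

// Double the base only when the 'fertile' trait is present,
// then apply the same fixed-base formula as before.
const baseFertility = traits.includes("fertile") ? 2 * 20 : 20;
const fertility = baseFertility + 0.25 * health;

console.log(fertility); // 60 with the trait, 40 without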