[AI] Uncensored text generation via Oobabooga

Deleted member 1121028

Well-Known Member
Dec 28, 2018
1,716
3,292
So lewd allowed.

Recommended :
- Install where you have some free disk space, because the models are quite huge.
- An Nvidia card with 8 GB of VRAM.
- This to keep things fresh.


and download/unzip the one-click installer.
Run start_windows.bat.
Select your GPU type; when asked to download a model, select "Do not download a model".
Wait for it to finish completely (this can take some time).
Once fully installed you should see this line:

[screenshot]

Open your browser, type the URL and bookmark it.
(If you struggle with installation )

Model VRAM usage:
The bigger the better, but choose a model that suits your card.
For example, if you have an 8 GB VRAM card, choose a 7-billion-parameter model (7B).
(Watch to reduce VRAM usage at the cost of the time it takes to generate text.)
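As a rough rule of thumb (my own back-of-the-envelope estimate, not an official formula): a quantized model's weights take about parameter-count × bits / 8 bytes, plus some fixed overhead for the CUDA context, activations and KV cache. That's why a 7B model at 4 bits fits an 8 GB card:

```python
def gptq_vram_estimate_gb(params_billion, bits=4, overhead_gb=1.5):
    """Rough VRAM estimate for a quantized model: weights + fixed overhead.

    The overhead figure is a guess covering the CUDA context, activations
    and the KV cache at modest context lengths.
    """
    weights_gb = params_billion * bits / 8  # 1e9 params * (bits/8) bytes ~= GB
    return weights_gb + overhead_gb

# A 7B model at 4 bits: ~3.5 GB of weights, ~5 GB total -> fits an 8 GB card.
print(round(gptq_vram_estimate_gb(7), 1))   # 5.0
# A 30B model at 4 bits: ~15 GB of weights -> you want a 24 GB card.
print(round(gptq_vram_estimate_gb(30), 1))  # 16.5
```

Treat the numbers as ballpark only; real usage varies with context length and loader.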

[screenshot]

Installing a model:
In the Model tab, paste the model name from its Hugging Face page and click Download.
Once downloaded, refresh the models list and select it.
A model can take a long time to load (a 30B one takes around 10 min on a 3090).

[screenshot]

In the same tab, set the wbits & groupsize values to match the model you use:
EDIT: If your model & Oobabooga/text-generation-webui are up to date and the model folder contains a quantize_config.json file, you may skip this.
[screenshot]
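For reference, a minimal sketch of what such a quantize_config.json tends to contain (key names as seen on typical GPTQ uploads; treat the exact set of fields as an assumption and check your model's repo):

```python
import json

# Illustrative quantize_config.json for a "4 wbits, 128 groupsize" model.
# A groupsize of "None" is usually written as -1 in this file.
quantize_config = {
    "bits": 4,          # wbits
    "group_size": 128,  # groupsize
    "desc_act": False,  # act-order; varies per upload
}
print(json.dumps(quantize_config, indent=2))
```

When this file is present and up to date, the webui reads these values instead of the manual wbits/groupsize fields.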

Uncensored models (4bits quantized version) :

Vicuna
(4 wbits, 128 groupsize)
(4 wbits, 128 groupsize)
(4 wbits, None groupsize)

WizardLM
(4 wbits, 128 groupsize)
(4 wbits, None groupsize)

SuperCOT/StoryTelling
(4 wbits, None groupsize)

Pygmalion (you need to )
(4 wbits, 128 groupsize)
(4 wbits, 128 groupsize)
 
Last edited:

Saki_Sliz

Well-Known Member
May 3, 2018
1,403
995
This is awesome! def going to have to explore this now that it looks like a proper community is forming.
 

Deleted member 1121028

Well-Known Member
Dec 28, 2018
1,716
3,292
Something like one day after I posted, Oobabooga/text-generation-webui got updated. If the model folder contains a quantize_config.json file, you shouldn't have to set the GPTQ (wbits/groupsize) parameters anymore (it's automatic).

This is awesome! def going to have to explore this now that it looks like a proper community is forming.
For the moment I'm having quite good results from my few hours of testing (not that much sadly, hope I have more time this weekend). I think a huge part of the community is trying to make kinky sex bots, which I'm not really interested in (no shame tho). I think of using it more in a typical VN context, as it should provide some non-negligible help for non-native English speakers.

I was thinking of making a small Renpy prototype, say around 500 renders, with at least ~90% of the text written or recontextualized via AI, and posting the results here, so people have an insight into what to expect. If I find some time tho lol.
 
Last edited:

Saki_Sliz

Well-Known Member
May 3, 2018
1,403
995
Yeah, I watched more videos, many want rp bots... but I want a sort of respectable robot butler character :D

I've been falling behind on AI research. I can see that many of the LLMs are able to use different styles of text to describe actions, and I wonder if I could get the AI to use API code so that it could control characters in a game. I know there was a research paper or article about AI characters setting up a Valentine's party, but that was a rather large LLM that took a lot of time to run, and a lot of the other RP AI assets I see on Unity take some time to process. I love learning more about the smaller 6B and 7B parameter models so that they can run on more common hardware and quicker.

As for the issue with token context, I think that's an architectural misdesign that comes from the early LLM designs. Once the AI can be given a method to export text, or simplify it, and then save it to the hard drive, the token limit won't be an issue anymore... but I don't know how many people realize that workaround is needed yet. My own AI was originally going to do something similar, but I wasn't working with LLMs, and maybe figuring out a simpler LLM, at least for contextual comprehension, would really help my own AI projects.
 

Deleted member 1121028

Well-Known Member
Dec 28, 2018
1,716
3,292
To be real, there's nothing wrong with falling behind on AI research, there is a new paper almost every week, if not every day. Sometimes I can "get" them, sometimes they are way out of my league. I would add, the hysteria around it doesn't help, either doomer or sycophant. Fuck that parasitic noise tbh.

I think the problem with RP bots is the consensual approach (as in "reach the fastest consensus within context"), lacking real negative weights (even using various tricks/formatting) and mostly the limited token context (there is a censored 4-bit model with 65k+ token context o_O, almost a small novel). But I may not have tested enough, training your own may significantly change the output (we need a candidate to scrape and parse the text of the 10,000 VN porn games of this site).

I think 4-bit quantization is here to stay for general use (of course, barring a major breakthrough). I've seen Japanese devs plugging AI into RPGM via the API (it's gibberish but rather interesting). I'm not really interested in this but you should look at how or manage the API.
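For anyone curious about that API route: the webui can be launched with the --api flag, which (in the builds I've used) exposes a blocking endpoint on port 5000. A minimal sketch, where the endpoint path, parameter names, and response shape are all assumptions to verify against your version:

```python
import json
import urllib.request

# Assumed endpoint: the webui's API extension, started with the --api flag.
API_URL = "http://127.0.0.1:5000/api/v1/generate"

def build_payload(prompt, max_new_tokens=200, temperature=0.7):
    """Assemble a generation request; parameter names follow the API extension."""
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
        "do_sample": True,
    }

def generate(prompt):
    """POST the prompt to a locally running webui and return the generated text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

# Building the payload needs no server; calling generate() does.
print(build_payload("The shopkeeper greets the hero:"))
```

From a game engine you'd call something like generate() per line of dialogue, which is presumably what those RPGM experiments do.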

I can't verify right now but Oobabooga/text-generation-webui saves all logs in JSON format, check your folders. It's a bit off-topic (not that much) but you should definitely test it with Blender if you have time (it blew my mind lol).
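On the logs point: I can't verify the folder layout either, but if they're plain JSON, a quick sketch for inspecting whatever files an install left behind could look like this (the logs path and file structure are assumptions):

```python
import json
from pathlib import Path

def load_logs(log_dir):
    """Load every JSON file in a folder into a dict keyed by filename."""
    logs = {}
    for log_file in sorted(Path(log_dir).glob("*.json")):
        with open(log_file, encoding="utf-8") as f:
            logs[log_file.name] = json.load(f)
    return logs

# Adjust to wherever your install actually keeps its logs.
print(load_logs("text-generation-webui/logs"))
```

Handy if you want to pull past generations back into a VN script instead of copy-pasting from the browser.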
 

Saki_Sliz

Well-Known Member
May 3, 2018
1,403
995
I've been using PygmalionAI. Very good at keeping things going and generating ideas... not so good at staying consistent or sane XD but it's fun to play with.
 

Furry_Desirer

Newbie
Game Developer
Jun 30, 2018
30
56
Well... I'm done downloading everything with the one-click installer, but when I started it again it said "Conda environment is empty". How do I fix that?
 

Deleted member 1121028

Well-Known Member
Dec 28, 2018
1,716
3,292
So here's my testing after a week, it's a bit of a ramble but I'm dead tired.
Daz/Renpy usual storytelling setup, around ~150 renders (4 different scenes with different chars).

Models I tested (no LoRAs) :

[screenshot]

First, it's a bit chaotic to find a model that suits you.
Models are often forked, and model cards (descriptions) are not updated or are hidden somewhere else.
Sometimes you don't know which LLM was trained with which other LLMs, at what ratio, and whatnot.
There aren't many metrics on global performance (it's annoying).
For example you could see a 13B model performing slightly better than a 30B one (so don't stress if the model you use is smaller):

[screenshot]

Long story short:
The best model I could find for lewd visual novels was (mostly using instruct mode).
It should fit an 8 GB VRAM card. Give it as much context as you can (!). I use NovelAI-Storywriter as the generation parameters preset. The model is very good if you're like me, sometimes lagging behind your renders.

The +:
Above everything else I tested.
Long descriptions despite sometimes low context.
Very good for enriching the text in one click.
I learned more vocabulary in a week than by casually browsing the internet.

The -:
Dialogue, generally.
Even for chit-chat, I feel it's bad, or requires more time to adjust the context than just writing the dialogue itself (sadge).


I've been using PygmalionAI. Very good at keeping things going and generating ideas... not so good at staying consistent or sane XD but it's fun to play with.
Did you test it with TavernAI? I think that's where that model shines o/

Well... I'm done downloading everything with the one-click installer, but when I started it again it said "Conda environment is empty". How do I fix that?
Something obviously went wrong. Not sure I can help you.
Delete everything. As a matter of reducing problems (assuming you're using Win10/11):

Use the simplest path possible (D:\TextGenAi\Oobabooga, for example),
no special chars, no spaces, no shenanigans. And run the installer as admin.
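That path advice can be checked mechanically. A tiny sketch (my own heuristic, not an official installer requirement) that flags Windows paths likely to trip conda-based installers:

```python
import re

def is_safe_install_path(path):
    """True if the path is a drive-rooted Windows path made only of ASCII
    letters, digits, underscores, hyphens and backslashes (no spaces,
    no accents, no shenanigans)."""
    return bool(re.fullmatch(r"[A-Za-z]:\\[A-Za-z0-9_\\-]+", path))

print(is_safe_install_path(r"D:\TextGenAi\Oobabooga"))     # True
print(is_safe_install_path(r"C:\Program Files\Text Gen"))  # False (spaces)
```

Spaces and non-ASCII characters are the usual culprits when an environment comes up empty.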
 
Last edited:

Deleted member 1121028

Well-Known Member
Dec 28, 2018
1,716
3,292
Addendum:

If you want to run (that should generate code, I won't test it lmao, but the model knows Renpy and is focused on Python), comment out AutoGPTQ_loader.py like this (as a quick fix):

[screenshot]
 
Last edited:

Furry_Desirer

Newbie
Game Developer
Jun 30, 2018
30
56
Something obviously went wrong. Not sure I can help you.
Delete everything. As a matter of reducing problems (assuming you're using Win10/11):

Use the simplest path possible (D:\TextGenAi\Oobabooga, for example),
no special chars, no spaces, no shenanigans. And run the installer as admin.


It still doesn't work but thanks anyway. I don't know if someone installed it successfully and uploaded the whole folder somewhere.
 

Deleted member 1121028

Well-Known Member
Dec 28, 2018
1,716
3,292
So I finally found closure on that. Finished a small prototype but it's too trashy to share (even sticking to 1080 and speedrunning the whole thing on a 3090, I still really lacked the time to make something decent). Here's a bit of stupid rambling about text generation with open source AI & VNs:

Went further trying to exploit an 8k-token model context ( , more context should provide cleaner results) but overall results were worse (!). I feel people are pushing model numbers on datasets without any kind of real user usage in mind. The jack-of-all-trades AI that knows everything is clearly detrimental to the whole (but gives spicy numbers).

As with image generation, there is not much intelligence in artificial intelligence; it's fully mimicking the grammatical structure of what it's been trained on. Anthropomorphic feelings will fill the void o/. It was expected somehow, but I needed to see it lmao. (in before sentient AI-AGI takes over the world :illuminati:)
 

Deleted member 2282952

Developing I SCREAM
Game Developer
May 1, 2020
416
868
I have also extensively tested it, 24 GB VRAM is apparently not enough, and with a cap that prevents fallback, the processing time is just impossibly long.

Personality prompting is kinda good, but can't really tell because there is a significant limit on the input size it can process.

There is potential, but unusable right now.

EDIT: the 30B model, didn't see your 13b message - will check it out
 
Last edited:

Deleted member 2282952

Developing I SCREAM
Game Developer
May 1, 2020
416
868
Sooooo, after spending more time on it, this is the quickest and most functional model for me:



The 30B+ models are just nuts. Take way too long, even on powerful systems.

But if you wanna go wild and get 1 response every hour unless you run a crypto mining farm:

 

Deleted member 1121028

Well-Known Member
Dec 28, 2018
1,716
3,292
I have also extensively tested it, 24 GB VRAM is apparently not enough, and with a cap that prevents fallback, the processing time is just impossibly long.

Personality prompting is kinda good, but can't really tell because there is a significant limit on the input size it can process.

There is potential, but unusable right now.

EDIT: the 30B model, didn't see your 13b message - will check it out
Yeah, to use a 30/33B-8K model on 24 GB of VRAM, you need to limit it to 4k context (max_seq_len to 4096 instead of 8192, and compress_pos_emb to 2 instead of 4). I found ExLlama to be magnitudes faster than AutoGPTQ. If you get 1-hour responses something is going wrong (even on a 30/33B model). Do you see your CUDA cores being used?
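As I understand it, compress_pos_emb is just the target context divided by the LLaMA family's native 2048-token context, which is why 4096 pairs with 2 and 8192 with 4. A quick sanity check:

```python
NATIVE_CTX = 2048  # native context length of the LLaMA-family base models

def compress_pos_emb(max_seq_len, native_ctx=NATIVE_CTX):
    """Positional-embedding compression factor for a given target context."""
    assert max_seq_len % native_ctx == 0, "use a multiple of the native context"
    return max_seq_len // native_ctx

print(compress_pos_emb(8192))  # 4 -> the full 8k context
print(compress_pos_emb(4096))  # 2 -> the 24 GB VRAM setting
```

So if you shrink max_seq_len to fit your card, scale compress_pos_emb down with it.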

I do like the Chronos-Hermes model 'cause it has been trained to make long descriptive sentences, not so bad even in low-medium context (that's what I was looking for). I use it in instruct mode, so not "chat" so to say.

I found adding too much context to be counterproductive (at least right now). Higher hallucination rates, grammatical structure crumbling... It's even weirder when you reach the full 8k-token context, as it seems to bleed almost raw output from the dataset (?). I guess it's still highly experimental but I was curious.

I think right now there are two major problems (I may be completely wrong, it's just rambling). LLMs merged on another LLM merged on another LLM, and so on... to make jack-of-all-trades models instead of focusing on one task. And the poor quality of datasets (the main problem imho), as most open source models are trained on publicly available ones.

There's also a feeling of pushing bigger and bigger numbers (like scaling is all the rage, Amazon and Nvidia must be smiling lol) - that looks good on paper and maybe attracts funding that way - but with no other purpose than being bigger.

I guess it's too early to say anything and it definitely has potential - just a bit bummed that there is very little intelligence in the artificial intelligence, at least right now. I wanted SkyNet, I got Google advanced search :KEK:
 

Deleted member 2282952

Developing I SCREAM
Game Developer
May 1, 2020
416
868
Alrighty, after playing around with settings and models, the 13B Vicuna model is amazing, but SLIGHTLY bad when it comes to writing dialogue, because it will just randomly ignore very specific context and go do its own thing.

This has been my favorite story it generated. It can write about anything, no matter how messed up the request, so fun (I will delete this if this stuff is not allowed):

[screenshot]

I do like the Chronos-Hermes model 'cause it has been trained to make long descriptive sentences, not so bad even in low-medium context (that's what I was looking for). I use it in instruct mode, so not "chat" so to say.
Too moralistic for my tastes.

I found adding too much context to be counterproductive (at least right now). Higher hallucination rates, grammatical structure crumbling... It's even weirder when you reach the full 8k-token context, as it seems to bleed almost raw output from the dataset (?). I guess it's still highly experimental but I was curious.
Unfortunately, yes. I learned that ChatGPT is pretty good at helping you prompt the Vicuna model better in CompSci terms (a bunch of requests as IF statements).

I think right now there are two major problems (I may be completely wrong, it's just rambling). LLMs merged on another LLM merged on another LLM, and so on... to make jack-of-all-trades models instead of focusing on one task. And the poor quality of datasets (the main problem imho), as most open source models are trained on publicly available ones.
There is so much info that I am not even bothering with how they are trained, since in my understanding it is a pretty boring exercise of labeling - boooring.

There's also a feeling of pushing bigger and bigger numbers (like scaling is all the rage, Amazon and Nvidia must be smiling lol) - that looks good on paper and maybe attracts funding that way - but with no other purpose than being bigger.
It's like men with small dicks buying huge cars: you spend more money, but the part you care about still doesn't work... unless you can invest 10k in an enlargement surgery (very complicated metaphor).

Though, if I were loaded, I would get 3 RTX 4090s in SLI and be the king of the universe.

I guess it's too early to say anything and it definitely has potential - just a bit bummed that there is very little intelligence in the artificial intelligence, at least right now. I wanted SkyNet, I got Google advanced search :KEK:
Better contextualization and optimization - I think the mainstream future of AI is generalized, moderated web services on one side, and open source local applications that are completely unrestricted BUT limited by hardware on the other.

I think 'intelligence' is the wrong word in AI - the current iteration is the ability to perform a singular task with an average result better than a human's, or at least to decrease the total time required to complete the task with less effort and fewer requirements for original skill. I kinda phrased it in a complicated way, but this is the way I see it.