Unity GPTits [v0.4.10] [MultisekaiStudio]

Sep 10, 2017
Just checked Moemate out; I think this site's lack of appreciation for Moemate is fair enough, considering how poor the performance is.
Yeah, to get a more decent AI you usually have to pay. My 4090 struggles with a 32B model, and 24B models still make my card scream, with slow responses. Setting up oobabooga wasn't simple either. A 16B model is the bare minimum for okay-ish chats. The issue is that memory can still be bad on some models; it all depends. A lot also depends on how well the prompt and character card are set up.
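For anyone wondering what running one of these models locally involves at the lowest level, here's a minimal llama-cpp-python sketch. The model file name, prompt format, and parameters are placeholders, not recommendations:

```python
# Minimal local-inference sketch using llama-cpp-python.
# Assumes a quantized GGUF model file downloaded beforehand;
# the path and prompt format below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mythomax-13b.Q4_K_M.gguf",  # hypothetical file name
    n_gpu_layers=-1,  # offload every layer to the GPU; lower this if VRAM runs out
    n_ctx=4096,       # context window in tokens
)

out = llm(
    "### Instruction:\nStay in character as Mia.\n### Response:\n",
    max_tokens=200,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```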
 

pichkas

Newbie
May 13, 2021
Yeah, to get a more decent AI you usually have to pay. My 4090 struggles with a 32B model, and 24B models still make my card scream, with slow responses. Setting up oobabooga wasn't simple either. A 16B model is the bare minimum for okay-ish chats. The issue is that memory can still be bad on some models; it all depends. A lot also depends on how well the prompt and character card are set up.

What are you using to load the 32B model? Like llama.cpp or ExLlama, or, god forbid, raw Transformers?
 

Caulino

Member
Game Developer
Jan 4, 2018
Sad to see that the Mia and Lisa "character" was removed since the last time I played. They were my favorite ones to chat with.
Curious as to why this was done?
(Unless I'm just completely missing something)
OpenAI was able to keep the two of them separate very well. Now that OpenAI has cracked down, it's no longer possible to use it fully for sexual content.

Thanks, it works OK. But how do I use my LoRAs? Which model? How do I prompt it?
Click on the photo icon below a message to generate a contextual image, or create personalized buttons; the character Mia has some examples. You can also improve the quality in the settings.
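On the LoRA question: assuming the image backend is an AUTOMATIC1111-style Stable Diffusion WebUI (the game only exposes an API URL, so this is an assumption), LoRAs are normally invoked with an inline tag in the prompt itself:

```python
# Hypothetical prompt assembly for an A1111-style Stable Diffusion backend.
# "<lora:NAME:WEIGHT>" is the WebUI's inline syntax for applying a LoRA;
# "myStyleLora" is a placeholder for a file in the WebUI's models/Lora folder.
prompt = (
    "masterpiece, 1girl, cafe, smiling, "
    "<lora:myStyleLora:0.7>"  # weight roughly 0.0-1.0; higher = stronger effect
)
```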
 

thebot

New Member
Nov 14, 2020
First time playing the game in about 2-3 months, and when I try to switch characters everything just appears blank: no description, name, or images. I want to know if I'm doing anything wrong. I go to the character menu, select |Open|, then click on one of the files in the characters section, but nothing changes and all the sections stay blank.
 

Ilaem

Member
Dec 27, 2017
Honestly, it seems much more complicated to me than the previous version...
even creating new characters is more difficult...
Loading is slow; when you open a character it takes many seconds, or sometimes it just stays empty.
Is it possible to add a slot for your own character's description, separate from the one for the character the AI has to play?
And where is the slot for deciding what format the response should be in?
 

RalfHamster

New Member
Sep 1, 2018
I don't know if anyone has already written this,
but wouldn't it make more sense to use the NovelAI API? It contains everything that GPTits needs: image generation and text generation in one.
I wouldn't want to use ChatGPT either, because using it for NSFW content can lead to an account block under certain circumstances.
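For what it's worth, here is a rough sketch of what calling NovelAI's text endpoint could look like from Python. The endpoint, model name, and field names are assumptions based on the publicly observed API, not official documentation; check NovelAI's current docs before relying on any of them.

```python
# Rough sketch of calling NovelAI's text endpoint with the requests library.
# Endpoint, model name, and field names are assumptions based on the publicly
# observed API; verify against NovelAI's current documentation.
import requests

API_KEY = "..."  # your NovelAI access token (placeholder)

resp = requests.post(
    "https://api.novelai.net/ai/generate",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "input": "You are Mia, chatting with the player...",
        "model": "kayra-v1",  # assumed text model name
        "parameters": {"max_length": 150, "temperature": 0.9},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["output"])  # assumed response field
```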
 

Melapela34

Newbie
Oct 1, 2019
I think the installation guide is missing a lot. Having only downloaded the files on this page and the recommended model, I don't see where to look for the "OFFLINE AI" folder or the "GGML.exe". Please, is there any complete guide on how to use this?
 
Sep 10, 2017
What are you using to load the 32B model? Like llama.cpp or ExLlama, or, god forbid, raw Transformers?
I used the oobabooga WebUI to load them, then hooked that into SillyTavern at the time. I mostly used LLaMA-type models, plus a few others. It was a few months ago when I dabbled in it; I gave up after seeing how slow the responses were for anything past a 16B model. All the models I used were uncensored, and some were trained specifically for RP, so they gave pretty strong responses the few times I'd wait for them. Sadly it'd brick my video card for doing much else. For a 30B+ model you need 32GB of VRAM to get decent responses; 24GB is still enough to get output, though. To run a 32B model you could probably build a $6K rig with three AMD 7900 XTXs for a huge 72GB of VRAM and have a strong AI machine. Heck, you could maybe pull off a 70B model without too much trouble. When you get into 172B models you'll need a huge bank of 4090s/7900s, like a mining rig, to push out responses with speed and accuracy. But by then, just rent that thing out to people and get paid for it. I don't know many people who have $40-50K to drop on a server-grade AI machine.
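A rough rule of thumb behind those VRAM figures: weights take about 2 bytes per parameter at 16-bit precision, and roughly 0.5 bytes at 4-bit quantization, plus overhead for the context cache and activations. A quick sketch (the 20% overhead factor is a loose assumption):

```python
# Back-of-the-envelope VRAM estimate for an LLM's weights.
# The overhead factor for KV cache and activations is a loose assumption;
# real usage varies with context length, loader, and quantization format.
def vram_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for size in (13, 32, 70):
    print(f"{size}B @ 4-bit: ~{vram_gb(size, 4):.0f} GB, "
          f"@ 16-bit: ~{vram_gb(size, 16):.0f} GB")
```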
 

amir_2222

Newbie
May 14, 2022
Hello guys, I'm new to these things and I have the Stable Diffusion program. I wanted to see if you could explain to me how to use GPTits from the beginning.
Sorry for my Google Translate.
 

WtfwinPC

Member
Aug 2, 2017
How come there isn't an "OFFLINE AI" folder?

Also, it doesn't work with oobabooga ("unable to connect to API"), though SillyTavern connects fine.
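If anyone else hits that error, a quick way to check whether oobabooga's API is actually reachable is to poke it directly. This assumes a recent text-generation-webui started with the --api flag; the default port and path have moved around between versions, so adjust as needed.

```python
# Quick connectivity check against oobabooga's OpenAI-compatible API.
# Assumes a recent text-generation-webui started with the --api flag;
# the port and path differ between versions, so adjust if needed.
import requests

BASE = "http://127.0.0.1:5000/v1"  # default local API address (assumption)

resp = requests.post(
    f"{BASE}/completions",
    json={"prompt": "Hello", "max_tokens": 16},
    timeout=30,
)
print(resp.status_code, resp.json())
```

If a raw request like this succeeds while GPTits still can't connect, the URL configured in the game (including any /v1 suffix) is the first thing to compare against what SillyTavern uses.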
 

Valamyr

Member
Oct 7, 2020
Sad that we can't talk to the AI unrestricted anymore. If anyone is curious, Perchance AI is a pretty good one to try if you're looking for what this game wanted to offer.
It does make pictures and text descriptions, and tolerates some degree of perversity in those descriptions, but ultimately it seems fairly limited. GPTits before the DAN crackdown might not have offered much visually, but it was insane when it came to roleplaying. I tested its boundaries for fun and there were none; no scenario was too depraved for it to play along with. I don't know if that kind of freedom will be allowed again for any length of time going forward; they're all cracking down hard on anything that could make AI look bad.
 

pichkas

Newbie
May 13, 2021
I used the oobabooga WebUI to load them, then hooked that into SillyTavern at the time. I mostly used LLaMA-type models, plus a few others. It was a few months ago when I dabbled in it; I gave up after seeing how slow the responses were for anything past a 16B model. All the models I used were uncensored, and some were trained specifically for RP, so they gave pretty strong responses the few times I'd wait for them. Sadly it'd brick my video card for doing much else. For a 30B+ model you need 32GB of VRAM to get decent responses; 24GB is still enough to get output, though. To run a 32B model you could probably build a $6K rig with three AMD 7900 XTXs for a huge 72GB of VRAM and have a strong AI machine. Heck, you could maybe pull off a 70B model without too much trouble. When you get into 172B models you'll need a huge bank of 4090s/7900s, like a mining rig, to push out responses with speed and accuracy. But by then, just rent that thing out to people and get paid for it. I don't know many people who have $40-50K to drop on a server-grade AI machine.
No... no, that's not true at all. If you've been away for several months then you're far behind on new developments. That, and the fact that you don't even know what I mean when I ask what you used to load the model, with examples, tells me you really don't know as much as you think.

Anybody reading: do not buy AMD for AI inference. Christ. AI right now relies on CUDA, which is Nvidia-only software; getting AI to run on AMD hardware is an exercise in self-flagellation. Secondly, thanks to ExLlamaV2, a method for loading models that ooba supports, a 30B model can easily be run on a 24GB card with a decent context window.
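To put rough numbers on that claim, here is a back-of-the-envelope look at why a 30B model fits on a 24GB card once quantized. The bits-per-weight values are illustrative; EXL2 quants come in fractional sizes.

```python
# Why a 30B model fits on a 24GB card with ExLlamaV2-style quantization.
# Bits-per-weight (bpw) values are illustrative; EXL2 supports fractional bpw.
params = 30e9
for bpw in (4.0, 4.65, 6.0):
    weights_gb = params * bpw / 8 / 1e9
    print(f"{bpw} bpw -> ~{weights_gb:.1f} GB of weights, "
          f"leaving ~{24 - weights_gb:.1f} GB for KV cache and overhead")
```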
 

reiner12a

Newbie
Jul 19, 2018
I'm pretty far from AI things; how can I play this in the easiest way?
I did get this thing working with Horde, but the answers take so long.
 

Mrmyguy

Newbie
Nov 26, 2019
Anyone know how to get dynamic scenario working? I keep getting "unable to connect, please check URL".
 
Sep 10, 2017
No... no, that's not true at all. If you've been away for several months then you're far behind on new developments. That, and the fact that you don't even know what I mean when I ask what you used to load the model, with examples, tells me you really don't know as much as you think.

Anybody reading: do not buy AMD for AI inference. Christ. AI right now relies on CUDA, which is Nvidia-only software; getting AI to run on AMD hardware is an exercise in self-flagellation. Secondly, thanks to ExLlamaV2, a method for loading models that ooba supports, a 30B model can easily be run on a 24GB card with a decent context window.
Fair. I have limited knowledge on the subject, but I learned some things from your post. Thank you. I started a DM conversation with you about this stuff, if you'd be willing to talk a bit more about it; you seem to know a lot more than me, and this topic interests me.
 