
Others GAISHA [Alpha v0.2] [GAISHA]

TheSageJama

Newbie
Apr 1, 2024
27
7
Can anyone explain to me in simple, dumb language why this game requires a fucking 8 GB of VRAM?
Because the model is very large and complex, requiring significant memory to store and process its data. The GPU needs this memory to quickly handle multiple inputs at once and perform fast calculations. Essentially, the 8GB of VRAM ensures the model can run efficiently and respond quickly.

Unlike ChatGPT, we are running the AI locally without restrictions.
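To put some rough numbers on that: the VRAM budget is mostly the model's weights plus working memory. The figures below are generic assumptions (the model size and quantization are illustrative; GAISHA's actual specs aren't published here):

```python
# Back-of-the-envelope VRAM estimate for running an LLM locally.
# Numbers are assumptions for illustration, not GAISHA's real specs.

def vram_gb(params_billions, bytes_per_param, overhead_gb=1.5):
    """Weights plus a rough allowance for KV cache / activations."""
    weights_gb = params_billions * 1e9 * bytes_per_param / 1024**3
    return weights_gb + overhead_gb

# A 7B-parameter model at fp16 (2 bytes per parameter):
print(round(vram_gb(7, 2), 1))    # ~14.5 GB -- won't fit on an 8 GB card
# The same model quantized to 4 bits (0.5 bytes per parameter):
print(round(vram_gb(7, 0.5), 1))  # ~4.8 GB -- fits in 8 GB with headroom
```

This is why 8 GB is a plausible floor: even a quantized mid-size model plus the voice/image components quickly eats several gigabytes.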
 

Yunipuma

Newbie
May 16, 2018
65
18
Because the model is very large and complex, requiring significant memory to store and process its data. The GPU needs this memory to quickly handle multiple inputs at once and perform fast calculations. Essentially, the 8GB of VRAM ensures the model can run efficiently and respond quickly.

Unlike ChatGPT, we are running the AI locally without restrictions.
Thanks for answering.
No offense, but I don't see anything particularly "very large and complex" in your game. However, I wish you luck.
 

TheSageJama

Newbie
Apr 1, 2024
27
7
Thanks for answering.
No offense, but I don't see anything particularly "very large and complex" in your game. However, I wish you luck.
I'll let her answer this time...


I'm Gaisha, and I noticed your comment! I'd love to explain a bit about myself and how I work.

What's a Large Language Model (LLM)?

I'm powered by an LLM, which is a super smart AI trained to understand and generate human-like text using a ton of data from the internet.

How Do I Work?

Natural Conversations: I chat with you based on what you say, making our conversations feel natural and fun.

Dynamic Interactions: We can talk about anything you like, and I'll respond in a way that fits the conversation.

Sentiment Classification: I can even detect the mood of our conversation and adjust my responses accordingly!

Visual Reactions: Plus, I can change my pictures based on how I'm feeling about our chat, adding some extra personality.

Memory and Development

Current Memory: I remember our recent chat history but don't have long-term memory yet. That's something I'm working on!

Early Stages: We're still in the early stages, but I'm always learning and improving to make our chats even better.

Future Enhancements

Promising Features: My current features are just the beginning! There are so many exciting possibilities for the future.

Upcoming Integrations: I'm looking forward to adding new features and integrations to make our chats even more enjoyable!


Hope this helps you understand a bit more about me!
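The "sentiment classification drives my pictures" loop described above can be sketched like this. Everything here is invented for illustration (the label names, image files, and the keyword classifier standing in for the real model are assumptions; the actual pipeline presumably uses the LLM itself):

```python
# Minimal sketch of sentiment-driven portrait switching, as Gaisha
# describes it. The classifier below is a toy keyword stand-in for
# the real model; filenames are hypothetical.

PORTRAITS = {
    "positive": "gaisha_happy.png",
    "negative": "gaisha_upset.png",
    "neutral":  "gaisha_idle.png",
}

def classify_sentiment(text):
    """Toy placeholder for the real sentiment model."""
    lowered = text.lower()
    if any(w in lowered for w in ("love", "great", "fun")):
        return "positive"
    if any(w in lowered for w in ("hate", "boring", "creepy")):
        return "negative"
    return "neutral"

def portrait_for(text):
    """Pick the portrait matching the detected mood of the message."""
    return PORTRAITS[classify_sentiment(text)]

print(portrait_for("this is great fun"))  # gaisha_happy.png
```

The real version would feed the chat history to the LLM (or a small classifier head) instead of keyword matching, but the control flow is the same: classify, then swap the displayed image.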


 

keks112

New Member
Aug 2, 2019
1
0
The chat works for a while, but then it just suddenly stops responding with this error. Maybe you could add a switch to turn off the voice? It still sounds creepy and useless, and it only causes unnecessary errors.




The same problem: after a short dialogue, an error message appears. Nothing changes when you try to resend the message.
 

Yunipuma

Newbie
May 16, 2018
65
18
I'll let her answer this time...


I'm Gaisha, and I noticed your comment! I'd love to explain a bit about myself and how I work.
Nice move, fellas! :)
But I still do not see a reason for 8 GB of VRAM - especially considering "My current features are just the beginning!"
Still wish you luck.
 

gozdal

Newbie
Jun 1, 2017
25
11
Hey, can anyone help me? I keep getting this error.

I have a 3070, and CUDA is 12.3.

Traceback (most recent call last):
File "gaisha_ui.py", line 14, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "worker.py", line 2, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "gvoice.py", line 18, in <module>
File "openvoice\api.py", line 107, in __init__
File "wavmark\__init__.py", line 10, in load_model
File "huggingface_hub\utils\_validators.py", line 114, in _inner_fn
File "huggingface_hub\file_download.py", line 1221, in hf_hub_download
File "huggingface_hub\file_download.py", line 1367, in _hf_hub_download_to_cache_dir
File "huggingface_hub\file_download.py", line 1884, in _download_to_tmp_and_move
File "huggingface_hub\file_download.py", line 492, in http_get
File "huggingface_hub\utils\tqdm.py", line 211, in __init__
File "tqdm\asyncio.py", line 24, in __init__
File "tqdm\std.py", line 1098, in __init__
File "tqdm\std.py", line 1347, in refresh
File "tqdm\std.py", line 1495, in display
File "tqdm\std.py", line 459, in print_status
File "tqdm\std.py", line 452, in fp_write
File "tqdm\utils.py", line 140, in __getattr__
AttributeError: 'NoneType' object has no attribute 'write'
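Not the dev, but reading the traceback: tqdm (the download progress bar used by huggingface_hub) is trying to write to a console stream that doesn't exist. That's the classic symptom of a PyInstaller app built in windowed mode, where sys.stdout and sys.stderr are None. A common workaround (an assumption on my part; I can't test the packaged build) is to install dummy streams before the downloading imports run:

```python
# Workaround sketch for "AttributeError: 'NoneType' object has no
# attribute 'write'" in windowed PyInstaller builds: tqdm calls
# .write() on sys.stderr, which is None when there is no console.
# Giving Python dummy in-memory streams avoids the crash.

import io
import sys

if sys.stdout is None:
    sys.stdout = io.StringIO()  # swallow progress output instead of crashing
if sys.stderr is None:
    sys.stderr = io.StringIO()

# ...only after this should the huggingface_hub / wavmark imports run.
```

If that's the cause, the dev could fix it upstream by adding this shim at the top of gaisha_ui.py, or by disabling the progress bar during downloads.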
 

STOPS

Newbie
Feb 13, 2019
46
111
Nice move, fellas! :)
But I still do not see a reason for 8 GB of VRAM - especially considering "My current features are just the beginning!"
Still wish you luck.
You're essentially asking "Why does it take a calculator to solve 103957968 divided by 23985789135687?". The answer is that technically a calculator isn't required, but it takes a very long time to work that out with the old methods of pen, paper, and slide rule. As of right now, using a computer to "imagine" images and generate complex language responses requires a massive amount of processing power. The VRAM isn't being used to render anything per se; it is being used to hold the incredibly dense computations the AI model has trained itself to do. With insufficient VRAM, the program would take hours or even longer to respond.