> i'm not a coder, but wtf does it need the GPU so much?

That's because these models are what they call LLMs, or large language models. They load entirely into the GPU's memory, and the GPU is extremely good at processing them; newer graphics cards are even designed to do exactly that. CPUs are extremely slow at this, and normal RAM (DDR) is slow too, so the GPU is usually how it's done. GDDR is super fast and sits right around the GPU chip, so even the travel time is tiny, and that low latency is what gets you quick replies in chat or fast image generation in Stable Diffusion.
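To put rough numbers on the bandwidth point, here's a back-of-the-envelope sketch (the bandwidth and model-size figures are ballpark assumptions, not measurements). Generating each token means streaming every weight through the chip, so memory bandwidth caps your speed:

```python
# Rough ceiling on tokens/sec for a memory-bound LLM: every generated
# token has to stream all the model weights through the processor,
# so tokens/sec <= bandwidth / model size.

def max_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on generation speed, ignoring compute and overhead."""
    return bandwidth_gb_s / model_gb

model_gb = 4.0  # e.g. a 7B model quantized to ~4 bits (ballpark)

print(max_tokens_per_sec(model_gb, 50))    # dual-channel DDR4: ~12 tok/s
print(max_tokens_per_sec(model_gb, 1000))  # RTX 4090 GDDR6X: ~250 tok/s
```

That's why the same model feels snappy on a GPU and like molasses on system RAM.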
> It doesn't matter which model I try, whenever I press Start AI it just says 'Waiting....' in yellow forever. Anyone know how to fix that? Thanks in advance.

I think the 'Waiting' just means it's loading the model. Try chatting anyway, or open Task Manager, go to the Performance tab, click on the GPU, and see if the VRAM has filled up; once the model is loaded into VRAM it's usually ready to go.
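If you'd rather watch the load from a script than from Task Manager, something like this works on NVIDIA cards; it just shells out to nvidia-smi, which ships with the driver (a sketch, and AMD cards would need a different tool):

```python
import subprocess, time

# Poll VRAM usage once a second; when the number stops climbing,
# the model has probably finished loading. Ctrl+C to stop.
while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader"],
        text=True,
    )
    print(out.strip())  # e.g. "7431 MiB, 24576 MiB"
    time.sleep(1)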
> How do you enable offline mode? Couldn't find any settings for it in the app or config files, is there something else I need?

Start the game, open Settings, go to the AI tab, set the mode to Local AI, download a model, select that model in Settings, and click Start AI.
> That's because these models are what they call LLMs, or large language models. [...]

Thanks bro, really appreciate a real answer. I half expected to get a lot of trolls, but nobody did! lol
A lot of these uncensored models work just like GPT-4, but without the censorship, so you can ask all kinds of real-world questions and actually get a reply. We might be using them for sexy chat time, but they really are powerful tools; they can code for you too.
If you want to run models for more than just chatting, I'd encourage you to try oobabooga.
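As a taste of what that gets you: if I remember right, recent oobabooga builds expose an OpenAI-compatible API when you start them with the --api flag (port 5000 by default; treat the port and endpoint as assumptions and check its console output), so a minimal local request looks something like:

```python
import json, urllib.request

# Minimal completion request against a local oobabooga instance
# started with the --api flag (OpenAI-compatible endpoint assumed).
req = urllib.request.Request(
    "http://127.0.0.1:5000/v1/completions",
    data=json.dumps({
        "prompt": "Write a haiku about VRAM.",
        "max_tokens": 60,
        "temperature": 0.7,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["text"])
```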
> Try chatting anyway, or open Task Manager, go to the Performance tab [...]

No dice. Sending a message just gives me "LOCAL AI ERROR: Cannot connect to Destination host."
> Darn, does this only use GGUF models? I was curious to see how it would use local models + chat. Will there be any update for safetensors models?
> I guess I'll go back to local Stable Diffusion / ComfyUI.

Well, that's just the images. You can also use Faraday to make sexual RP scenarios. How you write them depends heavily on the model you use, but I put together a more optimized version of W++ that gave somewhat better results than normal W++, Ali:Chat, and the third format that just lists everything. It can't generate images, but when you set a card up right, it can sure as hell do text-based ERP.
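For anyone wondering what W++ even looks like: it's just a pseudo-code way of listing a character's traits so the model picks them up reliably. A minimal example from memory (the name and fields here are made up; it isn't a fixed schema):

```
[character("Aiko")
{
Species("human")
Age("24")
Personality("playful" + "teasing" + "affectionate")
Appearance("short black hair" + "brown eyes")
Likes("rainy days" + "cheap coffee")
}]
```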
> No dice. Sending a message just gives me "LOCAL AI ERROR: Cannot connect to Destination host."

I have the same problem. I checked Task Manager and saw that when I start the GPT model, it launches three more processes: a Kobold-something, something in a cmd window, and one more. When those processes start, the model loads after a couple of minutes and the word "Active" appears instead of "Waiting". But sometimes those three processes don't start at all when I try to launch the model, and then the wait can go on forever (I checked; nothing had loaded after eight hours of sleep). I haven't yet figured out how or why these processes stop opening, but I decided to share my observations; maybe someone more experienced in these matters can suggest a solution. (Usually, if I unzip the game into a new folder again, I can connect the model once, but after a reboot everything breaks again. I will continue my research.) P.S. I don't speak English well, but I hope you understand me.
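If you want to test whether that Kobold process actually came up, you can probe the port it listens on. A quick sketch; 5001 is KoboldCpp's usual default, but the game may use its own port, so check the console window it opens:

```python
import socket

# KoboldCpp's usual default port is 5001; the game may override this.
HOST, PORT = "127.0.0.1", 5001

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"Backend is listening on {HOST}:{PORT} - model should be usable.")
except OSError:
    print(f"Nothing on {HOST}:{PORT} - the Kobold process probably never started.")
```

If nothing is listening, the "Cannot connect to Destination host" error makes sense: the game's UI is up but the backend it talks to never launched.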