
Are porn games collapsing down to ruin?

tanstaafl

Engaged Member
Oct 29, 2018
2,263
2,856
311
I mean blaming capitalism instead of AI and NTR is a new take on the downfall of porn games. :cool:
I could write an entire thesis on the fact that AI is here, it's staying, it's going to cause a paradigm shift in society, government, and business and there would still be people who haven't pulled their heads out of the sand and worked on ways to adapt instead of defeat it.
 

MissCougar

Active Member
Feb 20, 2025
865
3,665
262
I could write an entire thesis on the fact that AI is here, it's staying, it's going to cause a paradigm shift in society, government, and business and there would still be people who haven't pulled their heads out of the sand and worked on ways to adapt instead of defeat it.
I agree. I'm on the business side, and it has been abused by everyone everywhere. And it is still in its infancy. It will be amazing at the rate it's going, and I think there are only a couple of factors that could stop it.

1: the cost to use it skyrockets beyond what people can afford (enterprise licensing is already stupid expensive, or rate limited)

2: it does something bad to powerful people who are dead serious, and it gets unplugged/limited/banned
 

tanstaafl

Engaged Member
Oct 29, 2018
2,263
2,856
311
I agree. I'm on the business side, and it has been abused by everyone everywhere. And it is still in its infancy. It will be amazing at the rate it's going, and I think there are only a couple of factors that could stop it.

1: the cost to use it skyrockets beyond what people can afford (enterprise licensing is already stupid expensive, or rate limited)

2: it does something bad to powerful people who are dead serious, and it gets unplugged/limited/banned
I'm on the business side as well. In fact, my company just held a seminar on the implementation of RAG (retrieval augmented generation) with LLMs, a method of alleviating hallucination in LLMs by allowing external calls to data and APIs. My company (which creates and maintains an LMS) has implemented Kendra LLM in a big way and it is NOT cheap.
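The RAG pattern described above can be sketched in a few lines. This is a toy illustration, not anyone's production system: the keyword retriever stands in for a real search index (like Kendra) or vector store, and `call_llm` is a hypothetical placeholder for an actual model API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The retriever and call_llm() are toy stand-ins, not a real
# search index or model API.

DOCS = {
    "refunds": "Refunds are processed within 5 business days.",
    "shipping": "Orders ship from the EU warehouse on Mondays.",
}

def retrieve(question: str) -> list[str]:
    """Toy keyword retriever: return every doc whose key appears in the question."""
    words = set(question.lower().split())
    return [text for key, text in DOCS.items() if key in words]

def call_llm(prompt: str) -> str:
    """Hypothetical model call; a real system would hit an LLM API here."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # Grounding the prompt in retrieved text is what curbs hallucination:
    # the model is told to answer from the context, not from memory.
    prompt = (
        "Answer ONLY from the context below. If the context does not "
        f"contain the answer, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

print(answer("How long do refunds take?"))
```

The key point is that the model's input is assembled from retrieved, verifiable text at query time, so its answer can be checked against the documents that produced it.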

To address your number 1: it seems there's going to be a bell curve with the cost, like with other top technologies. TVs were cheap, then stupid expensive as they evolved, and now they're stupid cheap again. The same pattern seems to be happening with LLMs, due to several factors, most importantly competition.

As for number 2, that's not going to happen. Simple as that.
 

Insomnimaniac Games

Degenerate Handholder
Game Developer
May 25, 2017
5,860
11,058
921
I could write an entire thesis on the fact that AI is here, it's staying, it's going to cause a paradigm shift in society
I very much hope I'm wrong, but I see this going the way of the calculator. A bunch of people who can't do the most basic of tasks on their own. And when you can't do the basics, you can't extrapolate to the advanced.
Note that these are my beliefs. They're not going to change. I'm not trying to start an argument, just get my thoughts out there.
 

MissCougar

Active Member
Feb 20, 2025
865
3,665
262
I'm on the business side as well. In fact, my company just held a seminar on the implementation of RAG (retrieval augmented generation) with LLMs, a method of alleviating hallucination in LLMs by allowing external calls to data and APIs. My company (which creates and maintains an LMS) has implemented Kendra LLM in a big way and it is NOT cheap.

To address your number 1: it seems there's going to be a bell curve with the cost, like with other top technologies. TVs were cheap, then stupid expensive as they evolved, and now they're stupid cheap again. The same pattern seems to be happening with LLMs, due to several factors, most importantly competition.

As for number 2, that's not going to happen. Simple as that.
Sounds like we do some similar stuff. :coffee:

With AI, I'm not sure the cost will go down until the hardware to run it also does, at mass scale. Companies aim to integrate it into everything, everywhere, and the hallucination problem rears its head in interesting situations when the people using it don't do their due diligence as humans to verify things before copying and pasting them as their own work.
 

tanstaafl

Engaged Member
Oct 29, 2018
2,263
2,856
311
I very much hope I'm wrong, but I see this going the way of the calculator. A bunch of people who can't do the most basic of tasks on their own. And when you can't do the basics, you can't extrapolate to the advanced.
Note that these are my beliefs. They're not going to change. I'm not trying to start an argument, just get my thoughts out there.
You're looking at it wrong, as if AI were a product for consumption. It's not. AI is likely used in every site and every business you use invisibly. People don't "buy" AI in general. Are there products that people can use? Sure, comfyui, stable-diffusion, even DALL-E type generation, but that is a teeny tiny portion of AI. Your wishful thinking is just wishful thinking.
with AI I'm not sure if the cost will go down until the hardware to run it also does on a mass scale.
The hardware will get better and the hardware requirements will lessen at a steady rate, until they meet somewhere around the middle (possibly above or below it).
companies aim to integrate it into everything everywhere and the hallucination problem rears its head in interesting situations when people using it aren't doing their good due diligence as a human to verify stuff before they copy and paste it as their own work
Hallucination occurs only in LLMs because they don't natively have access to fact-checking data and they are created to predict answers. The retrieval augmented generation I mentioned, along with parameters, is being used to help alleviate this, though it has a ways to go before it can be considered handled. You can even ask ChatGPT about its parameters in this regard, and it will tell you that one of its prompt-based parameters (i.e. inserted into every prompt) is "If you don't know, don't say you know".
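A prompt-based parameter like that amounts to a guardrail instruction prepended to every request. A minimal sketch, using the message-role convention common to chat-completion APIs (the roles and dict shape here are an assumption for illustration, not any specific vendor's contract):

```python
# Sketch: a guardrail instruction inserted into every call, in the
# style of common chat-completion APIs. The role names and message
# shape are assumptions for illustration, not a specific vendor's API.

GUARDRAIL = "If you don't know, don't say you know."

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the guardrail as a system message on every request."""
    return [
        {"role": "system", "content": GUARDRAIL},  # inserted into every prompt
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Who won the 2031 World Cup?")
```

Because the guardrail travels with every request, the user never has to remember to include it, which is exactly the point of baking it in at the platform level.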

 

Insomnimaniac Games

Degenerate Handholder
Game Developer
May 25, 2017
5,860
11,058
921
You're looking at it wrong, as if AI were a product for consumption. It's not. AI is likely used in every site and every business you use invisibly. People don't "buy" AI in general. Are there products that people can use? Sure, comfyui, stable-diffusion, even DALL-E type generation, but that is a teeny tiny portion of AI. Your wishful thinking is just wishful thinking.
I'm a bit confused here. I thought I did the opposite of wishful thinking. :unsure:

Edit: I have not slept more than a handful of hours for about three days now, so this is probably making complete sense to everyone but me. :ROFLMAO:
 

MissCougar

Active Member
Feb 20, 2025
865
3,665
262
You're looking at it wrong, as if AI were a product for consumption. It's not. AI is likely used in every site and every business you use invisibly. People don't "buy" AI in general. Are there products that people can use? Sure, comfyui, stable-diffusion, even DALL-E type generation, but that is a teeny tiny portion of AI. Your wishful thinking is just wishful thinking.

The hardware will get better and the hardware requirements will lessen at a steady rate, until they meet somewhere around the middle (possibly above or below it).

Hallucination occurs only in LLMs because they don't natively have access to fact-checking data and they are created to predict answers. The retrieval augmented generation I mentioned, along with parameters, is being used to alleviate this. You can even ask ChatGPT about its parameters in this regard, and it will tell you that one of its prompt-based parameters (i.e. inserted into every prompt) is "If you don't know, don't say you know".

I guess it will depend on your model and training parameters, and whether you have some in-house cooked system. Sounds like you may.

But hallucination can manifest as incorrect numbers, jumbled results that aren't quite summarized properly, and even social constructs coded into the system to give it some sort of bias or lean for or against certain qualifiers.

This can be more benign, like when you plug it into a ticketing system and it scans past tickets to see if your specific problems were solved previously and gives you those options. But it can get worse when people plug it into an HRIS or BI data and it starts misfiring ever so slightly, and people make bad decisions due to faulty data points they were led to believe were accurate.

Lots of stuff. I'm a proponent of AI, but I've also been called on to help people clean up messes they made when using it. :coffee:
 

tanstaafl

Engaged Member
Oct 29, 2018
2,263
2,856
311
I guess it will depend on your model and training parameters, and whether you have some in-house cooked system. Sounds like you may.
I have a local LLM, pay for subscriptions to two LLMs (Kendra, though company pays this one, and OpenAI), have several versions of stable diffusion type generation, etc. I have a lot.
But hallucination can manifest as incorrect numbers, jumbled results that aren't quite summarized properly, and even social constructs coded into the system to give it some sort of bias or lean for or against certain qualifiers.
Hallucination only happens in LLMs. Local small installations don't hallucinate. LLMs (large language models) don't have data to fact check with, which leads to the hallucination. The predictive nature of LLMs mixed with the way computers do math (you can look up the computer math courses online if you need clarification on that) lead to the jumbled numbers. AI just SUCKS at high level math, the last time I checked it was at less than 60% accuracy, lol.
This can be more benign, like when you plug it into a ticketing system and it scans past tickets to see if your specific problems were solved previously and gives you those options. But it can get worse when people plug it into an HRIS or BI data and it starts misfiring ever so slightly, and people make bad decisions due to faulty data points they were led to believe were accurate.
It's always benign due to the way computers work. There's no such thing as malignant hallucination. Wrong is just that, wrong.
 
Last edited:

tanstaafl

Engaged Member
Oct 29, 2018
2,263
2,856
311
AI just SUCKS at high level math, the last time I checked it was at less than 60% accuracy, lol.
To expand on this: people who use AI to do math are idiots in the first place. Computers are already great at math without using predictive, experimental thought machines.
 

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
12,901
21,379
1,026
I could write an entire thesis on the fact that AI is here, it's staying, it's going to cause a paradigm shift in society, government, and business and there would still be people who haven't pulled their heads out of the sand and worked on ways to adapt instead of defeat it.
What bothers me is that I can no longer hide behind the idea that it won't happen during my lifetime... I used to think I'd be retired before it became a significant threat to my job, yet now I'm not totally sure...
 

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
12,901
21,379
1,026
You're looking at it wrong, as if AI were a product for consumption. It's not.
Yet they are that too, and always will be.

I agree with you that it's not their main use, but it is one of them. People have changed, and the way they want to deal with technology has evolved because of it.
Portable phones were created so you could move freely around your home. They quickly offered the possibility of taking them everywhere you go. And now you have people seriously asking for an app that lets them speak orally to the person they want to talk to, instead of sending a text message... with their smartfuckingphone. They don't even remember the original purpose of the tool they use.
The same can be said of the internet. Who would have thought, just 20 years ago, that it would become the center of your life, from TV to movies, by way of radio stations and gaming, giving you the weather while being your favorite mall and bringing your favorite food to your home? You can live in an empty room that you never leave and, as long as you have internet, lead almost exactly the same life as if you were in a fully equipped house; this can even include your job, if it can be done remotely.

The instant some AIs were used as consumption products, to test and promote their capabilities, it was too late. People will want them not just to stay that way, but to become more and more that way, in parallel with whatever other uses, far more serious and important, that they'll never really care about.
Honestly, a Pandora's box has been opened here, and I'm not really sure what's still left inside it.


Hallucination occurs only in LLMs because they don't natively have access to fact checking data and they are created to predict answers.
On this I agree; and there's the AI that is slowly turning "woke" (as the word is defined by Elmo and MAGAs) because it checks facts and learns from them. And we are talking here about an AI that, as it admits itself, was designed with an implied initial bias...
 