
We are getting a little too many AI games

Maxell5

New Member
Jun 2, 2023
14
8
I think the problem with AI is that, because of its nature as a tool, you can do anything with it. But creatively speaking, there is always a small group that creates new things and presents them as novelties. A large majority follows.
Even with the tool's infinite potential, the result is confined to human imagination (which requires effort to exercise).

Because of this, some people will use it in a certain novel way, a lot of other people will gravitate to using it that way, this will create a trend, and the problem is AI now looks generic. Then the process will repeat itself and another "style" will become generic. And then it repeats again.
So even if someone uses it to make something novel that looks interesting, that moment will be fleeting and it will start feeling "AI" afterwards. Because AI is becoming dangerously synonymous with generic, if it's not already.

I'm all for everyone having more and better tools to work with; that kind of technological progress needs to happen eventually.
But due to the nature of creative work, where part of its core appeal is "being distinct from others", I think AI Art is transforming from being Artificial Art to being Collective Art: the boring base everyone has access to but no one really "values".
 
Last edited:
  • Like
Reactions: Infamyxxx

XforU

Of Horingar
Game Developer
Nov 2, 2017
293
351
Because saying "it's more than Harem games" would be false.
It is more dramatic to state something false. I agree.
There are 10,000 AI games posted in the last hour. The site won't survive this sickening amount of AI slop that gets posted. The end is near!... (I'll make a risky joke, I pray the mods have mercy on my soul...) 13% of the games cause 50% of the complaints on the forum.

(jokes aside, I thought I saw fewer harem games; at least I was right about the gay and 3D games and whatever else I mentioned)
 
  • Crown
Reactions: Count Morado

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
11,673
17,957
As you can see - 13.06% of games released since 03 Feb 2023 have the AI CG tag.
And, as we see - 12.55% of games released/updated in the last 365 days have the AI CG tag.
(Again, these are of the games that are tagged with at least one of the following: AI CG, 2DCG, 3DCG)

Looks pretty steady so far. Time will tell.
Note that you don't count the "text based" and "real porn" tags; that would lower those percentages a bit.


ALL AI CG: 777 results in the last 365 days
Well, the holy trinity can't be a bad omen.
 
  • Like
Reactions: Count Morado

RuneF

New Member
Apr 16, 2025
5
1
AI Art is transforming from being Artificial Art to being Collective Art: the boring base everyone has access to but no one really "values".
Well said.
Tough to figure out what to do about it. Maybe we need an AI agent to play all those AI games for us.
 

c3p0

Conversation Conqueror
Respected User
Nov 20, 2017
6,345
14,982
What I find more amusing: people saying that the AI games lack quality while complaining about the "flood" of them. AIs need a lot of data to be trained, and the difficulty with all that data is its quality. So the more bad-quality output AIs generate, and as long as it dominates the overall quantity, the more the overall quality of AIs will be reduced over time.
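To picture it, here's a minimal sketch (toy numbers, nothing to do with any real training pipeline) of a "model" that each generation learns only from the previous generation's output, with a mild preference for its most typical samples:

```python
# Minimal "model eats itself" sketch: every generation refits on the
# previous generation's synthetic output. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                    # the original "human" distribution

for gen in range(1, 31):
    samples = rng.normal(mu, sigma, 500)          # the model's output
    keep = np.abs(samples - mu) < 1.28 * sigma    # favor "typical" content (~80%)
    mu, sigma = samples[keep].mean(), samples[keep].std()
    if gen % 10 == 0:
        print(f"generation {gen}: sigma = {sigma:.2e}")

# sigma shrinks by roughly a third every generation: the variety in the
# training pool is what dies first.
```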
 

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
11,673
17,957
So the more bad-quality output AIs generate, and as long as it dominates the overall quantity, the more the overall quality of AIs will be reduced over time.
It has already been theorized (but I can't find it again because I suck) that, between the measures put in place by some platforms, the law here and there, and the amount of AI-generated content available on social media, AIs are already trained on more than 40% AI-generated content. And obviously, the more time passes, the more their native biases will just be reinforced through this process.
 

Maxell5

New Member
Jun 2, 2023
14
8
It has already been theorized (but I can't find it again because I suck) that, between the measures put in place by some platforms, the law here and there, and the amount of AI-generated content available on social media, AIs are already trained on more than 40% AI-generated content. And obviously, the more time passes, the more their native biases will just be reinforced through this process.
Wow, I didn't know it was such a large percentage already. Do you all think that AI will keep improving the same styles until they are "perfected", and then keep regurgitating the same thing over and over? (a Perfect Cell version of what is already happening? like an unending cycle of that?)
Is AI going to "eat itself"?
 
Last edited:

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
11,673
17,957
Wow, I didn't know it was such a large percentage already.
I too was surprised, but the data used and the projection felt realistic.
That said, keep in mind that it's nothing more than an estimation. AI "owners" don't really like to communicate about their training methods and data sources, so researchers can only make projections based on the average rate of AI-generated content present on the most easily available sites.

On 10 February, , a site that focuses on analyzing websites dedicated to news, counted 1,254 news platforms with nothing but AI-generated news, just for the French language [ ]. I haven't found the number for the English language, but I have no doubt that it's way higher. And that's just for news.
If you take into account that the more official sites (the New York Times and co.) are protected against DDoS and less easily available to bots, while those sites do their best to have high visibility since they are easy cash grabs, how many AIs use them when you ask a question related to the latest news? And, more globally, how many use them to gather data regarding news?
It's late, so I won't search for actual reliable numbers, but how many hundreds (thousands?) of sites already do the same for images or fictional stories? If you configure everything correctly you can easily generate a dozen new pieces of content each hour, without having to pay anyone, and you just have to collect the money from the ads you put on your site.

As humans we tend to underestimate their numbers, because we have our biases. We prefer that one news magazine and will search for our information there. We already have our favorite site for that kind of image, this kind of fiction, and so on. So we don't really go looking for new sites and don't see how far AI has invaded the web. Plus, if we are here, however tech-savvy we are or aren't, we aren't totally new to the Internet, so we instinctively filter where we go.
But yeah, 40% seems a reliable estimation, and obviously it will only grow.


Is AI going to "eat itself"?
Well, there's a clear lack of consensus about this:
[links to four papers]

And those are just four of the many papers on this particular question.
Personally I think there's just no clear answer yet, because everything depends on the decisions that will be made in the future. Right now, people are just playing with AIs. And when I say "people" I don't limit that to their users, but include those who develop them.
It's obviously serious work, and they take it really seriously, but the technology is so complex that, in regard to its long past (more than 60 years of research), it's only yesterday that it became reliable enough. So even those who work on it are playing. They try a way to code them, then wait, look at the result, and try to change this or that. And they do the same for every aspect, including the methods used to train them.

So, the answer to your question is "yes" if AIs continue to be force-fed more than trained, since they'll be stuffed with tons and tons of data that will more and more often be AI-generated, until "human" content becomes marginal. But if the training method changes, moving to a more focused and filtered data feed, the answer is "no".
Alas, we are still far from that second case. It's a market that, globally, is worth hundreds of billions. Everyone wants to be the first to make a significant breakthrough, and force-feeding is the only way to have a chance to actually be first. If you take the time to perfectly train your AI, then when it's ready you'll have something really reliable, but two or more generations late.
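And just to illustrate the difference between the two cases, a companion to the collapse sketch above (same toy setup, purely illustrative numbers): anchor every generation to a fixed, curated "human" corpus and the collapse mostly stops.

```python
# Companion sketch: a "focused and filtered feed" that reuses a fixed
# human corpus every generation, even with the same bias toward typical
# synthetic samples.
import numpy as np

rng = np.random.default_rng(1)
human = rng.normal(0.0, 1.0, 500)       # curated human data, reused every time
mu, sigma = 0.0, 1.0

for gen in range(1, 31):
    synthetic = rng.normal(mu, sigma, 500)
    keep = np.abs(synthetic - mu) < 1.28 * sigma
    pool = np.concatenate([human, synthetic[keep]])   # filtered feed, not pure output
    mu, sigma = pool.mean(), pool.std()

print(f"after 30 generations: sigma = {sigma:.3f}")   # stabilizes around 0.8
# instead of collapsing to 0, because the human anchor never disappears.
```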
 

c3p0

Conversation Conqueror
Respected User
Nov 20, 2017
6,345
14,982
and in regard to its long past (more than 60 years of research)
Make that at least over 100 years:

it's only yesterday that it became reliable enough.
Not really. Some aspects of it (e.g. neural networks) have been used for decades already. You can train them and use them in feedback control systems, where you don't know why something is good/bad, but you have your good/bad samples.

I was once told a story; I believe it was about motors or gearboxes. One employee could hear the difference between good and bad samples, yet couldn't say precisely what the difference was. They recorded the acoustics of those good/bad samples and trained the net. Afterwards the trained net could also determine which were good and which were bad and, obviously, the employee wasn't the only one anymore who could do that.
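Something like that story can be sketched in a few lines (synthetic signals and invented numbers, not the actual study): label recordings good/bad and train a small net on their magnitude spectra.

```python
# Hedged sketch of the gearbox story: defective units carry an extra
# high-frequency whine the employee could hear; the net learns it from
# labeled spectra.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
RATE = 8000                                    # 1-second clips at 8 kHz

def spectrum(defective):
    t = np.arange(RATE) / RATE
    s = np.sin(2 * np.pi * 120 * t)            # the normal gear-mesh tone
    if defective:
        s += 0.3 * np.sin(2 * np.pi * 1790 * t)  # the faint whine they heard
    s += 0.5 * rng.normal(size=RATE)           # shop-floor noise
    return np.abs(np.fft.rfft(s)) / RATE       # magnitude spectrum as features

X = np.array([spectrum(defective=i % 2 == 1) for i in range(200)])
y = np.arange(200) % 2                         # 0 = good, 1 = bad

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X[:160], y[:160])
print("held-out accuracy:", net.score(X[160:], y[160:]))   # ~1.0
```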
 
Last edited:
  • Like
Reactions: DuniX

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
11,673
17,957
Make that at least over 100 years:
You know that if you make me feel old, you also make yourself feel old, right?


Not really. Some aspects of it (e.g. neural networks) have been used for decades already. You can train them and use them in feedback control systems, where you don't know why something is good/bad, but you have your good/bad samples.
But it was "some aspects", and they were still closer to an advanced decision tree than to AIs.

One could say that beating Kasparov in 1997 proves that AIs have been reliable for (I don't want to count) years. But that was just basic statistical computation pushed to the extreme. Ask it to play another game, or change the rules, and Deep Blue wouldn't know what to do, while an AI would be able to adapt. And from my point of view, it's precisely this adaptability that marks the start of "real" (quotation marks because everything is relative) AIs, even dedicated ones.
I can't find it again, but one of the first attempts at a medical AI was a basic analysis of X-ray scans. It worked correctly... as long as the patient didn't have a particularity. The AI was able to work in a theoretical context (this is how the human body is expected to be), but not to actually interpret what it was seeing. Faced with the X-ray of a hand with a broken finger and a missing one, it was so confused by the missing finger that it said everything was fine. From memory, all steps of the analysis were skipped because "it's not valid data", which led to an empty list, which then led to the default answer: "it's good, man".
Whereas an AI is advanced enough to act with more naivety. Something along the lines of: "well, it's not a human since there are only four fingers, but I guess a broken bone looks the same whatever the species, and I see one".


I was once told a story; I believe it was about motors or gearboxes. One employee could hear the difference between good and bad samples, yet couldn't say precisely what the difference was. They recorded the acoustics of those good/bad samples and trained the net. Afterwards the trained net could also determine which were good and which were bad and, obviously, the employee wasn't the only one anymore who could do that.
But change something in the gearbox and your neural network doesn't work anymore. Whereas an AI would only need to hear the good one long enough to adapt and once again tell the difference.
If it has been correctly coded, and initially trained with enough explained bad examples (this sound means this, that sound means that), it will even be able to tell you what the problem is even if the sound for it is different. This is because it doesn't rely on the sound itself, but on the difference from the sound it should make. A higher sound means this, a lower sound means that, a "click" means it's that thing everyone wants, and so on. And this stays true.
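A sketch of that "difference from the reference" idea, with the same kind of toy signals as above (all values invented): compare a unit's spectrum to a known-good baseline and report which band moved.

```python
# Diagnose by deviation from the reference rather than by the raw sound.
import numpy as np

rng = np.random.default_rng(3)
RATE = 8000
freqs = np.fft.rfftfreq(RATE, d=1 / RATE)

def magnitude(extra_hz=None):
    t = np.arange(RATE) / RATE
    s = np.sin(2 * np.pi * 120 * t)                # what the unit should sound like
    if extra_hz:
        s += 0.5 * np.sin(2 * np.pi * extra_hz * t)
    s += 0.2 * rng.normal(size=RATE)
    return np.abs(np.fft.rfft(s))

baseline = magnitude()                              # known-good reference unit
suspect = magnitude(extra_hz=1790)                  # unit under test
band = freqs[np.argmax(suspect - baseline)]
print(f"largest deviation near {band:.0f} Hz")      # -> ~1790 Hz: a new whine
```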

So, sorry, but I disagree with you. While AIs are neural networks, back in the day neural networks weren't AIs. And you'll have to deal with it, because your very first sentence made me grumpy :cautious:
 
Dec 7, 2019
239
211
Did the maths recently on the AI CG tag in another thread (didn't realise you'd done it here; could have saved myself some time).

And AI CG is only 12.55% of all games released/updated in the last 365 days that are tagged with at least one of the CG tags. Again, supporting your final clause - "but it's not as bad as the weekly post about AI makes it out to be."
From what I could work out, the volume of games being tagged AI CG basically exploded about a year ago (circa 7% growth in 180 days), grew consistently 3-2% each bracket after that, but recently (last 30 days) seems to be slowing, with its first dip in growth. Hopefully this trend continues, which would mean we could expect a plateau soon on new AI CG content before it breaks 20% of total new content/updates (currently sitting around 14.4%).
 
  • Yay, update!
Reactions: Count Morado

c3p0

Conversation Conqueror
Respected User
Nov 20, 2017
6,345
14,982
You know that if you make me feel old, you also make yourself feel old, right?
And? I see that all day if I count the hairs that aren't there anymore.:cautious:
One could say that beating Kasparov in 1997 proves that AIs have been reliable for (I don't want to count) years. But that was just basic statistical computation pushed to the extreme. Ask it to play another game, or change the rules, and Deep Blue wouldn't know what to do, while an AI would be able to adapt. And from my point of view, it's precisely this adaptability that marks the start of "real" (quotation marks because everything is relative) AIs, even dedicated ones.
Do we have a (general) AI already that can do that?
I can't find it again, but one of the first attempts at a medical AI was a basic analysis of X-ray scans. It worked correctly... as long as the patient didn't have a particularity. The AI was able to work in a theoretical context (this is how the human body is expected to be), but not to actually interpret what it was seeing. Faced with the X-ray of a hand with a broken finger and a missing one, it was so confused by the missing finger that it said everything was fine. From memory, all steps of the analysis were skipped because "it's not valid data", which led to an empty list, which then led to the default answer: "it's good, man".
And there was one AI that was trained for cancer detection based on medical images. It had a 100% detection rate for 90° rulers, because that was what it had been trained on: every medical image with a confirmed positive result for cancer had a little ruler in one corner of the image.
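That failure mode is easy to reproduce on toy data (synthetic 16x16 "scans", invented numbers): make the label perfectly predictable from a corner artifact and the model never needs the actual lesion.

```python
# Sketch of the ruler shortcut: train where every positive image carries
# the artifact, then deploy on images without it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def scan(cancer, ruler):
    img = rng.normal(0.0, 1.0, (16, 16))
    if cancer:
        img[8, 8] += 0.5           # the real, faint signal
    if ruler:
        img[0, :4] = 5.0           # the shortcut: a ruler in the corner
    return img.ravel()

# Training data: every positive scan also carries the ruler.
X_tr = np.array([scan(c, ruler=bool(c)) for c in [0] * 300 + [1] * 300])
y_tr = np.array([0] * 300 + [1] * 300)
# Deployment data: no rulers anywhere.
X_te = np.array([scan(c, ruler=False) for c in [0] * 200 + [1] * 200])
y_te = np.array([0] * 200 + [1] * 200)

model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("training accuracy:", model.score(X_tr, y_tr))    # ~1.00, all ruler
print("deployment accuracy:", model.score(X_te, y_te))  # falls toward chance
```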
But change something in the gearbox and your neural network doesn't work anymore. Whereas an AI would only need to hear the good one long enough to adapt and once again tell the difference.
If it has been correctly coded, and initially trained with enough explained bad examples (this sound means this, that sound means that), it will even be able to tell you what the problem is even if the sound for it is different. This is because it doesn't rely on the sound itself, but on the difference from the sound it should make. A higher sound means this, a lower sound means that, a "click" means it's that thing everyone wants, and so on. And this stays true.
We would need a field study for that. As motors/gearboxes are similar from one to another, the neural net could still be effective.
So, sorry, but I disagree with you. While AIs are neural networks, back in the day neural networks weren't AIs. And you'll have to deal with it, because your very first sentence made me grumpy :cautious:
The term is artificial intelligence. Yes, we didn't have the "ChatGPT" type of AI, but we already had other types. Every program that can learn and use the things it has learned is an AI, like the Deep Blue example you wrote about.
We are also not at the point of having a general AI that is worth its money, because then it simply wouldn't draw human hands with 6 fingers or invent data for an answer and, most importantly in my eyes, a general AI could learn autonomously. All of the AIs I know are trained by humans, and as long as that is needed they aren't a general AI - in my eyes.

From what I could work out, the volume of games being tagged AI CG basically exploded about a year ago (circa 7% growth in 180 days), grew consistently 3-2% each bracket after that, but recently (last 30 days) seems to be slowing, with its first dip in growth. Hopefully this trend continues, which would mean we could expect a plateau soon on new AI CG content before it breaks 20% of total new content/updates (currently sitting around 14.4%).
The data for that isn't old enough. The only thing we know is that in the >1 year set the use of AI was much lower (because the tools "didn't" exist). For more, we would need more data.
 
  • Like
Reactions: DuniX
Dec 7, 2019
239
211
The data for that isn't old enough. The only thing we know is that in the >1 year set the use of AI was much lower (because the tools "didn't" exist). For more, we would need more data.
I basically worked back the accumulation for the brackets we can get access to, and then broke down the total percentage with the AI CG tag over the year by the data groups available.

Data sets (in days, newest to oldest) and the AI CG tag share within each date set (percentage rounded to .00):

<7 days: 14.94%
7-14 days: 12.82%
14-30 days: 14.46%
30-90 days: 13.25%
90-180 days: 10.21%
180-365 days: 8.55%
>365 days: 1.69%

It started high at introduction but seems to be slowing. I used percentages to work with the variable datasets rather than forcing them into set time brackets (I could have, but that would obfuscate the growth trend I wanted to calculate).
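For anyone who wants to redo it, the arithmetic is just differencing the cumulative counts from the tag search; the counts below are placeholders, not the real ones.

```python
# Sketch of the bracket maths with made-up cumulative counts; the real
# numbers come from the site's tag search filtered by "released in last N days".
ai_cg = {7: 52, 14: 101, 30: 214, 90: 520, 180: 890, 365: 1260}     # cumulative
any_cg = {7: 348, 14: 730, 30: 1510, 90: 3890, 180: 7510, 365: 11950}

prev_ai = prev_all = 0
for days in sorted(ai_cg):
    bracket_ai = ai_cg[days] - prev_ai          # games in only this bracket
    bracket_all = any_cg[days] - prev_all
    print(f"{days:>3}-day bracket: {100 * bracket_ai / bracket_all:.2f}% AI CG")
    prev_ai, prev_all = ai_cg[days], any_cg[days]
```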
 
Last edited:

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
11,673
17,957
And? I see that all day if I count the hairs that aren't there anymore.:cautious:
Well, I have no problems on that side :cool:
I know, it's petty :p


Do we have a (general) AI already that can do that?
Not to my knowledge, but it's mostly because no one made them read the rules.

The instant AlphaGo won for the first time, we came really close to this. To be fair, it was a purely dedicated AI. But unlike Deep Blue it wasn't just storing probabilities; it learned by playing. And apparently now it can win not only at Go, but also at chess and shogi.
Its code is probably still focused on strategy games, but I'm almost sure that it just needs to be taught the rules, to have some human opponents at first, then enough time to play against itself, before it can master another strategy game. And most games have a bit of strategy in them.
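The principle fits in a few lines at toy scale (tabular values and tic-tac-toe, nothing like AlphaZero's networks and tree search, and only my guess at the shape of it): the program improves purely by playing against itself.

```python
# Self-play sketch: a value table for tic-tac-toe, improved only by the
# program playing itself. Hyperparameters are arbitrary.
import random

random.seed(0)
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
V = {}                          # board string -> estimated outcome for X

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def after(b, i, p):
    nb = b[:]
    nb[i] = p
    return "".join(nb)

def play(eps):
    b, player, seen = [" "] * 9, "X", []
    while winner(b) is None and " " in b:
        opts = [i for i, c in enumerate(b) if c == " "]
        if random.random() < eps:                   # explore sometimes
            mv = random.choice(opts)
        else:                                       # else pick the best known move
            pick = max if player == "X" else min
            mv = pick(opts, key=lambda i: V.get(after(b, i, player), 0.0))
        b[mv] = player
        seen.append("".join(b))
        player = "O" if player == "X" else "X"
    w = winner(b)
    return seen, 1 if w == "X" else -1 if w == "O" else 0

for _ in range(20000):              # the AI is its own opponent
    seen, z = play(eps=0.2)
    for s in seen:                  # nudge every visited state toward the result
        V[s] = V.get(s, 0.0) + 0.3 * (z - V[s])

_, z = play(eps=0.0)                # greedy self-play after training
print({1: "X wins", 0: "draw", -1: "O wins"}[z])    # should be a draw
```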

So, practically, no, we don't yet have a general AI that can do that, but we already have an AI that could.
And, since we're in the 80s, don't forget that computer that could play tic-tac-toe, chess, and total nuclear war... it's late, why are we still awake?


And there was one AI that was trained for cancer detection based on medical images. It had a 100% detection rate for 90° rulers, because that was what it had been trained on: every medical image with a confirmed positive result for cancer had a little ruler in one corner of the image.
I shouldn't laugh, but it's such a good example of the main issue with AIs so far. We train them, but we don't know what they are actually learning.

"A rose is a rose is a rose is a rose", Gertrude Stein haven't said it for this, but it summarize AI learning as it actually works.
[Obviously it's a rough summary]
  1. AI, this is a rose -> Okay, I'll remember this.
  2. What is this? (not a rose) -> A rose? -> No -> Okay, I'll remember this.
  3. What is this? (a rose) -> A rose? -> Yes -> See, I remembered it.
  4. What is this? (not a rose) -> Not a rose? -> Yes -> I feel like I'm progressing.
  5. Okay, now here's a bunch of images. I have other things to do; filter the roses by yourself.
And it's this last part that is crucial, because the same AI guesses whether it's a rose or not and validates whether it guessed right. So if there's a particularity in every image (like a 90° ruler) before this step, it can perfectly well have made a wrong assertion about what a rose is, and it will enforce that error more and more and more.
Which is also the problem with AIs being trained on AI-generated content. As I said previously, it just reinforces their biases. Except that, since it's other AIs that generated that content, it also spreads those biases between all the AIs.
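Step 5 going wrong looks like this on toy data (invented setup, sklearn for convenience): after a tiny hand-labeled seed, the model labels new data itself and retrains on its own guesses, so an early coincidence becomes a permanent rule.

```python
# Self-training sketch: the model guesses AND self-validates, so a
# coincidental rule from the seed set is never corrected.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def truth(X):
    return (X[:, 0] > 0).astype(int)     # ground truth: only feature 0 matters

# Seed set where feature 1 coincidentally tracks the label (the "ruler").
X_seed = rng.normal(0, 1, (20, 2))
X_seed[:, 1] = np.sign(X_seed[:, 0]) + rng.normal(0, 0.1, 20)
y_seed = truth(X_seed)

X, y = X_seed, y_seed
model = LogisticRegression(max_iter=1000).fit(X, y)
for rnd in range(5):
    X_new = rng.normal(0, 1, (200, 2))   # fresh data, no coincidence here
    y_pseudo = model.predict(X_new)      # the AI labels its own training data
    X = np.vstack([X, X_new])
    y = np.concatenate([y, y_pseudo])
    model = LogisticRegression(max_iter=1000).fit(X, y)
    acc = (model.predict(X_new) == truth(X_new)).mean()
    print(f"round {rnd}: accuracy vs reality = {acc:.2f}")

# Accuracy gets stuck well below 100%: the seed's coincidental rule is
# re-confirmed every round instead of being corrected.
```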


We would need a field study for that. As motors/gearboxes are similar from one to another, the neural net could still be effective.
Gearboxes I don't know, but motors are different. A V8 isn't a V6 nor a V12, and they clearly don't make the same sound when correctly calibrated. And even one V8 doesn't necessarily make the same sound as another V8.
I don't have links for that, but believe me, my dad was a motor addict; I can't count the number of car races of all kinds that I had to attend or watch on TV in my youth. At some point I was even able to tell which car was coming just by the sound it made; and I hated that ability.


The term is artificial intelligence.
No, it's Artificial Intelligence, with the same "intelligence" as in CIA; it's only in science fiction that AIs were expected to be smart (while it was surely also in the mind of those working on them that one day...).

The initial goal was to gather information and analyze it. It evolved, and now we have generative AI, but when you look at all the AI attempts until, globally, the '10s (excepting the recreational ones like Deep Blue or AlphaGo), that's clearly what they were doing, even when they were just trained to recognize 90° rulers.
Even something like Siri was not much more. It's just the link between pre-generative AI and generative AI: one that gathers its information from both ends, being told what it has to summarize. With "summarize" read in a loose sense, but "what are the Chinese restaurants close to my position?" is nothing more than gathering information (all the restaurants in an X radius), analyzing it (which ones are Chinese), then presenting a summary (here are the addresses of those restaurants).
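At toy scale the whole pipeline is just this (hypothetical data, obviously not Siri's actual code):

```python
# Gather -> analyze -> summarize, the pre-generative pattern.
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    cuisine: str
    km: float

nearby = [                      # "gather": everything within the radius
    Restaurant("Golden Wok", "chinese", 0.4),
    Restaurant("Trattoria Da Mario", "italian", 0.6),
    Restaurant("Jade Palace", "chinese", 1.1),
]

def answer(query_cuisine: str, radius_km: float) -> str:
    hits = [r for r in nearby                        # "analyze": filter
            if r.cuisine == query_cuisine and r.km <= radius_km]
    return "Nearby: " + ", ".join(r.name for r in hits)   # "summarize"

print(answer("chinese", 2.0))   # -> Nearby: Golden Wok, Jade Palace
```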

I guess this "generative" addition is probably part of why I draw a line between previous AIs and today's AIs. They don't just analyze and summarize; they now are smart.
 

tanstaafl

Well-Known Member
Oct 29, 2018
1,734
2,203
I shouldn't laugh, but it's such a good example of the main issue with AIs so far. We train them, but we don't know what they are actually learning.

"A rose is a rose is a rose is a rose". Gertrude Stein didn't say it about this, but it summarizes AI learning as it actually works.
[Obviously it's a rough summary]
  1. AI, this is a rose -> Okay, I'll remember this.
  2. What is this? (not a rose) -> A rose? -> No -> Okay, I'll remember this.
  3. What is this? (a rose) -> A rose? -> Yes -> See, I remembered it.
  4. What is this? (not a rose) -> Not a rose? -> Yes -> I feel like I'm progressing.
  5. Okay, now here's a bunch of images. I have other things to do; filter the roses by yourself.
And it's this last part that is crucial, because the same AI guesses whether it's a rose or not and validates whether it guessed right. So if there's a particularity in every image (like a 90° ruler) before this step, it can perfectly well have made a wrong assertion about what a rose is, and it will enforce that error more and more and more.
Which is also the problem with AIs being trained on AI-generated content. As I said previously, it just reinforces their biases. Except that, since it's other AIs that generated that content, it also spreads those biases between all the AIs.
I feel like AI has recognition pretty nailed down. Here's an, admittedly anecdotal, example. I used to use OneDrive to store images taken from my camera; I have about 12 years of images stored there. I have two cats that look nearly identical, being almost solid gray; the only difference is that one has a white spot. OneDrive recently sorted them and created an album for each of them... and named them correctly, which is a bit creepy if you ask me, since I don't remember ever telling OneDrive my cats' names nor naming any of the images.
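My guess at how that sorting works, as a sketch (hypothetical pipeline, certainly not Microsoft's actual one): embed each photo with a pretrained vision model, cluster the embeddings, one album per cluster.

```python
# Stand-ins for real image embeddings: two cats whose embeddings differ
# slightly along a "white spot" direction. All vectors are fabricated.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)

cat_a = rng.normal(0.0, 0.1, (40, 8)) + np.eye(8)[0]   # spotless gray cat
cat_b = rng.normal(0.0, 0.1, (40, 8)) + np.eye(8)[1]   # gray cat, white spot

X = np.vstack([cat_a, cat_b])
albums = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("photos per album:", np.bincount(albums))         # -> [40 40]
```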
 

c3p0

Conversation Conqueror
Respected User
Nov 20, 2017
6,345
14,982
Well, I have no problems on that side :cool:
:mad:
And, since we're in the 80s, don't forget that computer that could play tic-tac-toe, chess, and total nuclear war... it's late, why are we still awake?
Deep Thought? Oh, you mean this buddy:
[WarGames GIF]
Gearboxes I don't know, but motors are different. A V8 isn't a V6 nor a V12, and they clearly don't make the same sound when correctly calibrated. And even one V8 doesn't necessarily make the same sound as another V8.
My bad, from my memory it was an electric motor. Although I think it was never actually said that they were, I've stored it as such in my memories.;)
I guess this "generative" addition is probably part of why I draw a line between previous AIs and today's AIs. They don't just analyze and summarize; they now are smart.
And we can discuss how smart they really are.
Do they know what Chinese restaurants really are? I doubt it. That's the same reason the image-creating AIs have so many errors with hands. They can identify hands and recreate hands, at least to some degree. But they truly don't know what hands are. You can ask almost any human to draw a hand and it will have 5 fingers. Of course the image quality will vary a lot (e.g. don't ask me to draw), but this feature that is so difficult for an AI, you will get right from humans in >99% of cases.
 
Last edited: