Why does "unlimited creative power" lead to "everything looks the same"?

boulimanus

Active Member
Jun 10, 2018
911
1,194
Has anyone else noticed how all AI CG games look the same?

At first I thought it could lead to way more variety, faster and cheaper. But now I can tell a game is AI CG from the first preview picture.
The style is kinda cute-ish, and some will love it while others will hate it. But that's not my topic. They all look the same.
And I think that's quite interesting to notice: it gives an idea of what AI can really do, including all of the limitations that come with it.

Some might explain it by saying it's the human use of the tool that's on display here. But then I'd say maybe we should rename the tech so it's not called AI.
In any case, I find it all disappointing so far.
What are everyone's thoughts?
 
  • Like
Reactions: PsychicStress

Goeffel

Member
Sep 10, 2022
406
259
But then I'd say maybe we should rename the tech not to be called AI.
Should have been so anyways, from the start.
AI "training"?
The ones being trained are you and you and you - willingly adopting this propaganda/advertisement terminology, while there is no "I" there whatsoever.

SC - Stochastically Conditioned. That's what it is.
 
  • Like
Reactions: BladesOfSteele

Zachy

Spark Of Life
Modder
Donor
Game Developer
May 6, 2017
721
1,821
The problem with AI-made CG games is they feel like they’re made by a machine, and you can tell right away.

Machines can’t create a masterpiece (not without copying humans), so their work always lacks personality or humanity. That’s why these games all look the same.

If developers use AI art as a base—like a stencil—it’s not really “AI-made” anymore. That’s just editing or tracing, and people have been doing that for ages. Whether they call it “AI” or not depends on who they’re trying to impress.
 
Sep 4, 2020
31
6
I've seen pretty good AI art that didn't feel the same as the others, and I didn't realize it was AI until I zoomed in and found weird lines. The reason it all feels the same is that most people use it to make anime girls and nothing else lmao
 
Last edited:

AlternateDreams

I'm tired, boss.
Game Developer
Apr 6, 2021
206
458
Because most of them use default art-styles, or don't give instructions on how the characters should be drawn, so it does that... very much AI artstyle

This. Also, it's not that there are no AI images that avoid looking like AI; it's just that when that happens, you don't realize it's AI in the first place (selection bias).
 

woody554

Well-Known Member
Jan 20, 2018
1,613
2,020
Because they're the least-squares fit of the relevant parameter space, i.e. the most average result mathematically possible. So if the space is the same, the result will be the same, apart from some artificial noise added so it doesn't look AS sterile as it really is.
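The "most average result" framing can be made concrete with a toy calculation: for any set of examples, the candidate that minimizes the total squared distance to all of them is exactly their component-wise mean. The three tiny "images" below are invented vectors of pixel values, purely for illustration.

```python
# Toy illustration: the least-squares "best fit" to a set of examples
# is their mean. Each "image" here is a made-up 3-pixel vector.

def sum_sq_dist(candidate, examples):
    """Total squared distance from a candidate image to every example."""
    return sum(
        sum((c - e) ** 2 for c, e in zip(candidate, ex))
        for ex in examples
    )

examples = [
    [0.1, 0.9, 0.4],
    [0.3, 0.7, 0.6],
    [0.2, 0.6, 0.5],
]

# Component-wise mean of the examples: the least-squares minimizer.
mean = [sum(px) / len(examples) for px in zip(*examples)]

# The mean beats every individual example on the least-squares score,
# even though it matches none of them exactly.
print(sum_sq_dist(mean, examples))
print(min(sum_sq_dist(ex, examples) for ex in examples))
```

Nothing here says anything about how real diffusion models sample, of course; it's just the textbook fact behind the "regression to the average" intuition in the post.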

Will future versions of this type of 'AI' 'get better'? No. They're just doing the same thing with MORE AVERAGE. They can't break this problem; we need something new and completely different from the current brute-force method of LLMs. Probably something that will be SMALL and do MUCH more with just a handful of training data and iterations. Like <10 examples and 1-3 iterations.

The human brain learns things at the drop of a hat. Single exposure, lesson learned. That's the goal. Where we're heading now is the OPPOSITE of that; we're getting further and further away from the right way of solving this problem.
 

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
11,191
16,844
The human brain learns things at the drop of a hat. Single exposure, lesson learned. That's the goal. Where we're heading now is the OPPOSITE of that; we're getting further and further away from the right way of solving this problem.
Well, strictly speaking the human brain needs more than a single exposure. But yes, a single example/item/iteration can sometimes be enough.

But I disagree that AI is heading in the opposite direction. It's much simpler than that: they are the opposite of us, period.

Take any average 4-year-old: he knows how to draw, how to sing, how to tell a story. It's mostly innate. The instant he understands how to hold a pencil, he will draw. The instant he becomes aware of his own voice, he'll start to sing. And the instant he starts to have some vocabulary, he'll start to tell stories.
What humans need to learn is the capability to reproduce something. Because children's drawings are, well, what they are... Their stories mean nothing, and when they start to sing, you want to die.

But, as I said, AI are the opposite of this. Reproducing something is their nature; what they need to learn is how to draw, how to sing and how to tell stories.
There are probably hundreds of different ways to tell a piece of software, and therefore an AI, to reproduce something. For a drawing, it can go from a basic "copy bit after bit" to something more complex involving selections and masks. Imagine Photoshop, where you use the auto-select tool to keep only the girl in the image, then copy/paste her into another image. Software doesn't need much to be able to auto-select all by itself, for example based on contrast or the gap between colors.
But software wouldn't be able to draw all by itself. It doesn't just need the right algorithms to be coded; it also systematically needs instructions. The algorithm can only tell it how to draw a geometric figure and how to fill it with color, plain or gradient, not where and when to do all this.

And, of course, on top of this there's the main ability that differentiates humans from machines, an ability so well known that it's precisely what's used to make that difference: figure and pattern recognition.
Once again it's something innate for humans, who need very little exposure to be able to always recognize a figure in the future, even when it's a bit distorted, blurred, or comes in a different form. Think about cars, for example: whether it's a Ford Model T, a Ferrari or a Cybertruck, you recognize it as a car.
But it's something that software, and so AI, has to learn. And it needs to learn it again for every single figure, and for every variation of those figures. Once an AI recognizes a Ford Model T as being a car, it still has to learn that a Ferrari and a Cybertruck are also cars.

So, as I said, AI aren't heading toward the opposite; they start fully at the opposite.
Strictly speaking, AI should be trained three times. First to gain figure/pattern recognition abilities. Then a second time to gain the capability to draw, sing, and tell stories (to limit it to those three). Then finally to reproduce through those two capabilities. And between each training, it should only keep in memory the knowledge related to what it was trained for.
But for this, we'd need to understand what happens inside the black boxes... And, while there has apparently been some progress, that's still far from being the case.