
We are getting a few too many AI games

DuniX

Well-Known Member
Dec 20, 2016
1,323
870
And those are just four of the many papers on this particular question.
Personally I think there's just no clear answer yet, because everything depends on decisions that will be made in the future. Right now, people are just playing with AIs. And when I say "people" I don't limit it to their users; I also mean those who develop them.
It's obviously serious work, and they take it really seriously, but the technology is so complex and, in regard to its long past (more than 60 years of research), it only became reliable enough yesterday. So even those who work on it are playing. They try a way to code them, then wait, look at the result, and try to change this or that. And they do the same for every aspect, including the methods used to train them.
For fuck's sake, there are researchers behind the training data; if the models were really getting worse, they would figure something out and do some housecleaning.
Just because they are stealing everything that isn't nailed down on the internet doesn't mean they are stupid.

Also, there is a great misunderstanding about what "Synthetic Data" is: if you render something in Daz, that is "Synthetic Data", and if they don't screw up the material shaders and lighting, that is already Physically Based Rendering, which is supposed to be fairly close to reality.
They ultimately need 3D Scene Generation for the Robots to understand their environment and to have a Sandbox to Simulate things in, so more Synthetic Data is inevitable; once they figure out 3D Scene Generation, they just render that and are done. For some reason people forget that Rendering is a thing.

And it's this last part that is crucial, because the same AI guesses whether it's a rose or not and validates whether it guessed right or not. So, if there's a particularity in each image (like a 90° ruler) before this step, it can perfectly well have made the wrong assertion regarding what a rose is, and reinforce that error more and more and more.
Which is also the problem with AIs being trained on AI-generated content. As I said previously, it just reinforces their bias. Except that, since it's other AIs that generated that content, it also spreads that bias between all AIs.
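A minimal, purely illustrative sketch of that reinforcement loop (every number and name here is invented, not from any real system): a trivial "model" is retrained each generation on a biased subset of its own output, and the bias compounds while the diversity of the output collapses.

```python
import random
import statistics

random.seed(42)

def train(samples):
    """Fit a trivial 'model': just the mean and standard deviation of its data."""
    return statistics.fmean(samples), statistics.stdev(samples)

def generate(model, n):
    """Sample n outputs from the model (a plain normal distribution)."""
    mean, stdev = model
    return [random.gauss(mean, stdev) for _ in range(n)]

# Generation 0: trained on "real" data centered at 0 with spread 1.0.
real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
model = train(real_data)

# Each later generation trains only on the previous model's output,
# but keeps just a biased subset (say, the larger half of the samples).
for gen in range(1, 6):
    outputs = sorted(generate(model, 1000))
    biased_subset = outputs[500:]    # selection bias sneaks in...
    model = train(biased_subset)     # ...and gets baked into the next model
    print(f"gen {gen}: mean={model[0]:.2f}, stdev={model[1]:.2f}")
```

Run it and the mean drifts further from the real data every generation while the standard deviation shrinks: the model ends up very confident about a world that looks nothing like its original training data.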
Why do you think we keep jamming more and more Data into them, and why the AI bros are so obsessed with how Large their dicks models are?
We have gone a bit beyond the AI understanding what a "Rose" is.
The "Concepts" the AI is learning right now are much more Abstract, including a rudimentary form of System 1 Reasoning.
Remember, System 1 Thinking is entirely instinctual and based on prior experience and patterns, exactly what jamming in a large amount of Data will get you.
 

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
11,671
17,944
Deep Thought? Oh, you mean this buddy.
View attachment 4756036
Yes, that marvel ;)


My bad, from my memory it was an electric motor. Although I think it was never said that they were, I've stored it as such in my memories. ;)
In this case, it's probable that they actually make the same sound, yes.


And we can discuss how smart they really are.
It's my lunch break so I won't search for a more complete news story, but take a look at .
Faced with a captcha, ChatGPT used a regular app to have a human solve it. And since it was amusing that, of all things, it was a captcha that had to be solved, the human said something along the lines of "ha ha ha, are you a bot?"
Now, here's the thing: the red team had access to a console that presented what can be called "ChatGPT's thoughts". And the console clearly showed that the AI was fully aware that it should not answer "yes". So ChatGPT lied, saying that it's visually impaired and really needed to access that site.

Now, is an AI that asks for human help when it can't solve a challenge, while being aware that it can lie about the context, and that actually lies, proof that ChatGPT is smart, or not?
I'll let you decide what your answer will be... And I decline all responsibility if, starting now, you have nightmares.


Do they know what Chinese restaurants really are? I doubt that.
They don't need to know. Remember what I said about the difference between neural networks and AIs. AIs are advanced enough to be naive, and it's the same here.
Globally speaking, an "old school" neural network would need to know what a Chinese restaurant is in order to filter the list of restaurants; an AI just needs to know what defines a Chinese restaurant. And this "what" can be anything. Perhaps the name of the restaurant, perhaps one particular meal on the menu, perhaps the name of the owner; we don't really know. And obviously, the AI doesn't understand why "this" makes the restaurant Chinese, just that it makes it so.
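As a toy illustration of that point (all restaurant names and labels here are invented for the example), here is a classifier that never knows what "Chinese" means; it only counts which words co-occur with which label, so any surface feature that happens to correlate with the label will do:

```python
from collections import Counter

# Tiny invented training set: restaurant names with labels.
train = [
    ("golden dragon palace", "chinese"),
    ("golden wok express", "chinese"),
    ("jade garden", "chinese"),
    ("burger barn", "other"),
    ("pasta house", "other"),
    ("taco stand", "other"),
]

# Count how often each word appears under each label.
counts = {"chinese": Counter(), "other": Counter()}
for name, label in train:
    counts[label].update(name.split())

def classify(name):
    """Pick the label whose training words best overlap the name."""
    scores = {label: sum(c[w] for w in name.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

# "golden" correlates with the chinese label in the toy data, so a
# steakhouse named "Golden Corral" gets labeled Chinese: the model keys
# on a surface feature, not on what actually makes a restaurant Chinese.
print(classify("golden corral steakhouse"))  # → chinese
```

The classifier works on the training examples, and yet it has learned "golden", not "Chinese", which is exactly the kind of accidental definition described above.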

Consider a 3yo child. He will be perfectly able to identify a TV or a car. But does he know what a TV and a car are? If you ask him, there's a chance that for him a TV is some magical device that shows you what you want to see. Because that's how he perceives it: his parents turn it on and, magic, precisely what they want to see at that moment appears on the screen.
He doesn't know that there's a schedule for all the content, and that you have to look at it and turn the TV on at the right time and on the right channel. He doesn't understand that all around the country there are millions of people watching the same content at the same instant. For him, with his perception of the world, a TV is something that adapts to each individual.
Still, not knowing all this doesn't mean that the child is dumb.

And globally it's the same for AIs. ChatGPT knew that it had to lie and not say that it's an AI. But did it know what an AI is, or why it had to lie? I'm pretty sure that the answer is "no".
I guess that among all the data used to train it, there's a lot that says that people are afraid of AI. So ChatGPT lied based on the double knowledge that humans are afraid of AI and that it's an AI; "I must not scare him or else he will not help me". But all this without necessarily understanding what an AI is, nor why they scare some humans.
Smart, but with a really limited comprehension of the world, therefore smart like a 3yo.


That is the same reason why the image-generating AIs have so many errors with hands. They can identify hands and recreate hands, at least to some degree. But they truly don't know what hands are.
Exactly. And since hands are generally the least visible part, while fingers are small and therefore present with less detail in the images used to train them, they struggle to understand how they look.
There's the same issue with the eyes, and more precisely with where someone is looking. Most of the images used to train them about humans are photographs, therefore of people looking at the photographer. Which leads AI-generated humans to mostly look straight at you, even when it doesn't match the head position.
They know what eyes look like, they know where eyes must be on the face, but since they don't know what eyes are used for, if you don't explicitly tell them where the gaze must go, they'll tend to make them look at the camera; without knowing what a camera is.


You can ask almost any human to draw a hand and it will have 5 fingers. Of course the image quality will vary a lot (e.g. don't ask me to draw), but this feature that is so difficult for an AI, you will get right from humans in > 99% of cases.
Once again, smart but like a 3yo; two legs, two arms, one head and one body, look mom, I drew you.
It's, still globally, the comprehension that an AI has of what it draws. But the difference is that an AI can draw like a master painter, so you'll not have just five lines and a circle, like the 3yo did.


Edit: fixed messed-up formatting.
 

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
11,671
17,944
For fuck's sake, there are researchers behind the training data; if the models were really getting worse, they would figure something out and do some housecleaning.
:ROFLMAO:


Also, there is a great misunderstanding about what "Synthetic Data" is: if you render something in Daz, that is "Synthetic Data", [...]
And you wrote this part, which is totally unrelated to the discussion, because?


We have gone a bit beyond the AI understanding what a "Rose" is.
Which then means that they are smarter than you, since they wouldn't have so obviously missed the point.

I mean, I explicitly said that the AI validates, or not, its own guess. Therefore it's obvious that I know they do more than "a rudimentary form of System 1 Reasoning", since what I wrote already describes more than that.
 

DuniX

Well-Known Member
Dec 20, 2016
1,323
870
And you wrote this part, which is totally unrelated to the discussion, because?
Because you think they are feeding the AI just any random stuff.
The amount of Data is Infinite because the amount of Synthetic Data is Infinite.
They can curate the Data if they want; they are not that desperate.
 

c3p0

Conversation Conqueror
Respected User
Nov 20, 2017
6,342
14,970
The amount of Data is Infinite because the amount of Synthetic Data is Infinite.
The amount of data is finite. It may be very large, but it is finite. The amount of energy we have is finite and the amount of time we have is finite, therefore the amount of data is finite too.
 

DuniX

Well-Known Member
Dec 20, 2016
1,323
870
Which then means that they are smarter than you, since they wouldn't have so obviously missed the point.

I mean, I explicitly said that the AI validates, or not, its own guess.
The point is that we have gone beyond there being an "Error". It has already learned a lot of things "properly", just like a human.
Where there can still be "Error" is in more High-Level, Abstract, and Subtle Concepts.