AI is just a catchall term many people like to use that doesn't actually apply to what they're using it for.
Yeah, nowadays there's supposedly AI everywhere, but it's not AI. Most of the time it's just advanced weighted choices; something like "should I show them this recommendation or not? Oh, they watched 15 videos in a row that share the same main tag, I definitely should."
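That kind of "weighted choice" fits in a few lines. Here's a toy sketch; the tag names and the threshold of 10 are invented for illustration, not how any real recommender is tuned:

```python
# Toy sketch of the "weighted choice" recommender described above.
# Tags, history shape, and threshold are all invented for illustration.
from collections import Counter

def should_recommend(watch_history, candidate_tag, threshold=10):
    """Show a video if the user recently watched many videos sharing
    its main tag. No thinking involved, just a weighted count."""
    tag_counts = Counter(video["main_tag"] for video in watch_history)
    return tag_counts[candidate_tag] >= threshold

history = [{"main_tag": "cats"}] * 15  # 15 cat videos in a row
print(should_recommend(history, "cats"))  # True: show it
print(should_recommend(history, "dogs"))  # False: no signal for dogs
```

The point being: there's no understanding anywhere in there, only counting and a cutoff.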
It's the same for the so-called AI that drives cars. They don't think; they recognize shapes and make a decision according to a weight table and/or a decision tree. They can do basic reasoning, like "this is a car, it's in front of me, it's the same car as 1 second ago, but it's now nearer, I need to decrease speed," but globally that's all.
If they were real AI, and therefore genuinely able to think, their development wouldn't go through the long trial and error we're actually witnessing. It would be enough to teach the machine the Highway Code.
We have NO true A.I., nor the capability to make one... yet.
There are some, but really they're almost just learning machines. That by itself is already marvelous, but they can't do much, because they still don't know how to apply the knowledge they've learned. Or, more precisely, they don't know how to apply that knowledge outside of what they're coded for.
Take facial recognition software, for example. It can be trained to go further than simple facial recognition and, say, learn to recognize signs of suffering. These are small signs, sometimes barely perceptible; they vary depending on the kind of suffering, and they're altered by the geometry of the face. But we have the knowledge to make "AI" that could learn to recognize them whoever the person is.
It would be a nice addition: with all the security cameras we have, it could surely save lives if each one could throw a big "WARNING, heart attack detected" or "WARNING, high risk of epileptic seizure" on the screen. But that's all those "AI" could do. If everyone in the monitoring center is busy, for example because there's a major crisis at the exact same time, the machine will just throw its warning into the void. Simply because that's how it's coded, and, not being a real AI, it can't do more than what it's coded for.
Whereas a real AI would have been able to recognize the lack of reaction, look at the context, and either call 911, use the nearest speaker to announce "Your attention, is there a medic around? The person in the blue shirt is having a heart attack and no one else is available at the moment," or whatever it feels is the best solution at that moment.
I was told long ago that if you have to use any kind of "if" statement to write a program, it will never be an A.I., because then you're taking the decision making out of its hands. Or something to that effect.
It's a little more complicated than that. Basically speaking, an AI is software running inside other software.
Ideally, humans write software that simulates how our brain physically works, and there we will need "if" and other conditional branching structures. After all, our organism already works like that. Our immune system works according to a decision tree that is updated throughout our life. In code terms, that would be expressed as: IF infection looks like this THEN produce those antibodies ELSE continue with the next entry in the immunity dictionary.
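That IF/THEN/ELSE loop over an "immunity dictionary" could be sketched like this. The signatures and antibody names are, of course, invented; it's just the shape of the decision tree:

```python
# Toy version of the immune "decision tree": walk the immunity
# dictionary entry by entry; on a match, produce the antibody,
# otherwise continue with the next entry. All names are invented.
immunity_dictionary = [
    ("spiky-protein-signature", "antibody-A"),
    ("round-protein-signature", "antibody-B"),
]

def immune_response(infection_signature):
    for known_signature, antibody in immunity_dictionary:
        if infection_signature == known_signature:  # IF infection looks like this
            return f"produce {antibody}"            # THEN produce those antibodies
    return "unknown infection"                      # ELSE: no entry matched

print(immune_response("round-protein-signature"))  # produce antibody-B
print(immune_response("mystery-signature"))        # unknown infection
```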
And it's the same for our brain. It's an oversimplification, but it's also a big decision tree. The reaction we have to an impulse depends on the route it takes through our neurons, with each neuron being a, possibly multi-state, "if".
Then, inside this simulation of a physical brain, a second piece of software would run: the AI itself. And there, yes, any conditional branching would be us imposing our will. But, as I said, when processed in our brain, this software massively relies on "if"...
It's a pure contradiction. To develop an effective independent AI, we need to do something that is the opposite of the behavior we're trying to simulate. That's why an AI has to be software inside other software. We need to separate the mechanical part, which relies on conditions, from the behavior, where any condition is to be avoided at all costs. Basically speaking, the AI receives the impulse and sends it to its host software; the interpretation and reaction then depend on where the impulse comes back to the AI.
To keep my car driving example, the impulse would be "I see something," and the host process would go through "is it a car?", "is it in front of us?", "is it the same one we saw 1 second ago?", "is it nearer now?". The impulse will come back at a place the AI has learned to recognize as meaning "hey, fuck, we're going too fast." Then what happens depends. If it's a depressive AI, it will perhaps accelerate to kill itself. If it's a sane AI, it should decide to reduce speed. Perhaps a parallel process has told it that the second lane is free, and the AI will decide to keep its speed and just move to that second lane. It can also happen that the AI recognizes a danger and keeps its speed on purpose, so the other car hits it and stops its wild descent, because its brakes have failed.
Contrast this with what actual self-driving machines do. They don't know that they're going too fast, they don't care why the other car is now nearer, and they possibly don't understand that there's a second lane. What they know is "if the value of X equals Y, the action is to reduce speed," and that's what they'll do. They don't think; they follow orders.
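That rule-following is literally just a condition-to-action table. A minimal sketch, with distances and speeds invented for illustration:

```python
# The non-thinking version: a hard-coded rule. The machine doesn't
# know *why* the car ahead is nearer; it just matches a condition
# to a fixed action. Thresholds are invented for illustration.
def decide(distance_to_car_m, own_speed_kmh):
    if distance_to_car_m < 30 and own_speed_kmh > 50:
        return "reduce speed"  # condition met -> fixed order
    return "keep speed"

print(decide(distance_to_car_m=20, own_speed_kmh=80))   # reduce speed
print(decide(distance_to_car_m=100, own_speed_kmh=80))  # keep speed
```

No context, no alternatives like changing lanes; anything outside the table simply doesn't exist for it.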
Note that here "host software" is to be taken in the broadest possible sense of "software." It can be a language interpreter, a simulator, or a decision tree algorithm. It can also be "not software at all": a pure physical circuit or a dedicated microchip. Anything able to run the AI's own software.
To add to this, there are already programs people claim to be "AI" that write music. It's fucking horrid, but they exist.
Yeah, they're just the learning machines I talked about above. They look at tons of successful songs, recognize what they have in common, then apply that knowledge. But as I said, they lack emotions, and therefore they're missing the most important part.
Take the opening of Beethoven's 5th symphony, for example. It's the best-known piece of classical music in the world. Even if you don't know what it is, almost everyone will go, "hey, I know that, I don't dislike that." But while a learning machine would recognize the scheme behind it, any attempt to reproduce its success would fail. From a machine's point of view, the success would be due to the fact that each motif is repeated twice while being a little more complex than the previous one.
But from our human point of view, it's the heroic feeling, followed by a melancholic one, carried by the music that makes its success. Where a machine would try to mimic the scheme, a human would try to mimic the feelings.