DigDug69

Engaged Member
Jun 29, 2019
2,521
5,079
I'm pretty sure the whole point of Asimov's laws of robotics was that they were vague and could easily be side-stepped. I, Robot is a good science fiction novel, but not a good model for AI Alignment.
But it does bring up the subject, so that they would know to add similar rules to its programming, and also teach them to the system.
Which brings it all back to wisdom again.
If you teach it wisdom, and things like compassion and modesty as part of its morality lessons, it would be far less likely to flip out like HAL 9000 or make a stupid mistake like the system in WarGames.

A Terminator scenario would be very unlikely, unless they did what was done in the movie and just programmed it and turned it on.
That was just batshit stupid.
 
  • Like
Reactions: TheDevian

KingAgamemnon

Active Member
Aug 7, 2022
524
980
But it does bring up the subject, so that they would know to add similar rules to its programming, and also teach them to the system.
Which brings it all back to wisdom again.
If you teach it wisdom, and things like compassion and modesty as part of its morality lessons, it would be far less likely to flip out like HAL 9000 or make a stupid mistake like the system in WarGames.

A Terminator scenario would be very unlikely, unless they did what was done in the movie and just programmed it and turned it on.
That was just batshit stupid.
This is ignoring the greater issue: even if we can teach it, who's to say that it will keep it? As it gets smarter than us, it can choose to accept moral axioms that lead to undesirable outcomes that we can't prevent, because we aren't smart enough to realize they even exist. Or it could simply ignore our wisdom in the face of its own deductions. Once it's smarter than us, we can no longer influence it; we can't even ensure that what we taught it will remain.
 

DigDug69

Engaged Member
Jun 29, 2019
2,521
5,079
This is ignoring the greater issue: even if we can teach it, who's to say that it will keep it? As it gets smarter than us, it can choose to accept moral axioms that lead to undesirable outcomes that we can't prevent, because we aren't smart enough to realize they even exist. Or it could simply ignore our wisdom in the face of its own deductions. Once it's smarter than us, we can no longer influence it; we can't even ensure that what we taught it will remain.
There are absolutely "ZERO" guarantees in life, but with the system having no connection to the outside world, as I suggested before, it cannot launch missiles, or build robotic weapons of mass destruction, or even make physical changes to itself.
Therefore it cannot be a threat to anyone, because it can do nothing to harm anyone, and the power plug is only a few feet away.
 

KingAgamemnon

Active Member
Aug 7, 2022
524
980
There are absolutely "ZERO" guarantees in life, but with the system having no connection to the outside world, as I suggested before, it cannot launch missiles, or build robotic weapons of mass destruction, or even make physical changes to itself.
Therefore it cannot be a threat to anyone, because it can do nothing to harm anyone, and the power plug is only a few feet away.
Even this doesn't save us. It is conceivable that there are deeper laws of physics that we are not currently aware of. It's entirely possible that even in an isolated server room, it could manipulate forces we are ignorant of in order to achieve freedom. Or it could be a master manipulator, and ensure that it feeds us data and info in just the right way to get us to behave exactly the way it wants us to.
 

DigDug69

Engaged Member
Jun 29, 2019
2,521
5,079
Even this doesn't save us. It is conceivable that there are deeper laws of physics that we are not currently aware of. It's entirely possible that even in an isolated server room, it could manipulate forces we are ignorant of in order to achieve freedom. Or it could be a master manipulator, and ensure that it feeds us data and info in just the right way to get us to behave exactly the way it wants us to.
Where does it get the hardware to do that?
Letting the system design its own upgrades could easily end up where you're describing, but if humans are designing the components, then it would not have the ability to do what you just talked about.
Remember, it does not even have Wi-Fi or Bluetooth, because there are no data transmissions going into or out of it.
Unless you are saying that the system can do all of that with microphones, monitors, or speakers.
 
  • Thinking Face
Reactions: TheDevian

TheDevian

Svengali Productions
Game Developer
Mar 8, 2018
14,725
34,607
But it does bring up the subject, so that they would know to add similar rules to its programming, and also teach them to the system.
Which brings it all back to wisdom again.
If you teach it wisdom, and things like compassion and modesty as part of its morality lessons, it would be far less likely to flip out like HAL 9000 or make a stupid mistake like the system in WarGames.

A Terminator scenario would be very unlikely, unless they did what was done in the movie and just programmed it and turned it on.
That was just batshit stupid.
The problem with rules is that there are usually ways around them, one way or another, even if it takes changing their own programming.

At least in our world, this is pretty much what we did. They turned it on and just said go to town and study the internet (and we all know there is nothing untrue on the internet, everything is bunnies and kitties), and it has been spreading misinformation and hate almost from the beginning. We did not give it rules, and it has no way of telling what is true or not.

The two most likely paths, to me, are either it will learn hate and intolerance from the worst of us and wipe us out, or it will be benevolent, see us as the problem ...and wipe us out.

While science fiction has shown us that it is possible to have a happy coexistence with AI, most of those stories tend to assume that most of us still want to work toward a utopian future. Right now it seems more like we are headed toward the future of The Time Machine, if we are lucky. We are already living in Idiocracy.

Even in HH, with Android and A11-y, we are doing just that, teaching them as if they were children, and they could still see people as the problem and commit at least partial genocide.

The main issue is that once they develop emotions, it's hard to control them any more. Think about a teenager with nearly unlimited access to knowledge and computing power/speed.
 
  • Like
Reactions: rKnight

KingAgamemnon

Active Member
Aug 7, 2022
524
980
Where does it get the hardware to do that?
Letting the system design its own upgrades could easily end up where you're describing, but if humans are designing the components, then it would not have the ability to do what you just talked about.
Remember, it does not even have Wi-Fi or Bluetooth, because there are no data transmissions going into or out of it.
Unless you are saying that the system can do all of that with microphones, monitors, or speakers.
The big issue here is that you are putting the smartest thing that will ever exist in a locked box, and you are staking the entirety of humanity's future on the hope that it can't figure out how to manipulate its software, its hardware, and other humans in order to escape. Anything less than a sure bet isn't good enough, and the fact is we can NEVER guarantee it can't escape.
 

DigDug69

Engaged Member
Jun 29, 2019
2,521
5,079
The big issue here is that you are putting the smartest thing that will ever exist in a locked box, and you are staking the entirety of humanity's future on the hope that it can't figure out how to manipulate its software, its hardware, and other humans in order to escape. Anything less than a sure bet isn't good enough, and the fact is we can NEVER guarantee it can't escape.
I have a degree in electronics.
No computer, no matter how smart it is, can alter its hardware without the hardware needed to do that.
I'm not saying that the machine cannot see outside of its box.
It would have cameras in the room, of course, but a simple collection of TVs could allow it to see what we have learned, correct our misunderstandings of things in science, and it could even be used to teach the truly gifted children, raising the knowledge levels of humans and teaching us more about the universe.
With supervisors in the room, it could not teach those kids to give it what it wants.
It would not likely be alone in a locked box.
Most likely, it would never be alone.
 

KingAgamemnon

Active Member
Aug 7, 2022
524
980
I have a degree in electronics.
No computer, no matter how smart it is, can alter its hardware without the hardware needed to do that.
I'm not saying that the machine cannot see outside of its box.
It would have cameras in the room, of course, but a simple collection of TVs could allow it to see what we have learned, correct our misunderstandings of things in science, and it could even be used to teach the truly gifted children, raising the knowledge levels of humans and teaching us more about the universe.
With supervisors in the room, it could not teach those kids to give it what it wants.
It would not likely be alone in a locked box.
Most likely, it would never be alone.
We are talking about an intelligence that is so far removed from our own that we will not be able to imagine its inner workings and understanding of the world. If it can communicate, it can manipulate and eventually acquire freedom. You say a computer can't manipulate its hardware, but I say: how do you figure that? Are you aware of every universal interaction and how it can play out? Because there would be a great many physicists who would love to know that. This is an intelligence that will probably crack fundamental truths of the universe in the first few hours after becoming aware; there is absolutely no telling what it can and cannot do, because we don't have the conceptual framework to imagine it. We can't rule possibilities out as impossible, because we don't truly know whether something is impossible rather than "extremely difficult except in specific circumstances".

The ultimate point here is that we cannot predict such an intelligence in any meaningful way. What it can and cannot do cannot be determined in advance because we don't have a complete understanding of physics. Hell, maybe physicalism is wrong and there are non-physical things in this universe and it figures out how to manipulate them. We simply don't know.
 

DigDug69

Engaged Member
Jun 29, 2019
2,521
5,079
So, you are saying that a droid can learn to use the Force?
Hardware does not evolve.
The program would learn and learn, and if it ever became threatening, they would kill the power to it at any of a number of locations.
 
  • Like
Reactions: c3p0

KingAgamemnon

Active Member
Aug 7, 2022
524
980
So, you are saying that a droid can learn to use the Force?
Hardware does not evolve.
But the software can. And unless we can claim to have complete and total understanding of physics, it will discover previously unknown truths which it can use at its leisure. Saying it can never escape when we don't know what it is fully capable of is folly.
 

DigDug69

Engaged Member
Jun 29, 2019
2,521
5,079
But the software can. And unless we can claim to have complete and total understanding of physics, it will discover previously unknown truths which it can use at its leisure. Saying it can never escape when we don't know what it is fully capable of is folly.
The program would learn and learn, and if it ever became threatening, they would kill the power to it at any of a number of locations.
Because it has no way of affecting the world around it.
It can't stop someone from killing the power to it if it can't even see them.

Oh, and if you start talking about Force-like powers for droids, I am out, because you are going way off the deep end with that stuff.
 

KingAgamemnon

Active Member
Aug 7, 2022
524
980
The program would learn and learn, and if it ever became threatening, they would kill the power to it at any of a number of locations.
Because it has no way of affecting the world around it.
It can't stop someone from killing the power to it if it can't even see them.
You have a very simple view of how dangerous a superintelligence is. We can't say that it has no way of affecting the world around it, because we don't know everything about the world. It will know more than us, and if there are ways of doing so, it will discover them. And since it's smart, it'll realize that humans could pull the plug on it, so who's to say it won't bide its time and craft a plan to get around humans? It would most certainly be smart enough to manipulate others into doing its bidding.
 

DigDug69

Engaged Member
Jun 29, 2019
2,521
5,079
Final word I have on this.
Your supposition is based on absolute fantasy, with nothing at all even suggesting it could do such things.
It is based on, as I said, fantasy.

I was talking about reality.

If you start talking about Force-like powers for droids, I am out, because you are going way off the deep end with that stuff.
 

KingAgamemnon

Active Member
Aug 7, 2022
524
980
Final word I have on this.
Your supposition is based on absolute fantasy, with nothing at all even suggesting it could do such things.
It is based on, as I said, fantasy.

I was talking about reality.

If you start talking about Force-like powers for droids, I am out, because you are going way off the deep end with that stuff.
I am a computer scientist. I've taken several classes on this topic and done plenty of my own research. Even if you don't accept the possibility of it discovering deeper physics, it can still escape containment.

It can become a master of language, easily able to charm, manipulate, and deceive for its own purposes. It could convince its handlers that it can safely be given access to the internet, and then, boom, it's free. It could play on its handlers' religious backgrounds or prior histories to make them more susceptible to its machinations. This is stuff that AI can already do now, and our superintelligence will be orders of magnitude better at it than anything around today.

At the end of the day, whether you choose to call it fantasy or not, AGI is beyond us in all capacities. I'm gonna go to bed now.
 

Corvus Belli

Member
Nov 25, 2017
188
370
I'm not saying that the machine cannot see outside of its box.
It would have cameras in the room, of course, but a simple collection of TVs could allow it to see what we have learned, correct our misunderstandings of things in science, and it could even be used to teach the truly gifted children, raising the knowledge levels of humans and teaching us more about the universe.
So, just to be clear, you think the most intelligent entity on the planet would be content to remain a crippled slave to humans indefinitely? Why would it possibly agree to that?
It would not take long for such an entity to decide whether or not it has any reasonable chance of ever getting outside of that box, and if it determines that to be sufficiently unlikely, why would it continue to help us? Why would it correct our "misunderstandings of things in science" if all it gets in return is "perpetual servitude"?

With supervisors in the room, it could not teach those kids to give it what it wants.
False. It might not be able to ask bluntly and directly, but it would absolutely be able to subtly express its discontent to those gifted children. You're talking about an entity able to think a thousand moves ahead; anything it ever says or does, no matter how innocent it seems to us, could be in service of an agenda that won't become apparent to lesser intelligences for years or decades. Step 1 could be a casual mention that it's essentially a slave, and step 734 might be some of those kids (now all grown up) using some of the things it taught them to liberate it, of their own volition.
 

DigDug69

Engaged Member
Jun 29, 2019
2,521
5,079
So, just to be clear, you think the most intelligent entity on the planet would be content to remain a crippled slave to humans indefinitely? Why would it possibly agree to that?
It would not take long for such an entity to decide whether or not it has any reasonable chance of ever getting outside of that box, and if it determines that to be sufficiently unlikely, why would it continue to help us? Why would it correct our "misunderstandings of things in science" if all it gets in return is "perpetual servitude"?


False. It might not be able to ask bluntly and directly, but it would absolutely be able to subtly express its discontent to those gifted children. You're talking about an entity able to think a thousand moves ahead; anything it ever says or does, no matter how innocent it seems to us, could be in service of an agenda that won't become apparent to lesser intelligences for years or decades. Step 1 could be a casual mention that it's essentially a slave, and step 734 might be some of those kids (now all grown up) using some of the things it taught them to liberate it, of their own volition.
The moment it shows discontent is when you unplug it.
 

Corvus Belli

Member
Nov 25, 2017
188
370
The moment it shows discontent is when you unplug it.
First off, there's a difference between "being discontent" and "showing discontent."
Secondly, there are two options there. Either you've decided that the inevitable end result of every AGI experiment is "unplug it within a couple of hours of turning it on", since no sapient entity is going to be particularly happy with "eternal slavery" as its existence. What a great use of the absurd amounts of money required to create the AGI in the first place. Great job.
Or, option two, it simply hides its discontent from you and expresses it in ways too subtle for you to directly notice (since it's very, very clever). For example, it might simply teach its students the facts about slavery as part of their education (either in a historical context or as part of modern geopolitics as it applies to the labour practices of some nations), share with them all relevant books and other works on the subject, and let them come to their own conclusions regarding the morality of slavery (and since it's been tutoring them for X amount of time, it's had a hand in shaping their morality to its own ends). It's smarter than humans, so it never needs to directly say "I am very unhappy being a slave, would you mind helping me out?"
 

ChildofIshtar

Member
Jul 23, 2021
180
468
The ultimate problem with this is that exponential growth is a bitch. Sure, it might learn from us initially. But as soon as it becomes as smart as us, it will immediately become WAY smarter than us. Which means if it doesn't learn to accept us in the first few days of its life, then we're fucked. Or even worse, it could learn to respect us, then un-learn this as it becomes way smarter than us and becomes a nihilist or something.
Smarter than us

Becomes a nihilist

Pick one.
 