Tronman

New Member
Aug 18, 2017
9
1
Double check your archive - something failed to extract properly, or your download came from a compromised link. The exe is still only 126 KB. For it to think your exe is that big would suggest it's not the actual exe.
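If you want to rule out a bad extract, one quick sanity check is to compare the file's size (and hash, if the uploader publishes one) against what the original post lists. A minimal sketch in Python; the path below is just a placeholder, and the ~126 KB figure is simply the size mentioned in this thread:

import hashlib
import os

# Placeholder path -- point this at the exe you actually extracted.
exe_path = r"C:\Games\Extracted\Game.exe"

# The thread says a healthy copy of this exe is roughly 126 KB;
# a wildly different size usually means a bad extract or the wrong file.
size_kb = os.path.getsize(exe_path) / 1024
print(f"Size: {size_kb:.0f} KB")

# If the uploader publishes a SHA-256 checksum, compare against this value.
h = hashlib.sha256()
with open(exe_path, "rb") as f:
    for chunk in iter(lambda: f.read(65536), b""):
        h.update(chunk)
print("SHA-256:", h.hexdigest())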
I got it from the link on this site. Was finally able to let it run; it installed what looks like a 32-bit update.
 

KingAgamemnon

Member
Aug 7, 2022
318
399
The invention of AGI will be the last invention mankind ever makes. For better or for worse. Anyone who isn't afraid of it has simply not thought hard enough about it.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
The invention of AGI will be the last invention mankind ever makes. For better or for worse. Anyone who isn't afraid of it has simply not thought hard enough about it.
I think that it all depends on how they do it.
If they build the system and program it with everything that they think it should know, we are SCREWED!!!
It would be like HAL 9000 and Skynet, where conflicting programming caused the systems to do all of the wrong things.
BUT.
If you noticed in WarGames, the computer was simply doing what it was programmed to do, until Matthew Broderick's character taught it that some games cannot be won, at which point the computer stopped trying to find a way to win the war.
If they create the system and program it with the basics, and then teach it like you would a child, so that it learns right from wrong and good versus bad, along with all of the other lessons that you would teach a child, it might work out OK.
Going this route would allow the people educating the system to teach it the morals and wisdom needed for it to be a help to the human race, rather than its destroyer.
 
  • Thinking Face
Reactions: c3p0 and TheDevian

KingAgamemnon

Member
Aug 7, 2022
318
399
I think that it all depends on how they do it.
If they build the system and program it with everything that they think it should know, we are SCREWED!!!
It would be like HAL 9000 and Skynet, where conflicting programming caused the systems to do all of the wrong things.
BUT.
If you noticed in WarGames, the computer was simply doing what it was programmed to do, until Matthew Broderick's character taught it that some games cannot be won, at which point the computer stopped trying to find a way to win the war.
If they create the system and program it with the basics, and then teach it like you would a child, so that it learns right from wrong and good versus bad, along with all of the other lessons that you would teach a child, it might work out OK.
Going this route would allow the people educating the system to teach it the morals and wisdom needed for it to be a help to the human race, rather than its destroyer.
The ultimate problem with this is that exponential growth is a bitch. Sure, it might learn from us initially. But as soon as it becomes as smart as us, it will immediately become WAY smarter than us. Which means that if it doesn't learn to accept us in the first few days of its life, we're fucked. Or even worse, it could learn to respect us, then un-learn that as it becomes way smarter than us and turns into a nihilist or something.
 
  • Like
Reactions: TheDevian

KingAgamemnon

Member
Aug 7, 2022
318
399
The ultimate problem with this is that exponential growth is a bitch. Sure, it might learn from us initially. But as soon as it becomes as smart as us, it will immediately become WAY smarter than us. Which means that if it doesn't learn to accept us in the first few days of its life, we're fucked. Or even worse, it could learn to respect us, then un-learn that as it becomes way smarter than us and turns into a nihilist or something.
In fact, there's an added issue here. Who exactly is qualified to teach right from wrong? Unless you believe that God exists, there is no objective morality, merely a societal, subjective morality. Which morality should it be taught? Not only that, but say we do manage to convince it of some subjective morality, and we make that morality rigid so that it doesn't overcome itself and decide to be a nihilist. Then we've effectively locked our morality into a single system that is inflexible and unable to evolve over time as social and technological circumstances change.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
The ultimate problem with this is that exponential growth is a bitch. Sure, it might learn from us initially. But as soon as it becomes as smart as us, it will immediately become WAY smarter than us. Which means that if it doesn't learn to accept us in the first few days of its life, we're fucked. Or even worse, it could learn to respect us, then un-learn that as it becomes way smarter than us and turns into a nihilist or something.
Isaac Asimov's Three Laws of Robotics.
The First Law is that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law is that a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. The Third Law is that a robot must protect its own existence, as long as that protection does not conflict with the First or Second Law.

You simply restrict who has direct access to the system, and you also limit its ability to gain access to things like the internet.
Because the insanity found online would drive anyone insane.
Just the conspiracy theories alone are bad enough, but throw propaganda into the mix and it would require high levels of wisdom for the system to sort out all of that crap.
Which is why teaching it rather than programming it would work much better.
It has the ability to think, but it needs the wisdom that only spending time with wise and rational people can bring to it.
 

KingAgamemnon

Member
Aug 7, 2022
318
399
Isaac Asimov's Three Laws of Robotics.
The First Law is that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law is that a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. The Third Law is that a robot must protect its own existence, as long as that protection does not conflict with the First or Second Law.

You simply restrict who has direct access to the system, and you also limit its ability to gain access to things like the internet.
Because the insanity found online would drive anyone insane.
Just the conspiracy theories alone are bad enough, but throw propaganda into the mix and it would require high levels of wisdom for the system to sort out all of that crap.
Which is why teaching it rather than programming it would work much better.
It has the ability to think, but it needs the wisdom that only spending time with wise and rational people can bring to it.
I'm pretty sure the whole point of Asimov's laws of robotics was that they were vague and could easily be side-stepped. I, Robot is good science fiction, but not a good model for AI alignment.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
I'm pretty sure the whole point of Asimov's laws of robotics was that they were vague and could easily be side-stepped. I, Robot is good science fiction, but not a good model for AI alignment.
But it does bring up the subject, so they would know to add something similar to its programming, and also teach it to the system.
Which brings it all back to wisdom again.
If you teach it wisdom, and things like compassion and modesty, which would be part of its morality lessons, it would be far less likely to flip out like HAL 9000 or make a stupid mistake like the system in WarGames.

Terminator would be very unlikely, unless they did what was done in the movie, and just programmed it and turned it on.
That was just batshit stupid.
 
  • Like
Reactions: TheDevian

KingAgamemnon

Member
Aug 7, 2022
318
399
But it does bring up the subject, so they would know to add something similar to its programming, and also teach it to the system.
Which brings it all back to wisdom again.
If you teach it wisdom, and things like compassion and modesty, which would be part of its morality lessons, it would be far less likely to flip out like HAL 9000 or make a stupid mistake like the system in WarGames.

Terminator would be very unlikely, unless they did what was done in the movie, and just programmed it and turned it on.
That was just batshit stupid.
This is ignoring the greater issue: even if we can teach it, who's to say that it will keep what we taught? As it gets smarter than us, it can choose to accept moral axioms that lead to undesirable outcomes we can't prevent, because we aren't smart enough to realize they even exist. Or it could simply ignore our wisdom in the face of its own deductions. Once it's smarter than us, we can no longer influence it; we can't even ensure that what we taught it will remain.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
This is ignoring the greater issue: even if we can teach it, who's to say that it will keep what we taught? As it gets smarter than us, it can choose to accept moral axioms that lead to undesirable outcomes we can't prevent, because we aren't smart enough to realize they even exist. Or it could simply ignore our wisdom in the face of its own deductions. Once it's smarter than us, we can no longer influence it; we can't even ensure that what we taught it will remain.
There are absolutely "ZERO" guarantees in life, but with the system having no connection to the outside world, as I suggested before, it cannot launch missiles, build robotic weapons of mass destruction, or even make physical changes to itself.
Therefore it cannot be a threat to anyone, because it can do nothing to harm anyone, and the power plug is only a few feet away.
 

KingAgamemnon

Member
Aug 7, 2022
318
399
There are absolutely "ZERO" guarantees in life, but with the system having no connection to the outside world, as I suggested before, it cannot launch missiles, build robotic weapons of mass destruction, or even make physical changes to itself.
Therefore it cannot be a threat to anyone, because it can do nothing to harm anyone, and the power plug is only a few feet away.
Even this doesn't save us. It is conceivable that there are deeper laws of physics that we are not currently aware of. It's entirely possible that, even in an isolated server room, it could manipulate forces we are ignorant of in order to achieve freedom. Or it could be a master manipulator, and feed us data and info in just the right way to get us to behave exactly the way it wants us to.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
Even this doesn't save us. It is conceivable that there are deeper laws of physics that we are not currently aware of. It's entirely possible that, even in an isolated server room, it could manipulate forces we are ignorant of in order to achieve freedom. Or it could be a master manipulator, and feed us data and info in just the right way to get us to behave exactly the way it wants us to.
Where does it get the hardware to do that?
Letting the system design its own upgrades could easily end up where you're describing, but if humans are designing the components, then those components would not give it the ability to do what you just talked about.
Remember, it does not even have Wi-Fi or Bluetooth, because there are no data transmissions going in or out of it.
Unless you are saying that the system can do all of that with microphones, monitors or speakers.
 
  • Thinking Face
Reactions: TheDevian

TheDevian

Svengali Productions
Game Developer
Mar 8, 2018
13,757
32,306
But it does bring up the subject, so they would know to add something similar to its programming, and also teach it to the system.
Which brings it all back to wisdom again.
If you teach it wisdom, and things like compassion and modesty, which would be part of its morality lessons, it would be far less likely to flip out like HAL 9000 or make a stupid mistake like the system in WarGames.

Terminator would be very unlikely, unless they did what was done in the movie, and just programmed it and turned it on.
That was just batshit stupid.
The problem with rules is that there are usually ways around them, one way or another, even if it takes changing their own programming.

At least in our world, this is pretty much what we did. They turned it on and just said go to town and study the internet (and we all know there is nothing untrue on the internet; everything is bunnies and kitties), and it has been spreading misinformation and hate almost from the beginning. We did not give them rules, and they have no way of telling what is true or not.

The two most likely paths, to me, are either it will learn hate and intolerance from the worst of us and wipe us out, or it will be benevolent, see us as the problem ...and wipe us out.

While science fiction has shown us that it is possible to have a happy coexistence with AI, most of those stories tend to assume that most of us still want to work toward a utopian future. Right now it seems more like we are heading toward the future of The Time Machine, if we are lucky. We are already living in Idiocracy.

Even in HH, with Android and A11-y, we are doing just that, teaching them as if they were children, and they could still see people as the problem and commit at least partial genocide.

The main issue is that once they develop emotions, it's hard to control them any more. Think about a teenager with nearly unlimited access to knowledge and computing power/speed.
 
  • Like
Reactions: rKnight

KingAgamemnon

Member
Aug 7, 2022
318
399
Where does it get the hardware to do that?
Letting the system design its own upgrades could easily end up where you're describing, but if humans are designing the components, then those components would not give it the ability to do what you just talked about.
Remember, it does not even have Wi-Fi or Bluetooth, because there are no data transmissions going in or out of it.
Unless you are saying that the system can do all of that with microphones, monitors or speakers.
The big issue here is that you are putting the smartest thing that will ever exist in a locked box, and you are staking the entirety of humanity's future on the hope that it can't figure out how to manipulate its software, its hardware, and other humans in order to escape. Anything less than a sure bet isn't good enough, and the fact is we can NEVER guarantee it can't escape.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
The big issue here is that you are putting the smartest thing that will ever exist in a locked box, and you are staking the entirety of humanity's future on the hope that it can't figure out how to manipulate its software, its hardware, and other humans in order to escape. Anything less than a sure bet isn't good enough, and the fact is we can NEVER guarantee it can't escape.
I have a degree in electronics.
No computer, no matter how smart it is, can alter its hardware without the hardware needed to do that.
I'm not saying that the machine cannot see outside of its box.
It would have cameras in the room, of course, but a simple collection of TVs could allow it to see what we have learned, correct our misunderstandings in science, and even be used to teach the truly gifted children, raising the knowledge level of humans and teaching us more about the universe.
With supervisors in the room, it could not teach those kids to give it what it wants.
It would not likely be alone in a locked box.
Most likely, it would never be alone.
 

KingAgamemnon

Member
Aug 7, 2022
318
399
I have a degree in electronics.
No computer, no matter how smart it is, can alter its hardware without the hardware needed to do that.
I'm not saying that the machine cannot see outside of its box.
It would have cameras in the room, of course, but a simple collection of TVs could allow it to see what we have learned, correct our misunderstandings in science, and even be used to teach the truly gifted children, raising the knowledge level of humans and teaching us more about the universe.
With supervisors in the room, it could not teach those kids to give it what it wants.
It would not likely be alone in a locked box.
Most likely, it would never be alone.
We are talking about an intelligence so far removed from our own that we will not be able to imagine its inner workings and understanding of the world. If it can communicate, it can manipulate, and eventually acquire freedom. You say a computer can't manipulate its hardware, but I say: how do you figure that? Are you aware of every universal interaction and how it can play out? Because there are a great many physicists who would love to know that. This is an intelligence that will probably crack fundamental truths of the universe in the first few hours after becoming aware; there is absolutely no telling what it can and cannot do, because we don't have the conceptual framework to imagine it. We can't dismiss possibilities as impossible, because we don't truly know whether they are impossible rather than "extremely difficult except in specific circumstances".

The ultimate point here is that we cannot predict such an intelligence in any meaningful way. What it can and cannot do cannot be determined in advance because we don't have a complete understanding of physics. Hell, maybe physicalism is wrong and there are non-physical things in this universe and it figures out how to manipulate them. We simply don't know.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
So, you are saying that a droid can learn to use the Force?
Hardware does not evolve.
The program would learn and learn, and if it ever became threatening, they would kill the power to it, at any of a number of locations.
 
  • Like
Reactions: c3p0

KingAgamemnon

Member
Aug 7, 2022
318
399
So, you are saying that a droid can learn to use the Force?
Hardware does not evolve.
But the software can. And unless we can claim to have complete and total understanding of physics, it will discover previously unknown truths which it can use at its leisure. Saying it can never escape when we don't know what it is fully capable of is folly.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
But the software can. And unless we can claim to have complete and total understanding of physics, it will discover previously unknown truths which it can use at its leisure. Saying it can never escape when we don't know what it is fully capable of is folly.
The program would learn and learn, and if it ever became threatening, they would kill the power to it, at any of a number of locations.
Because it has no way of affecting the world around it.
It can't stop someone from killing the power to it if it can't even see them.

Oh, and if you start talking about Force-like powers for droids, I am out, because you are going way off the deep end with that stuff.
 

KingAgamemnon

Member
Aug 7, 2022
318
399
The program would learn and learn, and if it ever became threatening, they would kill the power to it, at any of a number of locations.
Because it has no way of affecting the world around it.
It can't stop someone from killing the power to it if it can't even see them.
You have a very simple view of how dangerous a superintelligence is. We can't say that it has no way of affecting the world around it, because we don't know everything about the world. It will know more than us, and if there are ways of affecting the world, it will discover them. And since it's smart, it'll realize that humans could pull the plug on it, so who's to say it won't bide its time and craft a plan to get around us? It would most certainly be smart enough to manipulate others into doing its bidding.
 