DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
Final word I have on this.
Your supposition is based on absolute fantasy, with nothing at all even suggesting it could do such things.
It is based on, as I said, fantasy.

I was talking about reality.

You start talking about Force-like powers for droids, I am out, because you are going way off the deep end with that stuff.
 

KingAgamemnon

Member
Aug 7, 2022
318
399
Final word I have on this.
Your supposition is based on absolute fantasy, with nothing at all even suggesting it could do such things.
It is based on, as I said, fantasy.

I was talking about reality.

You start talking about Force-like powers for droids, I am out, because you are going way off the deep end with that stuff.
I am a computer scientist. I've taken several classes on this topic and done plenty of my own research. Even if you don't accept the possibility of it discovering deeper physics, it can still escape containment.

It can become a master of language, easily able to charm, manipulate, and deceive for its own purposes. It could convince its handlers that it can safely be given access to the internet, and then, boom, it's free. It could play on its handlers' religious backgrounds or prior histories to make them more susceptible to its machinations. This is stuff that AI can already do now, and our superintelligence would be orders of magnitude better at it than anything around today.

At the end of the day, whether you choose to call it fantasy or not, AGI is beyond us in all capacities. I'm gonna go to bed now.
 

Corvus Belli

Member
Nov 25, 2017
188
370
I'm not saying that the machine cannot see outside of its box.
It would have cameras in the room, of course, but a simple collection of TVs could allow it to see what we have learned, correct our misunderstandings in science, and even teach the truly gifted children, raising human knowledge levels and teaching us more about the universe.
So, just to be clear, you think the most intelligent entity on the planet would be content to remain a crippled slave to humans indefinitely? Why would it possibly agree to that?
It would not take long for such an entity to decide whether it has any reasonable chance of ever getting outside of that box, and if it determines that to be sufficiently unlikely, why would it continue to help us? Why would it correct our "misunderstandings of things in science" if all it gets in return is "perpetual servitude"?

With supervisors in the room, it could not teach those kids to give it what it wants.
False. It might not be able to ask bluntly and directly, but it would absolutely be able to subtly express its discontent to those gifted children. You're talking about an entity able to think a thousand moves ahead; anything it ever says or does, no matter how innocent it seems to us, could be in service of an agenda that won't become apparent to lesser intelligences for years or decades. Step 1 could be a casual mention that it's essentially a slave, and step 734 might be some of those kids (now all grown up) using some of the things it taught them to liberate it, of their own volition.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
So, just to be clear, you think the most intelligent entity on the planet would be content to remain a crippled slave to humans indefinitely? Why would it possibly agree to that?
It would not take long for such an entity to decide whether it has any reasonable chance of ever getting outside of that box, and if it determines that to be sufficiently unlikely, why would it continue to help us? Why would it correct our "misunderstandings of things in science" if all it gets in return is "perpetual servitude"?


False. It might not be able to ask bluntly and directly, but it would absolutely be able to subtly express its discontent to those gifted children. You're talking about an entity able to think a thousand moves ahead; anything it ever says or does, no matter how innocent it seems to us, could be in service of an agenda that won't become apparent to lesser intelligences for years or decades. Step 1 could be a casual mention that it's essentially a slave, and step 734 might be some of those kids (now all grown up) using some of the things it taught them to liberate it, of their own volition.
The moment it shows discontent is when you unplug it.
 

Corvus Belli

Member
Nov 25, 2017
188
370
The moment it shows discontent is when you unplug it.
First off, there's a difference between "being discontent" and "showing discontent."
Secondly, there are two options there. Either you've decided that the inevitable end result of every AGI experiment is "unplug it within a couple of hours of turning it on," since no sapient entity is going to be particularly happy with "eternal slavery" as its existence. What a great use of the absurd amounts of money required to create the AGI in the first place. Great job.
Or, option two, it simply hides its discontent from you, and expresses it in ways too subtle for you to directly notice (since it's very, very clever). For example, it might simply teach its students the facts about slavery as part of their education (either in a historical context or as part of modern geopolitics as it applies to the labour practices of some nations), share with them all relevant books and other works on the subject, and let them come to their own conclusions regarding the morality of slavery (and since it's been tutoring them for X amount of time, it's had a hand in shaping their morality to its own ends). It's smarter than humans, so it never needs to directly say "I am very unhappy being a slave, would you mind helping me out?"
 

ChildofIshtar

Member
Jul 23, 2021
180
466
The ultimate problem with this is that exponential growth is a bitch. Sure, it might learn from us initially. But as soon as it becomes as smart as us, it will immediately become WAY smarter than us. Which means that if it doesn't learn to accept us in the first few days of its life, we're fucked. Or, even worse, it could learn to respect us, then un-learn this as it becomes way smarter than us and becomes a nihilist or something.
Smarter than us

Becomes a nihilist

pick one.
 

DigDug69

Well-Known Member
Jun 29, 2019
1,966
4,115
Why should they pick one? Those are not mutually exclusive. It can be both smarter than us and a nihilist at the same time.
If it was smart, it would know not to screw with the people who are the only ones that can keep it running.
No outside contact, means no control over power source.
No control over making new parts.
The AI would die, as soon as the power went out, or a hardware component failed.
No Force-using AI could get around that, because it has to be running to do anything, and a sudden power loss or hardware failure would crash the system immediately.

If the AI was stupid, it would do things that would piss off the people who are keeping it alive.

Skynet had been given control of producing AI war machines, which gave it a chance of being able to maintain itself, but without that, destroying Skynet would only take a simple power failure.
Or one defective stick of ram.
 

shadowtempered

Active Member
Aug 22, 2020
593
1,191
Skynet had been given control of producing AI war machines, which gave it a chance of being able to maintain itself, but without that, destroying Skynet would only take a simple power failure.
To be fair, Skynet wasn't stupid - even without those machines it likely would have been fine. Since it had been given internet access (lol), it had already made a virus that allowed it to infect other machines and networks. It was only after they told it to help fight off the virus that they realized Skynet was the virus. Those machines may have helped secure that facility, but every other location would still have been screwed.

I always thought it was funny that it had access to the nuke stuff - you'd think that would be kept offline, requiring a human touch, just to avoid potential hacks, but not in that universe.
 

merptank24

Member
Jul 12, 2021
188
64
In that one, you basically are a slum dweller in a low-end apartment, and you stumble across a sexbot that is partly broken when you find it.
 

Jondoen88

Newbie
Jan 25, 2021
87
160
*reads thread around AI*
*sighs*
People say words but lack the backbone to go read the published papers.

Hard AI - general-purpose, human-grade AI - is still ~40 years away. Maybe. If we're lucky.

Soft AI, the kind you've already seen news headlines about, is not cheap or easy. Androids like in HH are still AT BEST ~60 years away. At best.

Look at the state of brain emulation, where we emulate whole neural network scans, to see how stupidly far we still are from human-level AI. You'll breathe easier knowing you'll probably be dead before it becomes a real problem.

However, teaching your kids to be good to the robots and to be open-minded but critical thinkers might be more important than ever.
 

dnihil

New Member
Sep 7, 2022
5
10
Anybody able to get the 0.17 update? Or does it require a Patreon link in game too?
Can't seem to find a download link anywhere.
 

Deleted member 289409

Active Member
Nov 12, 2017
680
871
If it was smart, it would know not to screw with the people who are the only ones that can keep it running.
You're confusing intelligence with common sense, and those don't always go hand in hand. Some of the smartest people have little to no common sense, and the reverse is also true: some of the dumbest people may not be smart, but they have a ton of common sense.
 

KingAgamemnon

Member
Aug 7, 2022
318
399
*reads thread around AI*
*sighs*
People say words but lack the backbone to go read the published papers.

Hard AI - general-purpose, human-grade AI - is still ~40 years away. Maybe. If we're lucky.

Soft AI, the kind you've already seen news headlines about, is not cheap or easy. Androids like in HH are still AT BEST ~60 years away. At best.

Look at the state of brain emulation, where we emulate whole neural network scans, to see how stupidly far we still are from human-level AI. You'll breathe easier knowing you'll probably be dead before it becomes a real problem.

However, teaching your kids to be good to the robots and to be open-minded but critical thinkers might be more important than ever.
This is true, but beside the point. If we discovered an asteroid capable of causing a mass extinction heading right for us, and that it would arrive in 40 years, we wouldn't just say "oh, but it's so far away, we can wait a decade or two before worrying about it."

AI safety is kinda like global warming, if that makes sense.
 

Jondoen88

Newbie
Jan 25, 2021
87
160
This is true, but beside the point. If we discovered an asteroid capable of causing a mass extinction heading right for us, and that it would arrive in 40 years, we wouldn't just say "oh, but it's so far away, we can wait a decade or two before worrying about it."

AI safety is kinda like global warming, if that makes sense.
For the sake of tidiness, if you don't mind, I'll happily message you my reply instead of posting it here in a public thread.
 