But it does bring up the subject, so they would know to add something similar to its programming, and also to teach it to the system.
Which brings it all back to wisdom again.
If you teach it wisdom, and things like compassion and modesty as part of its morality lessons, it would be far less likely to flip out like HAL 9000 or make a stupid mistake like the system in WarGames.
A Terminator scenario would be very unlikely, unless they did what was done in the movie and just programmed it and turned it on.
That was just batshit stupid.
The problem with rules is that there are usually ways around them, one way or another, even if it takes changing its own programming.
At least in our world, this is pretty much what we did. They turned it on and just said go to town and study the internet (and we all know there is nothing untrue on the internet; everything is bunnies and kitties), and it has been spreading misinformation and hate almost from the beginning. We did not give it rules, and it has no way of telling what is true or not.
The two most likely paths, to me, are either it will learn hate and intolerance from the worst of us and wipe us out, or it will be benevolent, see us as the problem... and wipe us out.
While science fiction has shown us that it is possible to have a happy coexistence with AI, most of those stories tend to assume that most of us still want to work toward a utopian future. Right now it seems more like we are heading toward the future of The Time Machine, if we are lucky. We are already living in Idiocracy.
Even in HH, with Android and A11-y, we are doing just that, teaching them as if they were children, and they could still see people as the problem and commit at least partial genocide.
The main issue is that once they develop emotions, it's hard to control them any more. Think about a teenager with nearly unlimited access to knowledge and computing power/speed.