The DU Lounge
Related: Culture Forums, Support ForumsSo, I watched a video about AIs circumventing/ignoring commands given by humans...
Start at 2:10.
Artificial Intelligence doesn't know what you "mean" with a command. Accordingly, they interpret your demands in the most generous way possible, to technically obey your command with as little effort as possible, with no regard to whether they actually did what you actually meant them to do.
One example in the video is an AI-robot who was supposed to learn to not drop a pancake. The demand: "Keep the pancake away from the floor as long as possible." So, what did the robot do to keep the pancake away from the floor for as long as possible? It threw it high up into the air.
This got me thinking...
* Suppose Skynet comes to the conclusion to exterminate mankind. What if Skynet were programmed to act in the physical world with as little effort as possible? What if Skynet comes to the conclusion that the way to exterminate mankind with the least amount of real-world effort is to simply wait for mankind's natural extinction?
* Suppose Skynet sends a Terminator into the past to kill John Connor. What if John Connor gets severely wounded, is without heartbeat for more than 10 seconds, and then gets resuscitated? He was dead, so mission accomplished.
* Suppose Wall-E gets the order to remove the trash on the surface of planet Earth, so he simply dumps all the waste into the ocean, or buries it underground, or throws it into a volcano and burns it.
* Or suppose Wall-E is a robot who can learn and modify its own mind and it gets the order to reduce the amount of trash on the surface of planet Earth. What's the easiest solution to that? Redefine what counts as "trash".
genxlib
(5,528 posts)Check out this video about training AI to play hide and seek
The results really shocked the people who set up the program.
c-rational
(2,594 posts)Harker
(14,024 posts)As a lad, I was imbued with what I consider to be a healthy attitude about computers.
abqtommy
(14,118 posts)writing should be programmed into all AI software...
*First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. be programmed into all AI software...
see more about Issac and his 3 laws at: https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
disclaimer: I contribute financially to Wikipedia and do not receive any remuneration for using or
recommending their website.
DetlefK
(16,423 posts)How do you tell the pancake-flipping robot to not injure humans?
First you would have to give him the ability recognize its surroundings.
Then you would have to tell him what does and doesn't count as a human
- Is a mannequin a human? Well, it sure looks human!
- Is a cat a human? Well, it moves!
- Is a baby a human? Well, obviously not, because it has the wrong size!
- Is a person with odd skin-color a human? Well, obviously not if he/she has the wrong color!
- Is a person in a wheelchair a human? Well, obviously not, if he/she has wheels instead of legs!
And then you would have to tell the robot in medical terms what does and doesn't count as injury.
- If you don't lose a limb, does it still count as injury?
- If you don't bleed, does it still count as injury?
- If a robot is choking you, but neither cutting off your limbs nor giving you a bleeding wound nor applying enough force to break a bone, does it still count as injury?
- If a robot damages a liquid-nitrogen tank and the human suffocates, does that count as injury?
abqtommy
(14,118 posts)I do know that there are people that are smarter than me who know how to code and program.. Let's see if we can find them...