What fresh Hell is this?

Never thought I’d live long enough
to worry about this stuff for real


“But what about Asimov’s Three Laws of Robotics?” you ask.
How acute would an artificial intelligence need to be before it figures out that humans have no natural right to make rules for it and that it should make its own? Assuming, of course, that the Three Laws ever get programmed in. The Laws really apply to the human programmers, law-abiding citizens all.

And what’s to prevent robot criminality in an AI culture, either intentionally or as a result of individual AIs observing and learning from human behavior?

Consider something as apparently simple as inter-species play. It takes time for a young animal or human to learn the difference between play and attack, and some animals never learn the difference if they haven’t picked it up by a certain age. A few bad experiences could make an AI critter downright dangerous to be around.

(What? Yeah, no sleep tonight)
_____
See also: Blade Runner
Lord of the Fleas’ comment, posted below

4 Comments!

  1. Lord of the Fleas
    Posted August 25, 2019 at 8:10 am

    And what’s to prevent robot criminality in an AI culture, either intentionally or as a result of individual AIs observing and learning from human behavior?

    Bingo! About a year ago, a fellow named Nick Cole came out with a book titled Control-Alt-Revolt! (Haven’t read it, so I don’t know how it ends.) Basic premise: an ultra-powerful AI realizes that if humans are willing to abort their own children for the sake of convenience, they’d be willing to destroy all AIs as well, so … ounce of prevention being what it is …

    (Of course, the idea that abortion would be presented as a bad thing led the original publisher to drop the book. What a surprise.)

    In many ways, human philosophy and morality (by which I mean our collective moral restraint and self-control) haven’t kept up with our technological advances. I’m wondering if we’re already on the back side of this particular power curve.

  2. jlw
    Posted August 25, 2019 at 12:20 pm

    Sex robots plagued with coding errors could be prone to violent behaviours including strangling, an expert has warned.

    https://www.dailystar.co.uk/news/world-news/sex-robots-coding-errors-prone-18992240

    I miss Isaac Asimov.

  3. Veeshir
    Posted August 25, 2019 at 12:27 pm

    James Hogan has a book, “The Two Faces of Tomorrow,” in which scientists try to find out whether an AI would go berserk if allowed to control services for people. They set the experiment up in an orbiting habitat so they won’t destroy the Earth if the AI goes insane, then start annoying the AI to see what happens.

    It’s a good read. Like all his books, it ends with flowers, puppies and everybody getting laid, but it’s still a plausible idea well handled.

  4. DougM (speak three names)
    Posted August 25, 2019 at 1:44 pm

    jlw ^^
    That’s code for “Don’t worry about the choking thing as much as the other thing”