Waiting for the Great Leap… Forward?

J. Hughes, Institute for Ethics and Emerging Technologies

Prepared for the Singularity Summit, San Francisco

September 2007

AGI is Likely

• Sentient, self-willed, greater-than-human machine minds are very likely in the coming fifty years.

AGI Probably Very Dangerous

• Steps must be taken to ensure its safety.

AGI Will Be Radically Alien

• Empathy for human beings is the product, at least, of embodied mammalian brains.

Attempt FAI

• Friendly AI (FAI) should be attempted, even though the attempt is likely futile

Motivations are Editable

Millennialist Cognitive Biases

• Yes, Apocalypse and the Rapture are both possible

• But we shouldn’t assume either

• We have some ability to determine outcomes

Emergent, Designed, Evolved

• Self-willed minds may evolve from primitive designed AI in infosphere ecosystem

Detecting Dangerous AGI

• Connecting S^ with cyber-security initiatives

Most Cybersecurity Ignores AGI

• Most cybersecurity analysis dismisses designed or emergent AGI

Global Tech Regulation

• Technologies of mass destruction require transnational regulation

AGI Police Infrastructure

• Detection and counter-measures may require machine minds as well.

Human Intelligence Augmentation

• Needed to keep pace with AI

CogAug & Uploads as Safe AGI

• Perhaps all AGI should be driven by mammal-origin brains

Structural Unemployment

• Need for a new social contract around social provision, labor, wages, education and retirement

Robot Rights

• Which minds have which rights and responsibilities?

• Engineering slave minds vs. flourishing minds, but within social limits

Licensing Superpowers

• If you need a license to drive a car, why not an AI-powered brain?

• If only governments should have nukes, what about S^AIs?

Regulating Singularity X-Risks

• When do we consider bans?

A Good S^ is Possible

• But we need to work at it