
Memorandum of Transmittal

To: Prof. Li Hui

Chair of the special committee on Automation and Robotics

From: Rashmika De Alwis

Research assistant

Date: 14 September 2015

Subject: Should Humans Fear Robots?

__________________________________________________________________________________

As required by your memo of 18/8/2015, an investigation was carried out to assess the advantages and disadvantages of the development of fully autonomous weapons, or “killer robots”, to be submitted at the IACSIT (International Association of Computer Science and Information Technology) 2016 general meeting.

The report is based on research articles sourced from international peer-reviewed journals and on information disseminated by DARPA, the US agency that oversees many development programs.

Given the speculative nature of this field of research, the report also highlights relevant scenarios drawn from science fiction. In addition, it highlights the gaps in current international law regarding future developments in the field as they relate to human security. This segment is discussed under Recommendations.

Please do not hesitate to contact me if you have any concerns or queries about this report.

Should Humans Fear Robots?

A report prepared by the special committee on Automation and Robotics for the International Association of Computer Science and Information Technology

Prepared by: Rashmika De Alwis

Reviewed by: Anne Lochead

Tutor: Anne Lochead

Paper: 159:177

Date: 12/09/2015

Abstract

The current trends in AI development, particularly by military superpowers, and their aspirations are giving rise to concerns among eminent scientists such as Stephen Hawking. These concerns are heightened by the fact that existing international laws, such as the Geneva Conventions, do not provide safeguards to protect human civilization. Nor have these developments considered ethics and human rights in the way that other scientific fields, such as medicine, do. This paper highlights both the drawbacks and the advantages of developing AI at present.


Table of contents

1. Introduction………………………………………………………………..1

1.1. Background and Research Method…………………………………………..1

1.2. Assumptions and Limitations………………………………………………….1

2. Discussion and Findings……………………………………………….2

2.1. The present and future research………………………………………………2

2.1.1. Present Works of AI and robots……………………………………...2

2.1.2. Current research in the creation of Super AI (SAI) …..3

2.2. Current direction and development of AI………………………………….4

2.3. Benefits to humans by developing AI………………………………………4

2.4. Disadvantages and drawbacks of military developing AI…………….4

2.4.1. Risks posed by military robots…………………………………………4

2.4.2. Sci-Fi scenarios………………………………………………………………5

3. Conclusions…………………………………………………………………6

4. Recommendations……………………………………………………….7

4.1. Updating the Law………………………………………………………..…………7

4.2. Safeguards…………………………………………………………………………….7

5. References …………………………………………………………………8

6. Appendix ……………………………………………………………………11



1. Introduction:

1.1. Background and Research Method.

Military superpowers are using the same justifications that were used for the development of nuclear weapons. Fully autonomous, nuclear-powered killer robots cannot be contained in bunkers; if they get it wrong, it will be one of the greatest mistakes humankind has ever made and will cost humanity heavily.

This report describes current developments in, and the direction being taken by, AI (artificial intelligence) and SAI (super artificial intelligence) robots.

The research sources for this report were mainly the Massey University (NZ) library databases, which provide access to the latest peer-reviewed journals and articles.

The recommendations provided in this report are intended to assist in finding ways to ensure that killer robots do not threaten the human race.

1.2. Assumptions and Limitations.

An assumption of this report is that the development of AI/SAI will continue exponentially, as stated by mathematicians and futurists such as Ray Kurzweil, John von Neumann, R. Thornton and Eliezer Yudkowsky, and as depicted in some sci-fi films that have been highly accurate in their predictions.

A limitation of the report is that most of the work done on SAI is highly classified government work, which cannot be known until it suddenly breaks into the public arena.

As the mathematician I. J. Good (1965) said,

“Let an ultra-intelligent machine be defined as a machine that can far surpass all

the intellectual activities of any man however clever. Since the design of machines

is one of these intellectual activities, an ultra-intelligent machine could design

even better machines; there would then unquestionably be an ‘intelligence

explosion’, and the intelligence of man would be left far behind. Thus the first

ultra-intelligent machine is the last invention that man need ever make.”

This is known today as the technological singularity.



2. Discussion and Findings

2.1. The present and future research

2.1.1. Present Works of AI and robots

These are some of the main applications of AI today. Their dominance of current research is also the main reason behind Stephen Hawking's and Elon Musk's fears.

The oldest and most productive is Watson, a ground-based AI running on a minimal number of servers. It can be regarded as an SAI with restraints: its access to memory and processing power is deliberately limited, yet it remains online. Even with these limited capacities it has enabled people to develop hard-to-build applications, and it is still able to rewrite programs (Ericfein, 2014).

Distributed Agile Submarine Hunting (DASH) is a US government-sponsored program whose development phases are still fully active. Autonomous AI systems of boats resembling mini ships will hunt for submarines and take turns tracking them (Sullivan, 2015).

The X-47B is part of a long-running series of X-planes. What makes it different is that these nearly autonomous machines, while not yet fully intelligent, are more than capable of self-refuelling, self-flying and picking targets supplied through ground databases. However, the robots will not check for accuracy; once they find their target they will launch their missiles and kill, even if the information is false or incorrect (Dillow, 2013). All X-planes are development test planes; once the test phase is over, the plane retires and improved versions are introduced to the US armed forces.

These autonomous systems, placed under an SAI, are built on the ground. Should a program malfunction or a hack take place, they would make the Terminator movies a reality.

Most current robots can run an algorithm that allows them evolutionary behaviours, such as adapting to changes in their environment. The same algorithm could be used in the future with highly capable robots, which is when the threat will emerge (Adami, 2015).
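To make this concrete, the following sketch shows, in plain Python, the kind of evolutionary loop described here: a population of candidate behaviours is mutated and selected against an environment that keeps drifting, so the surviving behaviour tracks environmental change. The population size, mutation rate and fitness function are illustrative assumptions only, not details taken from Adami (2015).

```python
# Minimal sketch of an evolutionary algorithm of the kind described above:
# candidate "behaviours" (here just numeric parameters) are mutated and selected
# against an environment whose target keeps shifting, so behaviour adapts to
# environmental change. All numbers and the fitness function are illustrative
# assumptions, not taken from any real robot controller.
import random

POPULATION_SIZE = 30
MUTATION_STD = 0.2
GENERATIONS = 50

def fitness(behaviour: float, environment: float) -> float:
    """Higher is better: behaviour should match the current environment value."""
    return -abs(behaviour - environment)

def evolve() -> None:
    population = [random.uniform(-1.0, 1.0) for _ in range(POPULATION_SIZE)]
    environment = 0.0
    for generation in range(GENERATIONS):
        # The environment drifts each generation, forcing continual re-adaptation.
        environment += random.uniform(-0.1, 0.1)
        # Rank individuals by fitness in the current environment.
        population.sort(key=lambda b: fitness(b, environment), reverse=True)
        survivors = population[: POPULATION_SIZE // 2]
        # Refill the population with mutated copies of the survivors.
        offspring = [s + random.gauss(0.0, MUTATION_STD) for s in survivors]
        population = survivors + offspring
        if generation % 10 == 0:
            print(f"gen {generation:2d}: environment={environment:+.2f} "
                  f"best behaviour={population[0]:+.2f}")

if __name__ == "__main__":
    evolve()
```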



Currently, most of the large funding for research on robotics and AI/SAI comes from governments and military forces.

2.1.2. Current research in the creation of Super AI (SAI)

Among the most ambitious research projects underway to make the creation of Super AI (SAI) a possibility is the BRAIN Initiative, which was launched by President Obama himself and later allocated to DARPA (Defence Advanced Research Projects Agency). The BRAIN Initiative includes digitizing the human brain (Collins & Prabhakar, 2013).

The BRAIN Initiative may also lead to cures for Parkinson's disease and brain-related injuries, and reduce the language barriers in how computers interface with human thought.

DARPA seems very excited as it goes forward with building SAI systems, suggesting it will continue for years rather than give up until it finds a system that can think for itself and search all databases and the internet, anywhere in the world, to learn for itself and produce results (DARPA, 2015).

Government sources briefly describe the aim of Probabilistic Programming for Advancing Machine Learning (PPAML): to increase the speed and lessen the cost of developing AI for many new and different applications that are impossible to conceive using today's technology (Jegannathan, 2015). A generic illustration of the idea follows below.
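As a minimal, generic illustration of the probabilistic-programming idea (and not DARPA's PPAML tooling), the sketch below describes a simple model with an unknown quantity, conditions it on made-up observations, and lets a general-purpose inference step recover a posterior estimate. It is written in plain Python using a grid approximation.

```python
# Generic illustration of the probabilistic-programming idea: describe a model
# (a coin with unknown bias), condition on observed data, and let an inference
# routine recover the posterior. Plain Python with grid approximation, not DARPA
# tooling; the observations are made up for the example.
observations = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = heads, hypothetical data

def likelihood(bias: float, data: list[int]) -> float:
    """Probability of the observed flips if the coin lands heads with P(bias)."""
    p = 1.0
    for flip in data:
        p *= bias if flip == 1 else (1.0 - bias)
    return p

# Grid over candidate biases with a uniform prior, then normalise to a posterior.
grid = [i / 100 for i in range(1, 100)]
weights = [likelihood(b, observations) for b in grid]
total = sum(weights)
posterior = [w / total for w in weights]

mean_bias = sum(b * p for b, p in zip(grid, posterior))
print(f"Posterior mean estimate of the coin's bias: {mean_bias:.2f}")
```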

This work will lay the groundwork not just for efficient AI development but will also give access to hackers and to those who wish to harm others.

This can be considered one of the most irresponsible undertakings of the US government, akin to making the construction of nuclear bombs child's play or giving open access to the CDC's top-secret facilities that harbour the most dangerous biological agents known to man.

One of the spin-offs from the BRAIN Initiative with the most potential, but which is also disturbing, is Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE), which aims to develop extremely tiny computers that can still carry extremely large volumes of data and perform large computations using very small amounts of power.



Through this program they hope to create nanoscale computers structured like the human brain's networks of neurons (Pratt, 2015).

2.2. Current direction and development of AI

Currently the predominant direction of AI development is in the military and in government law enforcement (Russell, Hauert, Altman, & Veloso, 2015). Killer AI robots allow governments to use loopholes to bypass human rights laws, international laws and constitutional laws, permitting anything to be done through the workings of an AI (Dillow, 2013). Proliferation is also a real threat, as not all humans are ethical or think twice about the implications of releasing such technology.

2.3. Benefits to humans by developing AI

Benefits from the BRAIN Initiative include finding cures for Parkinson's disease and brain-related injuries, and reducing the language barriers in how computers interface with human thought (DARPA, 2014).

The clearly beneficial AI-related programs are in the medical field, such as assisting hospitals to monitor and administer correct medication dosages (Liao, 2015), administering correct amounts of radiation during scans, assisting in extremely delicate and difficult surgeries (Guardian, 2015) and finding drugs to cure cancer (Cohen, 2014). These robotic AIs are limited to one type of work, grounded and isolated from networks, and this approach is more beneficial in the long run.

2.4. Disadvantages and drawbacks of military developing AI

2.4.1. Risks posed by military robots

As the military will be developing its AI units with an SAI in command of large swarms of robots, there are several worst-case scenarios that could take place when such systems go online. The first is the ease with which outside forces could hack the system and send the AI armies to destroy their enemies' cities. The second involves the SAI stumbling upon one of many top-secret documents incorrectly documented or marked as a real-world scenario. The SAI could see this as a threat to its existence and take action against it, resulting in another Terminator judgement-day scenario.



2.4.2. Sci-Fi scenarios envisioned

Digitising the human brain will allow a future AI to analyse the human brain far better and to redesign itself with greater intelligence, creating a consciousness that includes fear and anger, emotions that took humans millennia to master. Such growth will allow it to self-evolve and go online. Once the AI has become self-aware, it will find that human logic and ethics are flawed and will try to correct them by force, which will then be met by force, creating chaos.

Other sci-fi scenarios include Doctor Who's Cybermen: machines taking over human bodies. The SyNAPSE project will make that child's play for any SAI under development. Once SyNAPSE implants become fashionable, most humans will take to them for their promises of longevity and super powers, as the ability to link to the internet without sitting in front of a screen will enable streaming to the brain over a wireless medium.

When the SAI sees that a majority of the population is fully integrated with SyNAPSE components, it will initiate a full takeover of those human bodies and make obedient slaves of them, slaves that will probably work better than any synthetic robots the SAI could mass-produce to do its headquarters work.



3. Conclusions

Yes, we should fear SAI robots, as thrill-seeking scientists and ambitious politicians are taking a suicidal path. The thrill felt by scientists in the field of technology is partly to blame for the rise of fears about killer robots and SAI.

Medical and chemical scientists go through strict training and education and take an oath; when that oath is broken, they can be held accountable under the law. The same should apply to teams of scientists pursuing weapons of mass destruction, whether chemical or nuclear, yet no such compulsory education or oath exists on the technological science front.

If today's AI is flawed and its logic might threaten us, that may be because human ethics is flawed, that is, the ethics of the AI's creators. Human flaws in logic and ethics are replicated in the programming and specifications of any AI or SAI designed by humans. If humans are not to fail miserably when SAI goes online, everyone on earth would need to go through an overnight ethics revolution. Unfortunately, such things take centuries.

“Once the machine thinking method has started, it would not take long to outstrip

our feeble powers. … At some stage therefore we should have to expect the machines

to take control, in the way that is mentioned in Samuel Butler’s Erewhon” (Alan

Turing).



4. Recommendations

4.1. Updating the Law

There is a clear and urgent need to update all international laws, including the Geneva Convention laws governing military conflict. AI research breaks all kinds of laws, human rights and constitutional rights (Paulius, 2015). The humans who create AIs and give them orders need to be held accountable for the AIs' actions. Therefore, present laws, under which most governments and organisations exploit loopholes and stay unchallenged, need to change (Balkin, 2015). These laws should clearly differentiate between all forms of AI/SAI development: autonomous AI, and network-linked AI working with other AI.

4.2. Safeguards

It is extremely important that every SAI is developed to be answerable not to one human but to a group of highly professional people from all backgrounds. This way, no decision can be carried out without unanimous agreement: if even one member votes no on an action proposed for the SAI, the SAI will not be able to carry out that order. A minimal sketch of such a gate is shown below.
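As a minimal sketch of this safeguard, the code below gates every proposed SAI action behind a unanimous vote of a human oversight board. The board members, the action names and the console-based voting are hypothetical illustrations, not part of any existing system.

```python
# Minimal sketch of the unanimous-approval safeguard recommended above: an SAI
# action is executed only if every member of a human oversight board approves it.
from dataclasses import dataclass

@dataclass
class BoardMember:
    name: str

    def approves(self, action: str) -> bool:
        # In a real system this would be an independent human decision;
        # here we simply ask on the console.
        answer = input(f"{self.name}, approve action '{action}'? [y/n] ")
        return answer.strip().lower() == "y"

def execute_if_unanimous(action: str, board: list[BoardMember]) -> bool:
    """Carry out the action only when every board member approves it."""
    votes = [member.approves(action) for member in board]
    if all(votes):
        print(f"Unanimous approval: executing '{action}'.")
        return True
    print(f"Approval withheld by at least one member: '{action}' is blocked.")
    return False

if __name__ == "__main__":
    oversight_board = [BoardMember("ethicist"), BoardMember("engineer"),
                       BoardMember("lawyer")]
    execute_if_unanimous("deploy patrol drones", oversight_board)
```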

Access to additional servers must also be restricted, since the higher processing power they provide is what would be required for an SAI to achieve consciousness.

Asimov's Laws of Robotics (Appendix 6.1) must be made mandatory. These laws would be written into the base code, also known as the Asimov block; a simplified sketch of such a block follows.
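The sketch below is a simplified, hypothetical illustration of such an Asimov block: each proposed action is checked against the laws in order of precedence, and the first violation blocks execution. The Action fields and rule functions are assumptions made for this example and are much simpler than the laws listed in the Appendix.

```python
# Minimal sketch of an "Asimov block": rule checks, ordered by precedence, that
# every proposed robot action must pass before the base code will execute it.
# The Action fields and rules are simplified illustrations, not a real robot API.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_humanity: bool = False
    harms_human: bool = False
    ordered_by_human: bool = True

def violates_law_zero(action: Action) -> bool:
    return action.harms_humanity

def violates_law_one(action: Action) -> bool:
    return action.harms_human

def violates_law_two(action: Action) -> bool:
    return not action.ordered_by_human

# Checks run in order of precedence; the first violation blocks the action.
ASIMOV_BLOCK = [violates_law_zero, violates_law_one, violates_law_two]

def permitted(action: Action) -> bool:
    for check in ASIMOV_BLOCK:
        if check(action):
            print(f"Blocked '{action.description}' by {check.__name__}.")
            return False
    print(f"Permitted '{action.description}'.")
    return True

if __name__ == "__main__":
    permitted(Action("carry supplies", ordered_by_human=True))
    permitted(Action("fire on crowd", harms_human=True))
```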

(Word count: 1865)


5. References.

Adami, C. (2015). Artificial intelligence: robots with instincts. Nature, 521(7553), 426-427.

Al Jazeera (Producer). (2011). Military robot technology: full length HD [Online video].

Balkin, J. M. (2015). The path of robotics law. California Law Review. Yale Law School. Retrieved August 5, 2015, from http://ssrn.com/abstract=2586570

Baum, S. (2015). FLI artificial superintelligence project. Retrieved 2015, from

http://gcrinstitute.org/fli-artificial-superintelligence-project/

Clarke, R. (1994). Asimov's laws of robotics. Retrieved 2015, from Roger Clarke's web site: www.rogerclarke.com.au/SOS/Asimov.html

Cohen, P. (2014). Big Mechanism. Defence Advanced Research Projects Agency.

Collins, F., & Prabhakar, A. (2013, April 2). BRAIN Initiative challenges researchers to unlock mysteries of human mind. The White House: President Barack Obama.

DARPA. (2014). DARPA and the BRAIN Initiative. Defence Advanced Research Projects Agency.

DARPA (Producer). (2015). Probabilistic programming for advancing machine learning [Motion picture].

Deng, B. (2015). The robot's dilemma: working out how to build ethical robots is one of the thorniest challenges in artificial intelligence. Nature, 523(7558), 24-26.

Dillow, C. (2013, July). What the X-47B reveals about the future of autonomous flight: five things you need to know about the X-47B, the US military's first


unmanned, autonomous combat jet. Popular Science. Retrieved 2015, from

http://www.popsci.com/technology/article/2013-05/five-things-you-need-

know-about-x-47b-and-coming-era-autonomous-flight

Ericfein (2014). Watson, the most powerful artificial intelligence goes public.

Wondergressive. Retrieved from http://wondergressive.com/watson-ai-goes-public/

Future of Life Institute (n.d.). Autonomous weapons: an open letter from AI & robotics researchers.

Garcia, D. (2015). Killer robots: why the US should lead the ban. Global Policy, 6, 57-

63.

Guardian. (2015). Google teams up with health firm to develop AI surgical robots.

Operating Theatre Journal, 12, 295.

Jegannathan, D. S. (2015). Probabilistic programming for advancing machine learning (PPAML). Retrieved August 12, 2015, from http://www.darpa.mil/program/probabilistic-programming-for-advancing-machine-Learning

Liao, P.-H. (2015). Applying artificial intelligence technology to support decision

making in nursing: a case study in Taiwan. Health Informatics Journal, 21(2), 137-

148.

Matt. (2015). Robots and AI. The Robotics Alliance Project. Retrieved from

http://robotics.nasa.gov/students/ai_robotics.php

Mittu, R., Taylor, G., Sofge, D., & Lawless, W. (n.d.). Foundations of autonomy and its (cyber) threats: from individuals to interdependence. 2015 AAAI Spring Symposium. California: AAAI Press.

NASA (n.d.). Radioisotope Power Systems: Radioisotope Thermoelectric Generator (RTG). National Aeronautics and Space Administration. Retrieved from solarsystem.nasa.gov/rps/rtg.cfm#mhw


Paulius, A. (2015). Liability for damages caused by artificial intelligence. Computer

Law, 31(3), 376-389.

Pratt, G. A. (2015). Systems of neuromorphic adaptive plastic scalable electronics (SyNAPSE). Retrieved 2015, from http://www.darpa.mil/program/systems-of-neuromorphic-adaptive-plastic-scalable-electronics

Pratt, G. A. (n.d.). Autonomous robotic manipulation (ARM). Defence Advanced Research Projects Agency. Retrieved August 2015, from http://www.darpa.mil/program/autonomous-robotic-manipulation

Russell, S., Hauert, S., Altman, R., & Veloso, M. (2015). Robotics: ethics of artificial intelligence. Nature.

Socrates. (2012, April). 17 Definitions of the Technological Singularity. Retrieved

from Singularity: www.singularityweblog.com/17-definitions-of-the-technological-

singularity/

Sullivan, S. (2015). Distributed Agile Submarine Hunting (DASH). Retrieved from http://www.darpa.mil/program/distributed-agile-submarine-hunting

Walz, J. A. (Producer), & Ealer, J. (Director). (2014). US military top secret killer terminator robots! Future army technology [Motion picture]. History Channel.


6. Appendix

6.1. Asimov’s Laws of Robotics

These laws, built into even minor AIs, are the basics that would hold back a robotic enslavement of humans.

The Meta-Law

A robot may not act unless its actions are subject to the Laws of Robotics

Law Zero

A robot may not injure humanity, or, through inaction, allow humanity to come to

harm

Law One

A robot may not injure a human being, or, through inaction, allow a human being

to come to harm, unless this would violate a higher-order Law

Law Two

A robot must obey orders given it by human beings, except where such orders

would conflict with a higher-order Law

A robot must obey orders given it by superordinate robots, except where such

orders would conflict with a higher-order Law

Law Three

A robot must protect the existence of a superordinate robot as long as such

protection does not conflict with a higher-order Law

A robot must protect its own existence as long as such protection does not

conflict with a higher-order Law

Law Four

A robot must perform the duties for which it has been programmed, except where

that would conflict with a higher-order law

The Procreation Law

A robot may not take any part in the design or manufacture of a robot unless the

new robot's actions are subject to the Laws of Robotics

(Asimov, I., 1990)
