Singularity: artificial intelligence and the survival of the human race

Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy began to double every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world’s economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligences causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.
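
The comparison is just doubling-time arithmetic: a doubling time of t years implies a continuous annual growth rate of ln 2 / t. A minimal sketch, using only the figures quoted above:

```python
import math

# Doubling times quoted in the paragraph above (in years).
doubling_times = {
    "hunter-gatherer economy": 250_000,
    "agricultural economy": 900,
    "industrial economy": 15,
    "post-singularity, quarterly (Hanson)": 0.25,
    "post-singularity, weekly (Hanson)": 7 / 365,
}

for era, t in doubling_times.items():
    rate = math.log(2) / t  # continuous annual growth rate implied by doubling time t
    print(f"{era:>38}: doubles every {t:g} years (~{rate:.2%}/year continuous)")
```

The agricultural-to-industrial ratio (900 / 15 = 60) is the "sixty times faster" figure above.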

Superhuman intelligences, however, may have goals inconsistent with human survival and prosperity. Artificial intelligences might simply eliminate the human race for access to scarce resources, and humans would be powerless to stop them.

One approach to prevent a negative singularity is an AI box, whereby the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. Such a box would have severely restricted inputs and outputs, perhaps only a plaintext channel. However, a sufficiently intelligent AI may simply be able to escape from any box we can create.
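
As a rough illustration of the plaintext-only channel, the sketch below runs a hypothetical untrusted program (`boxed_ai.py` is a stand-in name) as a child process that can exchange nothing but lines of text with its operator. It shows only the channel restriction; a real box would also need OS-level isolation (no network, no filesystem access, resource limits), which a subprocess alone does not provide.

```python
import subprocess

def ask_boxed_ai(question: str, timeout: float = 5.0) -> str:
    """Send one plaintext question to the boxed process and return its plaintext reply."""
    result = subprocess.run(
        ["python", "boxed_ai.py"],  # hypothetical boxed program
        input=question,
        capture_output=True,
        text=True,                  # plaintext in, plaintext out; no other channel
        timeout=timeout,            # do not wait forever on an uncooperative process
    )
    return result.stdout

# Hypothetical usage:
# reply = ask_boxed_ai("Summarize your reasoning in one sentence.")
```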

Eliezer Yudkowsky has proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that if the first real AI were friendly, it would have a head start on self-improvement and could thus prevent other unfriendly AIs from developing, as well as provide enormous benefits to mankind. The Singularity Institute for Artificial Intelligence is dedicated to this cause.

Implications for human society
In 2009, leading computer scientists, artificial intelligence researchers, and roboticists met at the Asilomar Conference Grounds near Monterey Bay in California to discuss the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and have achieved "cockroach intelligence."

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy.

A United States Navy report indicates that, as military robots become more complex, there should be greater attention to the implications of their ability to make autonomous decisions.

The Association for the Advancement of Artificial Intelligence has commissioned a study to examine this issue, pointing to programs like the Language Acquisition Device, which can emulate human interaction.

Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with either the First or Second Law.
A psychotic human or a robotic hacker, however, could cause problems even for robots governed by Asimov's laws.
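
The laws form a strict priority hierarchy, which can be caricatured in code. This is a toy sketch only: the boolean predicates are hypothetical stand-ins (deciding whether a real action harms a human is exactly the unsolved part), and the conditional clauses of the Second and Third Laws are reduced here to simple priority ordering.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would injure a human, or allow one to come to harm
    disobeys_order: bool = False    # conflicts with an order given by a human
    self_destructive: bool = False  # endangers the robot's own existence

def permitted(action: Action) -> tuple[bool, str]:
    # Check the laws in priority order; the first violated law rejects the action.
    if action.harms_human:
        return False, "violates First Law"
    if action.disobeys_order:
        return False, "violates Second Law"
    if action.self_destructive:
        return False, "violates Third Law"
    return True, "permitted"

print(permitted(Action(self_destructive=True)))  # (False, 'violates Third Law')
```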

Hawkins (1983) writes that "mindsteps", dramatic and irreversible changes to paradigms or world views, are accelerating in frequency, as quantified in his mindstep equation. He cites the inventions of writing, mathematics, and the computer as examples of such changes.
Ray Kurzweil's analysis of history concludes that technological progress follows a pattern of exponential growth, following what he calls The Law of Accelerating Returns. He generalizes Moore's law, which describes geometric growth in integrated semiconductor complexity, to include technologies from far before the integrated circuit.
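
In generic form, growth with a fixed doubling time T follows the relation below; Kurzweil's analysis fits such curves to historical data (not reproduced here), with the Law of Accelerating Returns positing that T itself shrinks over time.

```latex
% Capability C at time t, given capability C_0 at time t_0 and doubling time T.
% Moore's law corresponds roughly to T \approx 2 years for transistor counts.
C(t) = C_0 \cdot 2^{(t - t_0)/T}
```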