FREEDOM AND SAFETY

 

Recently, we have often heard the term “technological singularity”: the hypothesis that accelerating progress in technological invention will cause a runaway effect in which ordinary humans are someday overtaken by artificial intelligence.

 

The term may sound contemporary to our technological era, but in fact thinking about the singularity has a long philosophical history.

 

In 1958, Stanislaw Ulam, a Polish-American scientist working in mathematics and nuclear physics, first used the term “singularity” about technological progress in a conversation with John von Neumann, the Hungarian-American mathematician, physicist, computer scientist, and polymath.

 

“One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”  -  Stanislaw Ulam

 

In 1965, I. J. Good, a British mathematician and cryptologist, first described the notion of an “intelligence explosion”, speculating that it would be the unquestionable outcome once artificial general intelligence (AGI) greatly surpassed the capability of any human.

 

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”  -  I. J. Good

 

In 1981, Stanislaw Lem, a Polish writer of science fiction, philosophy, and satire, and a trained physician, published his science fiction novel “Golem XIV”, which depicts the acceleration of AI: Golem XIV, a military AI computer created to aid its builders in fighting wars, moves toward its own technological singularity.

 

In 1983, Vernor Steffen Vinge, an American science fiction author, published the “First Word” article on the singularity in Omni magazine, using the term “singularity” in a way specifically tied to the creation of computer intelligence.

 

“First Word” article by Vernor Vinge  - Courtesy of Josh Calder from FutureAtlas.com

 

“We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible”  -  Vernor Vinge

 
In 1984, Samuel R. Delany, an American author, professor, and literary critic, used “cultural fugue” as a plot device in his science fiction novel “Stars in My Pocket Like Grains of Sand”: a terminal runaway of technological and cultural complexity that destroys all life on any world on which it transpires.
 

In 1985, Ray Solomonoff, an artificial intelligence researcher, introduced the notion of a “speed explosion” and analyzed a likely evolution of AI in his article “The Time Scale of Artificial Intelligence”, giving a formula to predict when it would reach the “infinity point”. Eliezer Yudkowsky gave a succinct version of the argument in his 1996 article “Staring into the Singularity”:

“Computing speed doubles every two subjective years of work. Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again. Six months  -  three months  -  1.5 months … Singularity”  -  Eliezer Yudkowsky, summarizing Ray Solomonoff’s argument
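
To spell out the arithmetic (a back-of-the-envelope sketch based only on the numbers in the quote above, not a formula taken from Solomonoff’s or Yudkowsky’s text): if each successive doubling takes half as much calendar time as the previous one, the total time for infinitely many doublings is a convergent geometric series,

2 + 1 + 1/2 + 1/4 + … = 2 / (1 − 1/2) = 4 years,

so on this reckoning every remaining doubling is packed into roughly four years after human equivalence  -  the “infinity point”.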

 

In 1993, in another article, “The Coming Technological Singularity: How to Survive in the Post-Human Era”, Vinge argued that science fiction authors could not write realistic post-singularity characters who surpassed the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.

 

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”  -  Vernor Vinge

 

In 2000, Bill Joy, a prominent American computer scientist and a co-founder of Sun Microsystems, wrote an article for Wired magazine about the dangers of the singularity in the 21st century.

 

“Our most powerful 21st-century technologies  -  robotics, genetic engineering, and nanotech  -  are threatening to make humans an endangered species”  -  Bill Joy, in his article “Why the Future Doesn’t Need Us”

 

In 2005, Ray Kurzweil, an American computer scientist and futurist who has enormous faith in science, expounded the singularity phenomenon in his book “The Singularity Is Near”. The nub of his argument is that technology has been evolving so quickly that in the near future humans and computers will, in effect, meld to create a hybrid, bio-mechanical life form that extends our capacities unimaginably. He also suggested that somatic gene therapy will eventually replace human DNA with synthesized genes.

 

“By 2020, $1,000 (£581) worth of computer will equal the processing power of the human brain,” he says. “By the late 2020s, we’ll have reverse-engineered human brains.”  -  Ray Kurzweil

 

In 2007, Eliezer Yudkowsky, an American AI researcher and writer best known for the idea of friendly artificial intelligence, defined three logically distinct schools of singularity thought in an article on the Machine Intelligence Research Institute blog. In his view, the three schools’ core claims are incompatible with each other rather than mutually supporting.

 

Also in the same year, the Joint Economic Committee of the US Congress released a report about the future of nanotechnology, predicting that technological change would bring about the singularity in the mid-term future.

 

In 2009, Kurzweil, often called the prophet of innovation, and X PRIZE founder Peter Diamandis established Singularity University, with the mission “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”

 

In 2016, then President of the United States Barack Obama shared his concern about the economic impact of the singularity in an interview with Wired magazine:

“One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity  -  they are worrying about “Well, is my job going to be replaced by a machine?”  -  Barack Obama

https://medium.com/twogap/roadmap-of-technological-singularity-45fcfe3bc718