“Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.” Wikipedia
If you have been on Earth for more than 15 years, you have witnessed amazing advances in technology arriving at an ever-increasing rate. In fact, even on the intuitive linear view, if you were born in the year 2000, computers have doubled their capabilities approximately 16 times since then.
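That back-of-the-envelope figure can be sanity-checked in a few lines of Python. The one-doubling-per-year period below is an assumption implied by the article's own numbers (16 doublings since 2000), not a precise statement of Moore's law, which is more often quoted as a doubling every 18 to 24 months.

```python
# Sanity check of the "roughly 16 doublings since 2000" figure.
# Assumption: one doubling per year, implied by the article's numbers.
start_year = 2000
current_year = 2016  # assumed year of writing
doublings = current_year - start_year
growth_factor = 2 ** doublings

print(f"{doublings} doublings -> {growth_factor:,}x the capability")
# prints: 16 doublings -> 65,536x the capability
```

Even under a slower 24-month doubling period, the cumulative factor since 2000 would still be in the hundreds, which is the point the exponential-growth argument rests on.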
However, that basis does not capture the reality of current growth as defined by futurist Ray Kurzweil’s Law of Accelerating Returns:
“An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So, we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth.
Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.”
“A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential. When this happens, a paradigm shift (i.e., a fundamental change in the approach) occurs, which enables exponential growth to continue.”
Ray Kurzweil, March 7, 2001.
“The technological singularity (also, simply, the singularity) is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.” Wikipedia
The words “unfathomable changes” in the above definition should be sufficient cause for reasonable pause and consideration. For clarification: all of us at TheStartup.com believe in technology and in the advancements, conveniences, and life-prolonging and life-sustaining benefits that accrue to humanity as a result.
Nevertheless, given the immutable laws of causality, and the double-edged sword of inevitability, now might be a very good time for introspection and collective deliberation.
Among the elites in the field of A.I., there are ongoing discussions and “thought-casting” pertaining to the merging of biological and machine intelligence (the Singularity), wherein a “neural lace” (an injectable mesh) would provide a direct interface between the human brain and machine intelligence and data access. The possibilities of this new “altered reality” are equaled perhaps only by the potential consequences.
Some of the most brilliant minds of our time share the same concerns:
“I think A.I. is probably the single biggest item in the near term that’s likely to affect humanity. So, it’s very important that we have the advent of A.I. in a good way, that it’s something that if you could look into a crystal ball and see the future, you would like that outcome. Because it is something that could go wrong… So, we really need to make sure it goes right.”
“If we can effectively merge with A.I. by improving the neural link between the cortex and your digital extension of yourself – which already exists, it just has a bandwidth issue – then effectively you become an A.I. human symbiote. And if that then is widespread, [where] anyone who wants it can have it, then we solve the control problem as well. We don’t have to worry about some evil dictator A.I., because we are the A.I. collectively. That seems to be the best outcome I can think of.”
“I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.”
“I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
“The development of full artificial intelligence could spell the end of the human race.”
“It would take off on its own, and redesign itself at an ever-increasing rate.”
“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Stephen Hawking warned that the creation of powerful artificial intelligence will be “either the best, or the worst thing, ever to happen to humanity”, and praised the creation of an academic institute dedicated to researching the future of intelligence as “crucial to the future of our civilization and our species”.
In the pursuit of balance and fair play, here are more quotes from some of the brightest minds on the subject of AI.
I have written about and discussed Artificial Intelligence many times over the course of the last 10-15 years. It has become one of my “pet” subjects, as I genuinely believe this technology is simultaneously exhilarating and terrifying.
As a long-time “Trekkie” (Star Trek fan), I can’t help envisioning a Borg-like human existence, and I wonder whether A.I. represents the cumulative contents of Pandora’s Box.
Whether this technology embodies the biggest threat to human existence or exemplifies the greatest blessing in our relatively brief history, I, for one, am grateful for Elon Musk’s billion-dollar crusade to prevent an AI apocalypse, and I believe counter-technology should be at the forefront of every reasonably minded individual’s thinking.