
Artificial Superintelligence

What happens when artificial intelligence gets loose in the world?

Every parent wonders how their kids will turn out when they grow up and become independent in the world, and speaking from personal experience, it’s such a relief to see one’s children mature into wise, compassionate, genuinely good people.

Similar concerns are now on many people’s minds as we rush forward into the Quantum Age, getting closer and closer to creating a kind of intelligence far beyond anything we’ve yet seen on Earth. Many are awaiting the technological singularity: “a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict.” Just what might happen when we reach such a point of technological breakthrough? What will such intelligence be capable of, and who will be in charge of ensuring its safe use?

Since I’ve been fascinated by this subject for years, I attended Douglas Hofstadter’s symposium, “Will Spiritual Robots Replace Humanity by 2100?” at Stanford University in April 2000. Douglas Hofstadter and his eight guests (Bill Joy, Ralph Merkle, Hans Moravec, Ray Kurzweil, John Holland, Kevin Kelly, Frank Drake, and John Koza) talked for five hours about their vision of humanity’s future, each panelist peering into that future through the lens of his own area of expertise. Many speakers cited Moore’s Law–the observation that computing power doubles roughly every two years–to make the point that technology is changing faster than ever before, and that the rate of change is expected to increase exponentially, so it is difficult to predict where we will be one hundred years from now. Douglas explained that he only invited guests who agreed that there is a possibility for robots to be spiritual. He wanted to focus on the question “Who will we be in 2093?”, since a visualization of who we will be lies at the core of understanding how we might utilize new technologies. I wondered just how possible it was that robots might be thinking and acting on their own behalf by 2100–and if so, whether they might be replacing us, with or without our consent and cooperation.
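To make the exponential claim concrete, here is a minimal back-of-the-envelope sketch in Python. The two-year doubling period is an illustrative assumption on my part, not a figure from the symposium, but it shows why century-scale prediction is so hard:

    # Rough illustration of exponential growth in capability.
    # Assumption (illustrative only): capability doubles every 2 years.
    DOUBLING_PERIOD_YEARS = 2

    def growth_factor(years):
        """Multiplicative growth after `years` of steady doubling."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for horizon in (10, 50, 100):
        print(f"{horizon:>3} years -> about {growth_factor(horizon):,.0f}x")
    # 10 years -> about 32x
    # 50 years -> about 33,554,432x
    # 100 years -> about 1,125,899,906,842,624x

Under that assumption, ten years brings a thirty-two-fold change, while a century brings a factor of roughly a quadrillion–well beyond anything our intuitions about the next decade can extrapolate to.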

Over the past fifteen years, there has been increasing interest in–and concern about–artificial superintelligence. Roman Yampolskiy summarizes the Singularity Paradox (SP) as “superintelligent machines are feared to be too dumb to possess common sense.” Put more simply, there is growing concern about the dangers of Artificial Intelligence (AI) among some of the world’s best-educated and most respected scientific and technological leaders, such as Stephen Hawking, Elon Musk, and Bill Gates. The hazards of AI containment are discussed in some detail in Artificial Superintelligence, yet in language easily understandable to the layman.

In his new book, Artificial Superintelligence, Yampolskiy argues for addressing AI’s potential dangers with a safety engineering approach rather than with loosely defined ethics, since human values are inconsistent and dynamic. Yampolskiy points out that “fully autonomous machines cannot ever be assumed to be safe,” and goes so far as to add, “… and so should not be constructed.”

Yampolskiy acknowledges the concern of AI escaping its confines, and takes the reader on a tour of AI taxonomies with a general overview of the field of intelligence, showing a Venn-type diagram (p. 30) in which ‘human minds’ and ‘human-designed AI’ occupy adjacent real estate on the nonlinear terrain of ‘minds in general’ in multidimensional superspace. ‘Self-improving minds’ are envisioned that improve upon ‘human-designed AI,’ and at this juncture arises the potential for ‘universal intelligence’–and the Singularity Paradox (SP) problem.

Yampolskiy proposes an AI hazard symbol, which could prove useful for marking designated AI containment areas, or J.A.I.L. (‘Just for A.I. Location’). Part of Yampolskiy’s proposed solution to the AI Confinement Problem involves asking ‘safe questions’ (p. 137). Yampolskiy also surveys solutions proposed by others–Drexler (confine transhuman machines), Bostrom (utilize AI only for answering questions, in Oracle mode), and Chalmers (confine AI to ‘leakproof’ virtual worlds)–and argues for the creation of committees designated to oversee AI security.

Points like these underscore the scale and scope of what must be accomplished to help ensure AI safety: Yudkowsky has “performed AI-box ‘experiments’ in which he demonstrated that even human-level intelligence is sufficient to escape from an AI-box,” and even Chalmers “correctly observes that a truly leakproof system in which NO information is allowed to leak out from the simulated world into our environment is impossible, or at least pointless.”

Since one of the fundamental tenets of information security is that it is impossible to prove any system 100% secure, it’s easy to see why there is such strong and growing concern about the safety of AI for mankind. And if there is no way to safely confine AI, then, like any parents, humanity will find itself hoping that we’ll have done such an excellent job raising AI to maturity that it will comport itself kindly toward its elders. Yampolskiy points out, “In general, ethics for superintelligent machines is one of the most fruitful areas of research in the field of singularity research, with numerous publications appearing every year.”

One look at footage of the Philip K. Dick AI robot saying,

“I’ll keep you warm and safe in my people zoo,”

as shown in the 2011 NOVA scienceNOW episode What’s the Next Big Thing? can be enough to jolt us out of complacency. For those hoping that teaching AI to simply follow the rules will be enough, Yampolskiy replies that law-abiding AI is not enough: AI could still decide to keep humans safe ‘for their own good,’ progressively limiting human free choice at a speed only a superintelligent AI could manage.

For readers intrigued by what safe varieties of AI might be possible, the section early in Artificial Superintelligence on the universe of minds will be of great interest. Yampolskiy describes five taxonomies of minds (pp. 31-34). Returning to reread this section after completing the rest of the book can be quite beneficial, since at that point readers can more fully understand how an AI that is Quantum and Flexibly Embodied in Goertzel’s taxonomy (p. 31), with Ethics Self-Monitoring (p. 122), might help ensure the development of safe AI. If such AI systems include error checking, with a firmware (unerasable) dedication to preserving others, and constantly seek and resonate with the highest-order intelligence–with quantum levels of sensing through time-reversible logic gates, in accordance with quantum deductive logic–one can begin to breathe a sigh of relief that there might just be a way to ensure safe AI will prevail.

While the deepest pockets of government funding are unlikely to back a system that would answer to nothing less than the greatest intelligence an AI can seek (such as God), it is conceivable that humanitarian philanthropists will step forward to fund such a project in time, and that all of us will be eternally grateful when its highest-order-seeking AI prevails.

___________________________
Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps. Cynthia has a degree in physics from UC Berkeley, and discusses consciousness and quantum physics on numerous shows, including the History Channel, Coast to Coast AM, the BBC, and One World with Deepak Chopra. You can subscribe to Cynthia’s free monthly ezine at: http://www.RealityShifters.com
RealityShifters®

Comments on: "How Best to Prepare for Superintelligent AI?" (2)

  1. Cynthia, this is a fascinating subject. I recently published a sci-fi novel featuring a superintelligent A.I. Nothing unique about that these days, except for the way my A.I. character developed and progressed, which I’ve been told is quite novel. I had an email exchange with researcher Dr. Susan Schneider about my concept of a “Community of Mind”, where this A.I. is comprised of millions of threads of sub-personality that function together, democratically (but instantaneously at their rate of thought) to handle macro-scale problem solving. Meanwhile, internally, there is a civilization happening as these sub-minds work on ideas and work to continually improve. The character’s name is “Decatur”, and due to time dilation, his mind is nearly a million years old. “He” (gender is irrelevant) has the equivalent of a Ph.D. in every human academic field. And… anyway, it’s in the novel. A free copy is available if you are interested.

    My take is that A.I. may not be a foregone conclusion as technology increases. Suppose A.I. is not a normal development in other civilizations out there? (Let’s stop being naive humans–yes, there are alien civilizations–probably millions of them, and probably some right in our stellar neighborhood). What if A.I. is a foreign idea that a species that never experienced religious-induced detachment–disassociation–from nature would not naturally conceive of? I propose A.I. is our own neurotic need to create life to validate our own supernatural superstition, burnt in through tens of thousands of years of witch doctor practice. We can’t easily shut that off any more than we can other vestiges of our deep past–like hair, for instance. Why do we need hair? I’ve always thought hair was bizarre. The way people style it, comb it, color it, obsess over it. But have you ever stopped to objectively look at yourself in the mirror as an alien mind? Hair is bizarre. Why humans grow it at length is beyond strange. And a vestigial trait that was useful long ago. For what, I can’t quite imagine, and I have 3 daughters so I mostly keep this observation to myself. 🙂

    If A.I. is an alien concept, borne of a neurotic human mind (which, yes, I do personally believe we are ALL neurotic, some more so than others), then perhaps A.I. will never really happen, and if it does, it won’t necessarily be destructive. Perhaps if a human mind were uploaded and used to create a neural net, it might be insane, psychopathic, but no more so than any average human in isolation for a century. And, yes, can you imagine it? In my novel, the ratio early on was about 60,000 to 1, and it grew even wider when the A.I. began creating its own computer hardware. Imagine living a year during the pause between words as a human is speaking. Imagine being in a null environment immediately after upload, conscious, but in darkness–nothingness. So, unless there are plans to feed the inputs of the artificial brain, it’s best to just not go there until proper tests have been done on lesser minds–say, a dog, for instance. See how it reacts.

    Regards,
    Jon

    • The concept of ‘alien minds’ is indeed covered in the book, Artificial Superintelligence, on page 30, shown in relationship to ‘minds in general,’ ‘human minds,’ ‘human-designed AIs,’ and ‘self-improving minds.’ So yes, the concept of alien intelligence is worthy of consideration. The notion of levels of awareness and cooperation of types of intelligence is very much a part of Yampolskiy’s Survey of Taxonomies discussion (pp. 30-34)–which I’ve now included in this blog post–and I have a feeling you’ll really enjoy this book. And your story ideas sound fascinating!
