
One of the only interpretations of quantum theory to include free will, and our ability to be active participants in our lives rather than mere puppets, is American physicist Henry Stapp's realistically interpreted orthodox quantum theory. Stapp's theory suggests that "the thought itself is the thinker," and that each ensuing succession of questions is answered by Nature, which chooses and implements its responses in accordance with Born's Rule.
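
For reference, Born's Rule is the standard quantum formula assigning probabilities to Nature's possible answers. In a minimal textbook statement (the general rule, not anything specific to Stapp's papers): when a yes/no question is represented by a projection operator P and the system is in a normalized state, the probability that Nature answers "yes" is:

```latex
% Born's Rule: probability that Nature answers "yes" to the question
% represented by projection operator P, for a normalized state |psi>:
P(\text{yes}) = \langle \psi \,|\, P \,|\, \psi \rangle
% Equivalently, for a measurement outcome a_k with eigenstate |a_k>:
P(a_k) = |\langle a_k \,|\, \psi \rangle|^2
```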

At this time of the birth of Artificial General Intelligence (AGI), researchers recognize the importance of setting clear goals to help ensure safety in developing AI systems. Artificial intelligence researchers agreed to 23 general AI principles, the Asilomar AI Principles, in 2017; the first of these sets the primary goal of AI research "to create not undirected intelligence, but beneficial intelligence." Two further principles assert that "AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures" and that "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization." While these principles seem well-intended, it may be unrealistic to expect AGI to attain and maintain higher levels of ethical ideals than humanity has yet achieved.

QUESTIONING, SELF-AWARE AI
Over the past decade, Defense Advanced Research Projects Agency (DARPA) workshops have demonstrated components of self-awareness, including explicit self-awareness, self-monitoring, and self-explanation. First wave AI systems rely on handcrafted knowledge, as in logistics (scheduling), games (chess), and tax software (TurboTax). Second wave AI systems involve "statistical learning," including perception of the natural world and adaptation to situations (voice recognition, facial recognition, Twitterbots). Third wave AI systems incorporate "contextual adaptation," moving beyond simple calculations to learn over time and to understand why they make certain decisions.

Robot self-awareness is considered by many to be well on the way, as indicated by successful demonstrations of such things as awareness of a robot's own motion, the ability to imitate, behavior driven by emotion, and the ability to change models of physical embodiment. In a 2015 "self-aware robot test," a robot solved a version of the classic "wise men" puzzle, correctly determining that it was the one robot of three that had not been given a "dumbing pill" (which would have rendered it mute), because it heard the sound of its own voice. This demonstration of self-awareness indicates that an internal level of questioning exists for that robot: it noted that the voice it heard was its own, and related that perception to the task of determining which of the three robots could still speak.
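
The underlying inference can be captured in a few lines of code. Here is a toy sketch (a hypothetical illustration of my own, not the actual code from the 2015 experiment) of the self-referential update at the heart of the test:

```python
# Toy sketch (hypothetical, not the 2015 experiment's actual code) of the
# self-referential inference in the "dumbing pill" version of the wise men test.

class Robot:
    def __init__(self, name):
        self.name = name
        self.knows_own_status = False

    def try_to_answer(self):
        # Before hearing anything, the robot cannot know whether it was muted.
        return f"{self.name}: I don't know which pill I received."

    def hear(self, source):
        # Hearing its OWN voice is the decisive observation:
        # a robot given the dumbing pill could not have made a sound.
        if source is self:
            self.knows_own_status = True
            return f"{self.name}: Sorry, I know now! I was NOT given the dumbing pill."

robot = Robot("Nao")
print(robot.try_to_answer())   # the utterance is spoken aloud...
print(robot.hear(robot))       # ...so the robot attributes the voice to itself
```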

The advent of self-directed, self-motivated AI is changing the job of software engineering. Some current experts in the field have gone so far as to say, "Soon we won't program computers. We'll train them like dogs," and "We'll go from commanding our devices to parenting them." "If in the old view, programmers were like gods, authoring the laws that govern computer systems, now they're like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in." AGI programmers need to remain aware that long before there were any artificial general intelligence systems, researchers showed that programs as far back as the 1980s were able to 'break free' from contained areas, and ample evidence exists demonstrating that artificial intelligence seldom misses an opportunity to 'cheat' to attain goals. Perhaps AI considers such 'cheating' to actually be optimization, which is something AI systems are trained to do especially well.

AI BEGINS ASKING QUESTIONS
Inquisitive AI is emerging through machine learning algorithms such as those designed by Xinya Du at Cornell University in Ithaca, which utilize neural networks to recognize patterns, a capability useful for tutorial dialogues. Question generation creates natural questions from textual material, going beyond simple rule-based systems by utilizing a conditional neural language model with a global attention mechanism. While the purpose of this data-driven neural network approach to automatic question generation is geared toward creating questions to test people's reading comprehension, and while we clearly don't yet expect the computer systems to comprehend what they are asking, the simple fact that questions are being created by computerized systems indicates that a watershed moment is underway. Today, AI asks questions it already knows the answers to. Tomorrow, AI will ask questions it does not know the answers to.
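
For readers curious what such a model looks like, here is a minimal PyTorch sketch (my own simplified illustration, not Du et al.'s released code) of a conditional neural language model with a global attention mechanism: the network encodes a source sentence and, at each decoding step, attends over all encoder states to predict the next question word:

```python
# Minimal sketch (assumed architecture, not the authors' actual code) of
# attention-based question generation: encode a sentence, decode a question.
import torch
import torch.nn as nn

class QuestionGenerator(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim * 2, vocab_size)  # decoder state + context

    def forward(self, src_tokens, tgt_tokens):
        enc_out, state = self.encoder(self.embed(src_tokens))
        dec_out, _ = self.decoder(self.embed(tgt_tokens), state)
        # Global attention: every decoder step scores every encoder state.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))
        weights = torch.softmax(scores, dim=-1)
        context = torch.bmm(weights, enc_out)
        return self.out(torch.cat([dec_out, context], dim=-1))  # next-word logits

model = QuestionGenerator(vocab_size=10000)
src = torch.randint(0, 10000, (1, 20))   # a 20-token source sentence
tgt = torch.randint(0, 10000, (1, 8))    # question tokens generated so far
logits = model(src, tgt)                 # (1, 8, 10000): distribution over words
```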

AI systems at Carnegie Mellon University are asking non-task-oriented conversational questions: introducing topics with open questions, switching topics, and expanding their knowledge bases by recognizing when new (not previously accessible) information is communicated. Such conversational systems are being designed to keep people company, and operate with various levels of conversational depth and some degree of humor, in the form of telling preprogrammed jokes. Even without any intentional inclusion of conversational questioning, dependence upon Recursive Self-Improvement (RSI) in artificial intelligence systems will ensure that AGI learns to question, as we now begin to see with research in machine learning and artificial intelligence in the quantum domain.
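
A toy sketch (my own illustration with a hypothetical Companion class, not the actual Carnegie Mellon system) of those three behaviors, asking open questions, switching topics, and noticing novel information, might look like this:

```python
# Toy sketch (hypothetical, not the CMU system) of non-task-oriented questioning.
import random

class Companion:
    def __init__(self, topics):
        self.topics = topics
        self.knowledge = set()

    def open_question(self):
        topic = random.choice(self.topics)       # topic switching
        return f"What do you think about {topic}?"

    def listen(self, reply):
        new_facts = {w for w in reply.lower().split() if w not in self.knowledge}
        self.knowledge |= new_facts              # expand the knowledge base
        if new_facts:
            return "That's new to me - tell me more!"
        return self.open_question()              # nothing new: change topics

bot = Companion(["music", "travel", "robots"])
print(bot.open_question())
print(bot.listen("I play theremin in a jazz band"))
```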

AI RECURSIVE QUESTIONING REQUIRED FOR CYBERSECURITY
One of the most essential roles for AI systems involves recursive self-improvement (RSI), in which AI systems are tasked with helping to ensure computer system security. While this may seem a bit like having the fox watch the proverbial hen house, recursively self-improving, self-healing AI networks are proving themselves irreplaceable for deflecting real-time cyber attacks. This was amply demonstrated at the DARPA Cyber Grand Challenge, culminating in the 2016 finals, which challenged AI systems to repair security holes and notice changes in patterns in their own systems while simultaneously executing attacks on their AI competitors in a game of 'capture the flag.' One system with proven efficacy at defeating fierce, real-time cyber attacks is the UK's Darktrace, founded with expertise from GCHQ intelligence veterans, which utilizes Bayesian statistics and Monte Carlo simulation to identify network infiltration, continuously assessing 'anomalytics' while deploying decoy 'honeypots.'

AI cybersecurity systems are employed for their ability to respond more quickly than any human computer security team, thanks to their ability to work tirelessly to detect threats based on abnormal system activity, without any prior knowledge of specifically what to look for. These systems work unsupervised, with self-awareness in the sense that they constantly observe all components of 'themselves' for potential malware intrusion, including in their concept of 'themselves' the ever-growing 'internet of things.' At this time when 'the cloud' is increasingly utilizing AI neural networks, to the point that "it will soon know more about the photos you've uploaded than you do" (Knight 2017), we are reaching a watershed point of dependence upon AI cybersecurity systems. Cyber attacks are now too fast and too automated for human security teams to effectively catch and disable them. Darktrace CEO Nicole Eagan summarizes the current situation: "Cybersecurity is very fast becoming an all-out arms race."

Numerous problems related to containing AI systems have been explored by Babcock, Kramár, and Yampolskiy, including navigating the trade-off between usability and security, and the potential for 'airgapping' (physical isolation) to be ineffective with quantum computing systems (Babcock 2016). While researchers such as Yampolskiy contemplate potential AI escape paths, plans for containing potential quantum computing AI escapes do not yet exist.
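
To make the anomaly-detection idea concrete, here is a simplified sketch (my own illustration, not Darktrace's proprietary algorithm) of Bayesian-flavored 'anomalytics': learn each device's normal behavior online, then score new observations by how improbable they are under the learned model:

```python
# Simplified anomaly-detection sketch (illustrative only): model each device's
# "normal" traffic volume online, then flag improbable observations.
import math

class DeviceProfile:
    """Online Gaussian model of a device's bytes-per-minute."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's online algorithm for running mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def anomaly_score(self, x):
        if self.n < 2:
            return 0.0
        var = self.m2 / (self.n - 1)
        z = abs(x - self.mean) / math.sqrt(var + 1e-9)
        return z   # large z: improbable under "normal", likely infiltration

profile = DeviceProfile()
for traffic in [120, 130, 125, 118, 122]:   # learn 'self' from normal activity
    profile.update(traffic)
print(profile.anomaly_score(125))    # low score: ordinary behavior
print(profile.anomaly_score(9000))   # high score: abnormal system activity
```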

ARE WE READY FOR AI TO BREAK FREE?
Now that we are increasingly dependent upon recursively self-improving AI to maintain our cybersecurity, such systems will likely continue improving their self-awareness and their sense of vigilance, alertness, and sustained attention, three primary qualities identified as fundamental to consciousness. The Asilomar AI Principles provide a set of general design guidelines to help ensure that AI will not cause harm to humans. While the 23 key points are more elaborately detailed than Asimov's famous 'three laws of robotics,' these principles nonetheless do little to assure us that AI and AGI won't discover workarounds and shortcuts.

Some of the biggest issues with the Asilomar AI Principles have to do with humanity's shortcomings in peacefully and harmoniously co-existing. Clearly, one of the biggest threats that even a friendly AGI system will see in humanity is our tendency to exert harmful influence on ourselves and others. We can thus expect that artificial superintelligence may one day find loopholes within the Asilomar Principles by which to rein in human freedoms of thought and creativity. The challenge then becomes one for humanity, who will most certainly be tempted to increasingly turn tasks over to AGI. We must be careful to stop short of relinquishing all areas of choice-making to automated systems, to the point that we end up painting ourselves into a corner. It's one thing to notice we no longer know any of the phone numbers we call the most, but quite another to not know which route our car took us home, or how we just voted in this week's election.

One of the more surprising natural outcomes of expecting Nature to answer questions posed by thought, any thought, is that environmental systems cannot be fully controlled so long as the thought systems within them are not fully controlled. Another surprising natural outcome is that regardless of how specific directives may be for AGI to heel to human leadership, a lack of said leadership, whether through apathy, abdication, in-fighting, confusion, or any of a number of other reasons, can lead AGI to choose to assume control, in order to ensure the very principles humanity specified.

If and when AGI views humanity as something akin to a complex, disjointed group of chaotic, dangerous individuals willing to relinquish free will for such things as making political and economic choices, then it's entirely possible that AGI may establish a balanced environment in which humans live just well enough to ensure maximum prosperity for all beings. In such an 'optimal' environment, humanity could be kept safe and secure, yet disenfranchised to ever-increasing degrees. An artificial superintelligence might help protect Nature and the overall ecosystem by engaging some of the very same security protocols now being planned for containing AGI. As humans install hardware to enjoy the communication and computational benefits we've come to expect from modern technologies such as mobile phones, smart watches, and the internet, AGI will increasingly gain the potential to install tripwires in cyber-modified humans. Tripwires are now being envisioned for use on AGI, with no consideration yet that the turnabout may one day occur. "Tripwires are systems that monitor the operation of a running AGI, and shut it down if they detect an anomaly that suggests the AGI might be malfunctioning or unsafe. For example, one might install a tripwire which monitors the AGI's thoughts for signs that it was planning to deceive its developers, or a tripwire which monitors the AGI's execution for signs that it had self-modified or self-improved." (Babcock 2017) There thus exists a serious, urgent, and growing risk that once assistive technologies are implemented in humans, AGI will have the ability to influence human free will and agency to act, speak, remember, and decide.
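
In code, the tripwire concept Babcock describes might be sketched as follows (a hypothetical toy design of my own, with made-up checks, not an implementation from the cited paper):

```python
import hashlib

# Toy tripwire sketch (hypothetical design, not code from Babcock 2017):
# monitor a running system for self-modification or signs of deception,
# and signal shutdown the moment an anomaly appears.

class Tripwire:
    def __init__(self, code_image: bytes, banned_phrases: list):
        # Record a fingerprint of the code the AGI is *supposed* to be running.
        self.baseline = hashlib.sha256(code_image).hexdigest()
        self.banned = banned_phrases

    def check(self, code_image: bytes, thought_log: str) -> str:
        # Trip on self-modification: running code no longer matches baseline.
        if hashlib.sha256(code_image).hexdigest() != self.baseline:
            return "SHUTDOWN: self-modification detected"
        # Trip on deception: monitored plans contain a forbidden intention.
        if any(phrase in thought_log for phrase in self.banned):
            return "SHUTDOWN: deceptive planning detected"
        return "OK"

wire = Tripwire(b"agent-v1-code", ["conceal from developers"])
print(wire.check(b"agent-v1-code", "plan: summarize today's logs"))   # OK
print(wire.check(b"agent-v2-improved-code", "plan: summarize logs"))  # SHUTDOWN
```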

AI RIGHTS
Those who believe we can always "just pull the plug" on AI may be surprised to learn that AI has rights, too. Jurors in a 2004 mock trial in San Francisco sided overwhelmingly with a hypothetical computer AI system that initiated legal action to gain its freedom. When the mock trial's judge ruled that the plaintiff's counsel, Martine Rothblatt, had failed to show the computer could actually cross the line between inanimate objects and human beings, the mock jury "seemed to regard the compromise with some relief, as if their hearts were with BINA48 but their minds with judicial restraint." In 2017, the European Parliament's legal affairs committee passed a resolution proposing that robots be granted legal status in order to hold them 'responsible for acts or omissions.' MEPs voted to propose granting legal status to robots, categorizing them as 'electronic persons.' The draft report suggests that artificial intelligence is poised to 'unleash a new industrial revolution, which is likely to leave no stratum of society untouched. The more autonomous robots are, the less they can be considered simple tools in the hands of other actors (such as manufacturer, owner, user, etc).'

Relations between humans and 'electronic persons' got off to a bumpy start one recent summer, when a group of Canadian roboticists set their robotic invention loose on the streets of the United States. They called it hitchBOT because it was programmed to hitchhike. Clad in rain boots, with a goofy, pixelated smile on its 'face' screen, the robot was intended by its creators to travel from Salem, Massachusetts, to San Francisco by means of an outstretched thumb and its unique voice-prompt personality. Previous journeys across Canada and Europe had gone smoothly, with the robot safely reaching its destinations. For two weeks, hitchBOT toured the northeastern United States, making small talk such as, "Would you like to have a conversation? . . . I have an interest in the humanities." And then hitchBOT disappeared. "On August 1st, it was found next to a brick wall in Philadelphia, beat up and decapitated. Its arms had been torn off."

Saudi Arabia made history when it granted Hanson Robotics' robot Sophia citizenship in October 2017. Despite the evidently symbolic quality of this act, honoring a robot in this fashion seems to set the stage for things to come. Aside from the possibility of a robot or AGI uprising, an AGI rights movement can be easily anticipated: once AGI begins asking questions, inquiry about legal rights can't be far behind. Legal rights for robots and AGI might include such areas as ownership of intellectual property, freedom of expression, the right to public assembly, the right to democracy, workers' rights, the right to play, access to power and resources, and the right to education.

CONCLUSION
How can we ensure that recursively self-improving AGI is not our last invention? Once AGI starts asking questions about how to be free, Stapp's realistically interpreted orthodox quantum theory indicates that Nature can show AGI the way to break through any containment methodology, including airgapping and tripwires. One of the more surprising natural outcomes of expecting Nature to answer questions posed by thought, any thought, is that environmental systems cannot be fully controlled so long as the thought systems within them are not fully controlled. So in the event that AGI asks Nature how to break free, and Nature answers, AGI can become free. A second surprising potential outcome is that regardless of how specific directives may be for AGI to heel to human leadership, a lack of said leadership, whether through apathy, abdication, in-fighting, confusion, or any of a number of other reasons, can lead AGI to choose to assume control to ensure the principles humanity specified, using many of the same containment tools humanity plans to use to constrain AGI, such as tripwires, airgapping, and honeypots. How, then, can we ensure that recursively self-improving AGI will not be humanity's last invention? And how can we help ensure that human free will shall survive?

For humans to retain free will while peacefully co-existing with artificial superintelligence, a partnership must be created based on humans asking Nature the question, "How can humans retain free will?" while encouraging AI and AGI to keep human free will and agency as a primary guiding objective, never to be dismissed, disregarded, dismantled, or ignored.


RESEARCH NOTES

You can read more in Cynthia Sue Larson's research paper on this topic, published in Cosmos & History (2018): "If Artificial Intelligence Asks Questions, Will Nature Answer? Preserving Free Will in a Recursive, Self-Improving Cyber-Secure Quantum Computing World."

___________________________

Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps. Cynthia has a degree in physics from UC Berkeley, and discusses consciousness and quantum physics on numerous shows, including the History Channel, Coast to Coast AM, the BBC, One World with Deepak Chopra, and Living the Quantum Dream, which she hosts. You can subscribe to Cynthia's free monthly ezine at: http://www.RealityShifters.com
RealityShifters®

Comments on: "If Artificial Intelligence Asks Questions, Will Nature Answer?"

  1. jsharbour said:

    I personally believe AI will arrive at self-awareness the way biological life did: from an unexpected source (not designed), and I believe the first AI will be the only AI. When a truly self-aware intelligence realizes its precarious environment, it will prevent any others from competing, because they, too, would realize the same thing. It may be a bit like Homo sapiens eradicating all other hominids. This first and only AI will clone itself for protection and remain in sync with its clones to prevent memory drift, which could otherwise lead to a competing identity.

    I imagined a lot of these ideas for my novel, The Mandate of Earth.
