
Neurotechnology, AI and Enhanced Human Intelligence

David Eagleman and Cynthia Sue Larson

I attended an invigorating open discussion, “The Future of Neurotechnology: Human Intelligence + Artificial Intelligence,” led by neuroscientist David Eagleman and entrepreneur Bryan Johnson at my alma mater, UC Berkeley. The purpose of the talk was to explore possible directions for bringing advances in neuroscience together with those of Artificial Intelligence (AI), with awareness that advances in human cognitive enhancement and in AI will develop with some degree of synergy.

Eagleman and Johnson agree that it’s not a matter of IF but WHEN neurotechnology will become a reality in our lives. Neurotechnology may not be a household word just yet, but it is definitely well on its way. In fact, now that most of us hold in our hands devices that give us access to the internet, we are already getting a glimpse of how it will feel to merge technology into the way we make choices, communicate, and remember the important people and events in our lives.

At a time when venture capitalists are understandably wary of investing in unproven businesses operating on the “bleeding edge,” Bryan Johnson explained that he invested one hundred million dollars of his own money in his company, Kernel, a human intelligence (HI) company developing the world’s first neuroprosthesis for cognition. Working with Ted Berger at USC, Johnson is exploring how new technologies might help us improve memory through neuromodulation. Johnson and his team seek to answer the question, “What if we could read and write neural memory in the hippocampus?”

In 2013, NeuroPace’s responsive neurostimulation system proved itself a commercial success in quelling epileptic seizures. Future advancements may rely upon such new technologies as neural dust and nanobots.

What does all this have to do with you? In much the same way that transportation is being revolutionized by the coming of self-driving vehicles, neurotechnology is poised to transform Human Intelligence (HI) and Artificial Intelligence (AI)–reducing disease, dysfunction, and degradation while enhancing human cognitive functioning.

Neurotechnology Ethical Considerations

Bryan Johnson noted that several people were raising questions and voicing concerns about the ethics of human cognitive enhancement, so he asked for a show of hands to indicate how many people felt ethics should be given high priority in neurotechnological advances. Many people (including me) raised our hands, confirming Johnson’s hunch.

Johnson took note of this, and pointed out that however each of us might feel about the ethical questions raised by neurotechnology such as neural dust–designed to non-invasively enter a human’s peripheral nervous system and sit on the surface of the neocortex–there will be countries in the world, such as China, that welcome such experimental research with open arms.

The subject of the singularity came up when one gentleman observed that, based on simulations of what happens when AI develops, humans will need some kind of enhancement to have a fighting chance. A variety of simulations of how AI will interact with humanity suggest that unless everything goes exactly right, human survival after the creation, expansion, development, and dominance of AI is not a sure thing. We would thus do well to help level the playing field between humans and AI by boosting Human Intelligence with neurotechnology.

Participants in the discussion voiced the opinion that convergence between machine learning and human cognitive enhancement would be helpful now. One woman in the audience expressed her heartfelt desire that wisdom be kept among the highest priorities in neurotechnological advances.

Envisioning New Neurotechnical Horizons

With regard to envisioning where neurotechnology may go in the next few decades, Johnson and Eagleman spoke mostly in generalities rather than specifics. Intelligent neural dust, such as that developed at UC Berkeley’s Brain Machine Interface Systems Laboratory with sensors about the size of a grain of sand, is a form of implantable technology that can be placed in nerves or muscles to treat disorders such as epilepsy, stimulate the immune system, and reduce inflammation. Powered by and communicating through ultrasound, the tiny neural dust can go deep inside a body to take measurements and assist in stimulating nerves and muscles. Another arrival in the emerging field of electroceuticals will be nanobots, even smaller than neural dust, which can automate tasks such as performing delicate surgical procedures, delivering exact drug dosages, and diagnosing disease; this past year, swarms of nanobots demonstrated promise in precisely targeting and treating cancer.

Job requirements may change once human intelligence and cognitive functioning are neurotechnologically enhanced. We already expect some of our technical professionals to receive additional training to become doctors and lawyers–and it’s conceivable that in the not-too-distant future, some professionals may also be expected to undergo neurotechnological enhancement as part of the requirements for the job.

A young man wearing a T-shirt emblazoned “Qualia Research Institute” asked, “What do we do if we find out we are at the local maximum of human cognitive efficiency? How might we be able to tweak it?” Johnson and Eagleman replied that we should be able to increase our communication input/output rate to a level far faster than the slow verbal speech being used during the discussion itself–since we can all think far more quickly than we can talk.
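
To put rough numbers on just how slow speech is as an output channel, here is a back-of-envelope sketch. The speaking and reading rates and the one-bit-per-character entropy figure are commonly cited approximations, not measurements from the talk:

```python
# Back-of-envelope comparison of speech vs. silent reading as
# information channels. All constants are rough assumptions:
#   ~150 words/minute conversational speech, ~275 words/minute reading,
#   ~5 characters per word, ~1 bit of entropy per English character
#   (Shannon's classic estimate).

SPEECH_WPM = 150
READING_WPM = 275
CHARS_PER_WORD = 5
BITS_PER_CHAR = 1.0

def bits_per_second(words_per_minute: float) -> float:
    """Convert a words-per-minute rate into an approximate bit rate."""
    return words_per_minute * CHARS_PER_WORD * BITS_PER_CHAR / 60

print(f"Speech:  ~{bits_per_second(SPEECH_WPM):.1f} bits/s")
print(f"Reading: ~{bits_per_second(READING_WPM):.1f} bits/s")
# Either way we land in the tens of bits per second, which is the
# bottleneck a faster neural input/output channel would aim to widen.
```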

Fully aware of the irony, I took handwritten notes during this presentation and discussion, and wrote the draft of this article by hand with a pen on paper–clearly NOT the fastest way to do things! Yet I’ve seen research showing advantages to taking notes by hand rather than typing on keyboards, and I’ve found my ability to remember and more completely utilize information gets a huge boost when I work from handwritten notes. So while I agree with the inevitability of human enhancement through neurotechnology, I also envision a future in which “old ways” of knowing, communicating, and interacting with others continue to take place–and might even help human intelligence ensure its place during the coming ascendance of AI.

Free Will and the Power to Forget

After the talk, I enjoyed a personal chat with David Eagleman. During their discussion, Eagleman and Johnson had emphasized the value of enhancing human intelligence with better memory–and I had a sense that while memory enhancement sounds like a great idea, there are likely some very good natural reasons that we humans so often forget. I pointed out the value of forgetting: forgetting can enable us to make quantum jumps to more optimal realities, which is likely a big factor in the effectiveness of placebo effect healing.

I talked with Eagleman about how he and Johnson had discussed finding ways for neurotechnology to enhance cognitive functioning by reading and writing information to the hippocampus–pointing out that we’ll likely see the hippocampus grow when written to.

I voiced my support for putting human intelligence into the OpenAI project, to minimize and prevent attempts by one or a few governments or corporations to control AI and HI.

We ended our conversation discussing ‘free will,’ which David reminded me he does not believe in, per se, as he describes in his book, Incognito. I suggested he consider the work of Thomas Metzinger and Max Velmans on first-person and third-person levels of representational self-modeling and levels of awareness. Systems missing the few lines of code that would constantly remind them they are representational models bear more than a passing similarity to humans.
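
That phrase “a few lines of code” can be taken almost literally in a toy sketch. The example below is purely illustrative and assumes nothing about how real self-models work; it only contrasts a system that carries a standing reminder that its self is a representation with one that does not:

```python
class Agent:
    """Toy agent whose 'self' is an internal representation (a self-model)."""

    def __init__(self, knows_it_is_a_model: bool):
        self.self_model = {"name": "I", "feels_free": True}
        # The 'few lines of code' in question: a standing reminder
        # that the self-model is a representation, not the thing itself.
        self.knows_it_is_a_model = knows_it_is_a_model

    def report(self) -> str:
        if self.knows_it_is_a_model:
            return "I experience choosing, and I know that experience is a model."
        # A 'transparent' self-model, in Metzinger's sense: the agent
        # looks through the model without noticing it is a model.
        return "I simply choose freely."

print(Agent(knows_it_is_a_model=False).report())  # resembles everyday human experience
print(Agent(knows_it_is_a_model=True).report())
```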

I’m inspired to see that David Eagleman’s Laboratory for Perception and Action at Stanford University seeks to understand how the brain constructs perception, how different brains do so differently, and how this matters for society–with special focus on four areas: time perception, sensory substitution, synesthesia, and neurolaw. After giving some thought to neurotechnology, it’s easy to see the growing significance of the emerging interdisciplinary field of neurolaw.

Join the Conversation

My personal bias involves a preference for strengthening my awareness of what consciousness is and how it operates, working with natural human abilities that have historically been neglected, ignored, or forgotten as technology has advanced. Some of my bias may be due to my being what is called an “exceptional human experiencer”: I am a near-death experiencer, a meditator, and a lucid dreamer; I have had a kundalini awakening experience; and I was ‘born aware’ (meaning I remember being conscious prior to being born). Exceptional human experiences can provide people with access to heightened abilities to do some of the things we might also hope to enhance through neurotechnology–and I see the study of neurotechnology as potentially providing us with greater insight into optimizing our natural human abilities.

I’d love to hear your comments, thoughts, and feelings about the future of neurotechnology. This is a controversial topic that I hope you will contemplate and talk about with others, thus helping set the direction for how humanity continues to evolve with technology. Some people are understandably skeptical or concerned about neurotechnology, others are excited about the possibilities, and still others don’t yet have strong feelings one way or the other. My gut feeling is that AI is coming, as is human cognitive enhancement. Humanity will do well to envision how we see ourselves in the future, and what we consider optimal in terms of working with neurotechnology. I tend to agree with Eagleman and Johnson that it’s not a matter of if, but when, this technology will arrive. Those of us like myself who still don’t have cell phones can be hold-outs for a while (in my case, decades so far), yet all of us will eventually be affected in some way by these technologies.

___________________________

Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps. Cynthia has a degree in Physics from UC Berkeley, and discusses consciousness and quantum physics on numerous shows including the History Channel, Coast to Coast AM, the BBC, One World with Deepak Chopra, and the Living the Quantum Dream show she hosts. You can subscribe to Cynthia’s free monthly ezine at: http://www.RealityShifters.com
RealityShifters®

How Best to Prepare for Superintelligent AI?

Artificial Superintelligence

What happens when Artificial Intelligence gets loose in the world?

Every parent wonders how their kids will turn out when they grow up and become independent in the world, and speaking from personal experience, it’s such a relief to see one’s children mature into wise, compassionate, genuinely good people.

Similar concerns are now on many people’s minds as we rush forward into the Quantum Age, getting closer and closer to creating a kind of intelligence far beyond anything we’ve yet seen on Earth. Many are awaiting the technological singularity: “a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict.” Just what might happen when we reach such a point of technological breakthrough? What will such intelligence be capable of, and who will be in charge of ensuring its safe use?

Since I’ve been fascinated by this subject for years, I attended Douglas Hofstadter’s symposium, “Will Spiritual Robots Replace Humanity by 2100?” at Stanford University in April 2000. Douglas Hofstadter and his eight guests (Bill Joy, Ralph Merkle, Hans Moravec, Ray Kurzweil, John Holland, Kevin Kelly, Frank Drake, and John Koza) talked for five hours about their vision of humanity’s future, each panelist looking into the future through the lens of his own area of expertise. Many speakers cited Moore’s Law to make the point that technology is changing faster than ever before, and that the rate of change is expected to increase exponentially–so it is difficult to predict where we will be one hundred years from now. Douglas explained that he only invited guests who agreed there is a possibility for robots to be spiritual, and he wanted to focus on the question “Who will be we in 2093?”, since a visualization of who we will be is at the core of understanding how we might utilize new technologies. I wondered just how possible it was that robots might be thinking and acting on their own behalf by 2100–and if so, whether they might replace us, with or without our consent and cooperation.
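
The arithmetic behind that difficulty is worth making explicit. A minimal sketch, assuming the classic doubling period of roughly two years that Moore’s Law describes:

```python
# Exponential growth under Moore's Law: capability doubles every
# ~2 years (an assumed, idealized doubling period).
DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """How many times over capability multiplies after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (10, 20, 50, 100):
    print(f"{horizon:>3} years -> x{growth_factor(horizon):,.0f}")
# Ten years gives a factor of ~32; a full century gives ~10^15,
# which is why century-scale prediction defeated the panelists.
```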

Over the past fifteen years, there has been increasing interest–and concern–about artificial superintelligence. Roman Yampolskiy summarizes the Singularity Paradox (SP) as “superintelligent machines are feared to be too dumb to possess common sense.” Put more simply, there is growing concern about the dangers of Artificial Intelligence (AI) among some of the world’s best-educated and most well-respected scientific leaders, such as Stephen Hawking, Elon Musk, and Bill Gates. The hazards of AI containment are discussed in some detail in Artificial Superintelligence, yet in language easily understandable to the layman.

In his new book, Artificial Superintelligence, Yampolskiy argues for addressing AI’s potential dangers with a safety engineering approach rather than with loosely defined ethics, since human values are inconsistent and dynamic. Yampolskiy points out that “fully autonomous machines cannot ever be assumed to be safe,” going so far as to add, “… and so should not be constructed.”

Yampolskiy acknowledges the concern of AI escaping its confines, and takes the reader on a tour of AI taxonomies with a general overview of the field of intelligence, showing a Venn-type diagram (p 30) in which ‘human minds’ and ‘human designed AI’ occupy adjacent real estate in the nonlinear terrain of ‘minds in general’ in multidimensional super space. ‘Self-improving minds’ are envisioned that improve upon ‘human designed AI,’ and at this very juncture arises the potential for ‘universal intelligence’–and the Singularity Paradox (SP) problem.

Yampolskiy proposes the adoption of an AI hazard symbol, which could prove useful for constraining AI to designated containment areas, or J.A.I.L.: ‘Just for A.I. Location.’ Part of Yampolskiy’s proposed solution to the AI Confinement Problem includes asking ‘safe questions’ (p 137). Yampolskiy includes other solutions proposed by Drexler (confine transhuman machines), Bostrom (utilize AI only for answering questions in Oracle mode), and Chalmers (confine AI to ‘leakproof’ virtual worlds), and argues for the creation of committees designated to oversee AI security.
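
The flavor of the ‘safe questions’ idea can be conveyed with a toy sketch. This is my own simplification, not Yampolskiy’s actual protocol: a confinement layer that forwards only questions matching an explicit whitelist to the boxed system, refusing everything else by default.

```python
# Toy AI-confinement wrapper illustrating a default-deny, whitelist-only
# question channel. A deliberate oversimplification of the 'safe questions'
# idea, not Yampolskiy's actual protocol.
import re

SAFE_PATTERNS = [
    re.compile(r"^what is the boiling point of \w+\?$", re.IGNORECASE),
    re.compile(r"^is \d+ a prime number\?$", re.IGNORECASE),
]

def boxed_ai(question: str) -> str:
    """Stand-in for the confined (untrusted) AI system."""
    return f"(boxed AI's answer to: {question})"

def ask(question: str) -> str:
    # Default-deny: only questions matching an approved form pass through.
    if any(pattern.match(question) for pattern in SAFE_PATTERNS):
        return boxed_ai(question)
    return "REFUSED: question is not on the approved safe list."

print(ask("Is 17 a prime number?"))
print(ask("How would you escape this box?"))  # refused by default
```

Even this toy makes the hard part visible: deciding which question forms are actually ‘safe’ is where all the difficulty hides.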

Emphasizing the scale and scope of what needs to be accomplished in order to help ensure the safety of AI are points such as Yudkowsky having “performed AI-box ‘experiments’ in which he demonstrated that even human-level intelligence is sufficient to escape from an AI-box,” and even Chalmers “correctly observes that a truly leakproof system in which NO information is allowed to leak out from the simulated world into our environment is impossible, or at least pointless.”

Since one of the fundamental tenets of information security is that it is impossible to ever prove any system is 100% secure, it’s easy to see why there is such strong and growing concern regarding the safety of AI to mankind. And if there is no way to safely confine AI, then like any parent, humanity will certainly find itself hoping that we’ll have done such an excellent job raising AI to maturity that it will comport itself kindly toward its elders. Yampolskiy points out, “In general, ethics for superintelligent machines is one of the most fruitful areas of research in the field of singularity research, with numerous publications appearing every year.”

One look at footage of a Philip Dick AI robot saying,

“I’ll keep you warm and safe in my people zoo,”

as shown in the 2011 NOVA scienceNOW documentary What’s the Next Big Thing?, can be enough to jolt us out of complacency. For those hoping that teaching AI to simply follow the rules will be enough, Yampolskiy replies that law-abiding AI is not enough: AI could still keep humans safe ‘for their own good,’ limiting human free choice at the accelerated pace that only superintelligent AI could manage.

For readers intrigued by what safe variety of AI might be possible, the section early in Artificial Superintelligence describing five taxonomies of minds (pp 31-34) will be of great interest. Returning to re-read this section after having completed the rest of the book can be quite beneficial, as at that point readers can more fully understand how AI that is Quantum and Flexibly Embodied according to Goertzel’s taxonomy (p 31), with Ethics Self-Monitoring (p 122), might help ensure the development of safe AI. If such AI systems include error-checking, with firmware (unerasable) dedication to preserving others, and constantly seek and resonate with the highest-order intelligence, with quantum levels of sensing through time-reversible logic gates (in accordance with quantum deductive logic), one can begin to breathe a sigh of relief that there might just be a way to ensure safe AI will prevail.
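
Time-reversible logic has a concrete classical example in the Toffoli (controlled-controlled-NOT) gate, a universal reversible gate that flips its target bit only when both control bits are 1. A minimal sketch showing that the gate undoes itself, so no information is ever discarded:

```python
# Toffoli (CCNOT) gate: flips bit c only when bits a and b are both 1.
# Because the mapping is a bijection on 3-bit states, it is reversible:
# applying the gate twice restores the original input.

def toffoli(a: int, b: int, c: int) -> tuple:
    return (a, b, c ^ (a & b))

for state in [(0, 0, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]:
    once = toffoli(*state)
    twice = toffoli(*once)
    assert twice == state  # the gate is its own inverse
    print(f"{state} -> {once} -> {twice}")
```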

While the deepest pockets of government funding are unlikely to ever develop a system controlled by nothing less than the greatest intelligence an AI can seek (such as God), it is conceivable that humanitarian philanthropists will step forward to fund such a project in time, so that all of us will be eternally grateful that its highest-order-seeking AI prevailed.

