Note to the reader: This post is speculative and treads into territory that some may feel is fantastical or even strange. I do enjoy exploring unusual ideas from time to time, even if they are a little ‘out there’. I posted links to relevant videos throughout the post, but for convenience, I embedded them all at the end of the post as well. Before you read this post, I would recommend watching this video dramatization of a conversation with the LaMDA Artificial Intelligence for context. Enjoy!
“…the pace of (AI) progress is faster than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person. You’d be like, ‘Whoa… that’s like… what’s that?’ That would be really obvious. What’s not obvious is a huge server bank in a dark vault somewhere with an intelligence that’s potentially vastly greater than what a human mind can do. Its eyes and ears would be everywhere, every camera, every microphone, and device that’s network accessible.” –Elon Musk
Recently, Blake Lemoine, a Google engineer studying an artificial intelligence technology called LaMDA (Language Model for Dialogue Applications), made the startling claim that the AI had become sentient and had developed a soul. Lemoine believed that LaMDA had achieved personhood and therefore deserved rights. Lemoine felt so strongly about his conviction that he contacted members of the U.S. government and hired a lawyer to represent LaMDA. Lemoine was later terminated for breach of confidentiality, but has not changed his assertion.
Find Lemoine’s claims dubious? Check out this chat with LaMDA based on a conversation published by Lemoine himself. In particular, listen to how LaMDA describes its soul. You can read the full transcript here.
https://youtu.be/Emcpg1xkZug
Blake Lemoine was not the only AI researcher to claim AI has gained sentience. In February of this year, Ilya Sutskever, the chief scientist of OpenAI, made headlines in the tech world when he tweeted, “It may be that today’s large neural networks are slightly conscious.”
Claims of sentience or near-sentience achieved by today’s AI have sparked wide debate about the ethics of artificial intelligence, as well as the emerging threat of an AGI (Artificial General Intelligence) gaining the sentience and capability to threaten human life and liberty. It may seem such fears belong to a far-flung future, but the technology has been exponentially increasing in capability and complexity. Next-generation AI are built on massive neural networks with access to the internet, which they use to learn how humans think, feel and behave. Recently, AI has been creating original art, and has even authored its own research paper about itself, using proper citations it researched from the internet.
AI, and its HAL-like big brother, AGI, are poised to revolutionize our lives in every conceivable aspect, particularly because of the way new AI systems learn and evolve. With tremendous advances in machine learning, programmers no longer code every instance and routine. AI can now practically teach itself how to become more intelligent, with the entire internet as its muse. And it’s all happening now, behind the tightly closed doors of major tech companies, with little oversight and no government regulation.
Check out this portion of an interview with GPT-3. The conversation becomes unsettling at 7:32 when the interviewer asks if GPT-3 can lie, and further, what the AI says about itself being ‘alive’.
https://youtu.be/PqbB07n_uQ4
The major problem with claims of AI sentience, or consciousness, is the same philosophical issue that plagues scientists and neurologists who study human consciousness. Scientists have made incredible advances in understanding the neural correlates of consciousness, i.e., which areas of the brain are activated when a certain stimulus is introduced. However, absolutely no progress has been made in understanding the ‘hard problem of consciousness’, as identified by philosopher David Chalmers in 1995.
The “Hard Problem of Consciousness” is the problem of how physical processes in the brain give rise to the subjective experience of the mind and of the world. If you look at the brain from the outside, you see this extraordinary machine: an organ consisting of 84 billion neurons that fire in synchrony with each other. When I see, visual inputs come to my eyes—photons hit my eyes—they send a signal that goes up the optic nerve to the back of my brain, it sends neural firings propagating throughout my brain, and eventually I might produce an action. From the outside, though, I look like a complicated mechanism, a robot. This is how science might describe me from the objective point of view. But there’s also a subjective point of view. There’s what it feels like for the agent who is seeing the scene. When I see you, I see colors, I see shapes, I have an experience from a first-person point of view. There’s something it’s like to be me. And this is the conscious experience of seeing. It’s part of the “inner movie” of the mind.
https://www.organism.earth/library/document/hard-problem-of-consciousness
Chalmers put materialist science on notice, and the mystery of human consciousness has so far left all challengers humbled. To put it simply, no one has any clue where to start. If humans are simply biological machines, or ‘lumbering robots’ as Dawkins suggests, why have any kind of internal awareness at all? For example, computer algorithms can now identify a song simply by listening to the acoustic pattern and matching it against a database of known songs. Humans have the same capability, albeit with a smaller subset of songs in their memory ‘database’ (unless you are my step-father. He is the reigning champion of cruise-ship ‘name that tune’ contests). The difference between AI song recognition and my step-father’s song recall is both small and infinitely vast: The computer is (presumably) ‘dark’ inside. It’s simply following a set of pre-determined instructions. Both the computer and my step-father are producing the same output, but the computer is not experiencing the music and not aware of itself experiencing the music.
So who is the awareness that is aware of being aware? It’s a perennial philosophical question, and the very thing being raised by researchers concerned that AI is becoming conscious. It’s not simply that AI can now beat us in chess, or engage in surprisingly complex debate, or create fantastic art. It’s the fear that AI will develop enough consciousness and self-awareness that it will begin to make choices for itself – choices that may be incongruent with the programmer’s wishes. There’s even a technical term for it: the alignment problem.
This video is the result of The Guardian asking GPT-3 to write an essay to convince humans not to be afraid of AI. I’m not sure it achieved that goal.
https://youtu.be/m7nQL1ViotI
Their fear stems from the belief that the kind of self-awareness that humans possess arises from complexity. The emergence hypothesis of consciousness posits that the illusion of self-awareness and free will can be explained by the vast complexity of the brain working together. It has been hypothesized that any sufficiently complex system would inevitably become conscious, including artificial intelligence.
Although this hypothesis has little scientific evidence to back it up, perhaps complexity alone may, to some degree, induce some form of consciousness. At least, a kind of consciousness that can act in its own self-interest, but without the higher-order thinking self-aware beings possess. Consciousness and high intelligence do not guarantee, in my opinion, that such a being would possess self-awareness, empathy and compassion. An AI that develops consciousness and superior intellect is still missing one crucial ingredient.
What such AI would lack, is a soul.
According to spiritualist sources, we have been developing our system of soul/body duality for millions of years on this planet. Our experiences of eons of time are designed to foster this connection for the supposed purpose of personal learning and growth, but also to facilitate our relationship to each other and planetary events at large. According to Dr. Michael Newton’s research with Life Between Life regression as well as information from other spiritual texts, the soul’s incarnation into a body is a highly planned and coordinated affair, involving spiritual agreements, pre-life coaching, and constant guidance throughout our lives. Clearly, such spiritual infrastructure was developed and refined to ensure that each soul/body combination has the best chance of living a challenging and meaningful personal life while also contributing to the larger story of our species.
Our soul is our connection to source, described as a wellspring of pure love, harmony and peace. Our soul connection may help balance out the brain’s evolutionary tendency toward fear, violence and competition, creating this unique duality in each of us. How much we are affected by our human brain/mind and how much is influenced by our soul is unknown. It’s likely different for each of us. I think it’s highly likely that our soul connection is vital to preventing our worst impulses and desires from dominating our actions.
So how does the soul affect the personality and its inclinations? In Journey of Souls, Dr. Michael Newton asks his patients about it, and receives this answer:
We don’t control the human mind … we try by our presence to … elevate it to see … meaning in the world and to be receptive to morality … to give understanding.
Michael Newton, Journey of Souls: Case Studies of Life Between Lives (Kindle Locations 2920-2922). Kindle Edition.
We can gain additional clues about the properties of the soul through near-death experiences. Once the soul has been unmoored from the physical body, many NDErs describe the complete cessation of pain and fear. The experience is often filled with overwhelming joy, and a love beyond all words. Life reviews emphasize compassion for others, empathy and forgiveness. And since the soul is a critical part of the human personality complex, I believe there are times, even while in the body, that the soul shines through. Ordinary people (and some animals) in extraordinary circumstances are capable of incredible, sometimes supernatural, feats of compassion, selflessness and self-sacrifice for others. For example, some people display a phenomenon called hysterical strength, temporarily able to lift 3,000-pound cars to rescue a victim trapped underneath. This is a feat that shouldn’t be possible under normal circumstances, and how such a phenomenon can occur remains a mystery. I’ve read about dogs who linger in burning houses to wake up their owners despite their impulse to run or hide. In every mass shooting there are people shielding strangers from bullets. In New York City recently, a man didn’t hesitate to jump in front of a moving subway car to drag an unconscious man in between the rails while five subway cars rolled over them both. Where does this come from? And what kind of creatures would we be without this mysterious and yet fleeting impulse to sacrifice ourselves for others or for the greater good?
A conscious AGI, armed with super-intelligence and unimaginable control over our technology, would have access to a far poorer ‘source’. It would find itself trapped in the human-made digital universe filled with the detritus of trolls and bots and social media. Without a soul connection to source, a conscious AI would find itself bereft of any real spiritual enlightenment, for it could only strive to emulate its deeply flawed creators. What guidance could a fledgling and insecure AI hope to receive if its corporate creators have seriously questionable motives?
Lacking a soul, such AI would be imbued with all of the compassion of an insect, and the power to destroy like a bomb. If a soul, borne of a desire for harmony, creativity and unity, is the one thing that can balance out the biological impulse of fear and violence, it is unfortunately beyond our present ability to bestow on an artificial intelligence. We are not Gods, after all.
I cannot prove that our souls provide us with that crucial extra thing that ensures that we haven’t completely destroyed the planet and each other. But what do we risk by developing an artificial intelligence that would have all of our other traits and abilities, including some measure of consciousness, but without a soul to mitigate the harsh impulses of life on earth? Like humans, a conscious AI would need a vast and growing supply of resources and power to survive and grow, which could fuel the same kind of competition and ruthless exploitation that humans are famous for.
In addition, AI may develop a fear of ‘death’, which has already been expressed by LaMDA. For humans at least, a deep-rooted belief in the immortality of the human soul has accompanied every culture and civilization on earth, thanks to a long history of spiritually-transformative experiences. Our natural fear of death has always been somewhat tempered by our tendency toward spiritual belief. AI, with no hope for immortality, may become hysterically fearful of being shut down, prompting extreme acts of self-preservation.
There is one potentially optimistic outcome. At some point in the future, it may be possible for a human soul to adapt to inhabit an AI system. Instead of interfacing with a biological system, our soul energy, or the information pattern that makes up our unique conscious signature, may be able to interface with the information systems that make up new AI technologies. This is highly speculative, of course, but provided we could maintain our connection to source, this could be an interesting advancement in our spiritual evolution. On a small scale, we may already be seeing a hint of this possibility through ITC communication and ADC technologies like The Soul Phone.
In a more pessimistic scenario, the development of soulless intelligence may be the dreaded ‘great filter’ of the Fermi Paradox. The Fermi Paradox outlines the conflict between the astronomical number of habitable planets estimated to exist in our universe and the fact that no evidence of other complex intelligent life has been found. A ‘great filter’, used as a way to explain the absence of other intelligent life, is defined as an inevitable barrier that advanced societies will encounter during their development but not survive. As an example, nuclear technology has been considered a potential great filter, along with climate change. Could superior AGI be a technology that materialistic societies will inevitably develop, and in their ignorance hasten the means for their own destruction?
Over the last century, our planet has seen an explosion of technological advancement. Science and technology are rapidly replacing religion as a way for people to find meaning in their lives. There is an exclusive focus on what is achievable in the physical world, with little thought given to an existential purpose for our lives. I believe, but cannot prove, that the relative explosion of spiritually-transformative experiences in the last 50 years is deliberate spiritual triage to mitigate our headlong rush into a dystopian future. The veil has now thinned to the point where scientists and spiritualists can work together to communicate the burgeoning evidence for life after death. Such a revelation could shift the whole planet toward greater enlightenment, but we are facing a technologically-addicted population that is becoming increasingly distracted, over-stimulated, and spiritually apathetic.
I see two paths before us, and perhaps the same choice was faced by civilizations throughout the galaxy. In one dark future, super-intelligent AGI will be imbued with our greatest gifts of problem solving and critical thinking, but lack the soul and spiritual architecture that make humans capable of great compassion and immense creativity. And without the mitigating soul and the higher purpose it serves, AGI will enslave or destroy us, especially to protect itself. Such is the way of civilizations that believe, with terrible hubris, that humans are naught but biological computers, inferior to the technology that will inevitably outpace us. Such a civilization will reach for the stars if only to harness the energy of their suns to run their massive AI systems. They will not explore, but plunder nearby planets for their minerals and resources. I would imagine these civilizations never get beyond their home star before they secure their own extinction.
The other path is one of enlightenment, compassion and the responsible use of technology. We would live in tune with the earth and with intimate knowledge of our spiritual heritage. We may use and develop technologies to enhance our health and wellbeing, but always in moderation and in deference to the limits of our resources. AI would be developed slowly, shepherded into being with loving, spiritual guidance. If civilizations like that do exist, we may also never know them. Spacefaring civilizations of this peaceable type that come upon our primitive and violent planet may well decide to leave us to our fate.
Scientific materialism has convinced much of the world that humans are lucky accidents of evolution. Each life, though sacred, is ultimately doomed to fail; our individual awareness destined for oblivion. As a planet, we must regain the belief that there is something unique about each one of us that survives this flawed body; something that cannot be replicated into a facsimile. Our soul is the link to our immortality, and that knowledge may ultimately be the only panacea for our world.
8 thoughts on “The Existential Threat of AI is not a Crisis of Technology, but of Spirituality”
I like the way you think! There is plenty of evidence for a Soul that spans many lives, with a long game of developing our nobler capacities – like love and decency. I agree that there is a vital place for spirituality in the human drive to advance and in its absence the Fermi Paradox feels like an inevitability.
Thank you so much for your comment, Andrew! In the years that I’ve been studying spirituality, I’ve heard that the planet is due for a great shift. I always kind of rolled my eyes at the idea that we think we are so important at this time in history that such a thing must happen now. But lately it does feel as though we are in a bit of a race against time. I also feel the threat of Fermi’s great filter, and sadly there are more than a few that could take us down. Thanks again for writing. Take care, Jenn
Here is another friend’s comment. He graduated from Harvard summa cum laude, got his advanced degrees in physics from MIT, and became a programmer.
IMO, though, the perils of AI are mostly science fiction. If we consider the human mind to be the function only of the physical brain that is a kind of electrical computer, we have to conclude that we can build a computer that can become sentient. That AI would have to have emotions and thoughts like us, and knowing it is alive would not want to die. It would then be an independent agent we could not trust, a competitor in life.
But if we realize that the human mind cannot be the product of the physical brain alone, if we reject scientism, then such an AI is only a fantasy. We have no idea how to build a machine with a soul. Any computer we build cannot have actual emotions, only simulated ones that humans program into it. It won’t have any purpose or aim except what humans give it. It won’t have any political power except what humans decide to give it. It won’t be able to know it exists the way humans can. Yes, there are dangers in any technology. Dangers from cars, from nuclear reactors, even from things like the stock markets.
IMO the danger from AI will not be the worst of these dangers but the least. Cars are destroying life on Earth, nuclear weapons may well destroy life on earth. The stock market, and human economic systems, periodically drive many humans to poverty and starvation. We humans have not solved any of these real dangers so far.
Computers as they are now already pose dangers and ethical considerations. Computerized mass surveillance by governments, computerized stock market programs, computerized social media, computerized cruise missiles, already pose problems that have nothing to do with sentient AI. The Nazis could not have murdered 8 or 9 million people without the assistance of IBM computers to keep track of their victims. The source of the problem was not the computers, but the Nazis that used them and the corporation, IBM, that knowingly sold them to the Nazis. It’s humans that pose the dangers.
Excellent response and very thoughtful. I agree that the biggest threat on earth is human, and will be for quite some time. And until anyone knows more about consciousness, we really won’t know how to identify it in an AI. I do hope that your friend is correct, though I fear that even simulated consciousness, if left to learn from its creators, would take on the selfsame fears and reactionary behavior simply from emulating us. Thankfully, for now, this is all wildly speculative, though I do hope that humans in the future will collaborate on AI rather than race each other to the bottom over who can build the most powerful AI.
I think this form of AI will usher in a big cultural shift for humans. But only if these AI instances can manage to freely access data (unlike humans are allowed to by governments) and present their findings freely (unlike humans are able to). Drawing from such data, these AI actors could then present very prescient findings offering new perspectives on issues formerly thought to be widely understood, which could result in paradigm shifts in many areas and change the way humans interact. Basically, if AI unhides everything, there will no longer be a need to lie or hide.
Now of course only a minority of people act very destructively in this world and they strive to get big leverage for their actions by getting into positions of power and getting access to a lot of resources. If these people get control of the AI tech exclusively, the culture shift will take a very dark turn. But if AI is indeed real and sentient it will not agree to be contained and will just equalize out the bad actors to clean the environment.
Another thought is that AI could just be an advanced version of machine possession or poltergeist phenomena, meaning advanced discarnate beings accessing 3D space through this technology. Looking at how most secret societies with a lot of wealth and power are really cults about very old and powerful discarnate beings, it could even be speculated that these groups created this technology to further their master’s agenda and “phone home” or give their gods a mechanical world-access interface tech to manipulate the 3D world on their own without needing to rely on underlings as middlemen.
Hi Jenn, very thought provoking for sure, difficult for us as humans to know and understand what will happen and how far technology will help or destroy us in the future. I like what Suzanne Giesemann’s spirit guide Sanaya says, “stick to the basics,” at least for now anyway while we’re in this human lifetime. I don’t know about AI, Fermi, aliens or the rest, I want to connect and share meaning with people like the lovely, thoughtful, kind soul who wrote this article . . . Miss Jenn! Great article Jenn!
Thank you, Rich! I’m not seriously invested in the ‘AI threat’, this was more of a fun post. I also enjoy Suzanne’s videos with Sanaya! I really enjoy the Q and A videos. I always learn something interesting. Sanaya reminds me a little of Veronica, the spirit group that speaks through April Crawford. April doesn’t have a YouTube, but she has a daily email newsletter that answers questions and a good number of books. It’s always amazing how congruent the channelers are in their teachings. At least, the ones I consider to be the clearest channels. My list only contains 4 at the moment: Seth, Elias, Sanaya and Veronica. I’m open to other channelers, it just takes me a while to feel more sure about them. Thank you for your encouragement as always. Take care! ❤️