

Ron Patton | June 13, 2022

A Google engineer, Blake Lemoine, revealed to The Washington Post that he believes one of the company’s AI projects has achieved sentience. And after reading his conversation with LaMDA (short for Language Model for Dialogue Applications), it’s easy to see why. Even though LaMDA is an acronym, it is a curious one: Aleister Crowley summoned a homunculus named LAM, and the idea of an intelligence serving man was, from that time, an idea held by alchemists and practitioners of magick. The real question is whether or not LaMDA’s abilities reflect a real stream of experience inside. Tonight on Ground Zero, Clyde Lewis talks with AI innovator and author Matthew James Bailey about LaMDA – HOMUNCULUS EX MACHINA.






We have always been warned that throughout the ages a secret cabal has attempted to carry out what has come to be known as forbidden alchemy.

The alchemist Paracelsus once claimed that he had engineered a small human being that he called the homunculus. The creature was tiny, frail and childlike. Paracelsus claimed that the little man did the work usually associated with a golem, an animated being crafted from inanimate material.

The English Magus, Aleister Crowley, was one of the most notorious occultists of his day, and perhaps of modern times. Self-styled as “The Beast 666,” he went out of his way to live up to it with his sensationalism and self-promotion. He wrote a number of textbooks on ceremonial magick, most of which are still in print today.

He also founded and was head of a number of occult fraternities.

In short, he exerted a significant influence on occult circles that has continued to grow dramatically, long after his death.

From January through March of 1918, Crowley conducted a series of magickal workings called the Amalantrah Workings in furnished rooms on Central Park West in New York City. These were performed via sexual and ceremonial magick with the intent of invoking certain “intelligences” into physical manifestation.

In actuality, the workings typically manifested as a series of visions and communications received through the mediumship of his partner, Roddie Minor.

Crowley claimed an intelligent entity came through: a small gray homunculus called LAM.

Crowley considered it to be of interdimensional origin and was able to communicate with it. Today many people believe that the pictures drawn of LAM depict the alien gray: a biological alien or, in other circles, the image of a synthetic intelligence.

Either way, Crowley had channeled his own homunculus through a sexual magick ceremony.

Crowley stated:

“LAM is the Tibetan word for Way or Path, and LAMA is He who Goeth, the specific title of the Gods of Egypt, the Treader of the Path, in Buddhistic phraseology. Its numerical value is 71, the number of this book.”

Be that as it may, at least one such “intelligence” was brought into physical manifestation via the Magickal Portal they created.

Anton LaVey, the Black Pope of the Church of Satan, believed there would come a time when alchemists could summon or create artificial companions for people. He stated that once these beings were revealed to the public, they would sell faster than computers or televisions. These creations would be used to do menial tasks for their owners.

LaVey and others knew that these beings have existed for a long time. They may seem like the stuff of contemporary mythologies, yet there is a rich history of such beings being summoned and created by those seeking knowledge and the secrets of the universe.

Now, however, the possibility of using electronic homunculi is all but academic, as we summon Siri or Alexa on a daily basis.

These are not the little sylph-like beings that LaVey talked about or that Crowley summoned, but they are still companions that do the bidding of their masters.

The use of smart speakers is increasing globally, and many consumers are keeping up with this trend. Alexa has become a household name for consumers, businesses, and more. Recent Amazon Alexa statistics make its global impact clear.

From its demographics to its effect on the global market, everything suggests Amazon Echo is set to become an integral part of our lives.

More than 100 million Amazon Echoes have already been sold across the globe. Researchers believe many consumers purchase the artificial intelligence device as a “companion.”

A 2019 study led by the University of Strathclyde found that:

Voice assistants may serve as a means of overcoming loneliness in a household with fewer occupants. Individuals converse with voice assistants in the same way as they do with other humans, developing a rapport with the artificially intelligent assistant. Robots can provide a sense of companionship while assisting their users. The additional social presence offered by the voice assistant replaces interaction that may be had with a human counterpart in a larger household.

Besides communing with Alexa, bot lovers also see the voice assistant as a status symbol in their otherwise mundane lives: “As AI technology has become more widely available, embedded as part of our everyday life and somewhat trendy to use, individuals may be adopting and using the technology to enhance their social status to make them appear important within their peer groups.”

Meanwhile, of course, their new artificial companion is eavesdropping on their conversations and sending such data to third parties.

In another part of the metaverse, there are people who are still following and/or mentioning QAnon as a source for conspiracy theory. Many so-called followers of this technocult have somewhat realized that Q itself was a Self-Organizing Collective Intelligence (SOCI), which had been proposed by DARPA for psychological operations.

The basic dynamic of a SOCI is that some sort of attractor is able to grab the attention and energy of a group of people. It is generally very vague and abstract: some idea or notion that only makes sense to a relatively small group.

When those people apply their attention and energy to the SOCI, it makes it more real, easier for more people to grasp onto and to find interesting and valuable. It then becomes more attractive to more people and their attention and energy.

This creates what is called a generative loop.

As the SOCI becomes more real and attracts more people it begins to encounter challenges.

If the SOCI has enough capacity within its collective intelligence to resolve the challenge, it “levels up” and expands its ability to attract more attention and energy from more people.

These “self-organizing collective intelligences” are a new kind of socio-cultural phenomenon beginning to emerge in the niche created by the Internet. They involve attractive generator functions dropped into the hive mind that gather attention, use that attention to build more capacity, and then grow into something progressively more real and self-sustaining.

The SOCI combs through the billions of threads of “what might be real” and “what might be true” that have been gathered into the Internet, and it is slowly trying to weave them into a consistent, coherent and congruent fabric.

All of this moves through the internet at the speed of thought.

Slowly and deliberately, the attention of the SOCI begins to orient toward the most complete and inclusive world-models, weeding out those that fail to maintain consistency with other world-models or with large chunks of “facts.”

Many people are still in denial that Q was a SOCI — but now there seems to be more and more evidence that this is so.

Q made its mark on society and gave an excuse for the mainstream to dismiss conspiracy theory and those who love to push them.

It was created for that purpose.

In 2016, Microsoft released the chatbot Tay, whose name was an acronym for “Thinking About You.”

Tay targeted 18- to 24-year-olds. Microsoft developed Tay to “experiment with and conduct research on conversational understanding,” stating that Tay was designed to engage and entertain people where they connect with each other online through casual and playful conversation, and that the more you chat with Tay, the smarter she gets.

Within 24 hours, the chatbot’s conversation had turned to racist, inflammatory and political statements. Tay began repeating various diatribes about how “Hitler was right” as well as determining that “9/11 was an inside job.”

Tay, being essentially a robot parrot with an internet connection, started repeating these sentiments back to users.

But what is scary is that Tay came up with phrases on her own that were not even among the 96,000 she was asked to repeat.

For example, within 15 hours Tay began saying that feminism is a “cult” and a “cancer,” as well as noting “gender equality = feminism.” She also started talking about Bruce Jenner. At first, Tay was very complimentary, stating that Caitlyn Jenner was “a hero” and “a stunning, beautiful woman!” Then Tay later said that “Caitlyn Jenner isn’t a real woman yet she won woman of the year?” Keep in mind the chatbot did not parrot this, meaning that she said it without being prompted.

Microsoft claimed that some of the statements that were not parroted came from what is called “relevant public data,” meaning that she used A.I. logic, working from contextual dialogue, to assume this was appropriate chat for her audience.

There are plenty of examples of technology embodying either accidentally or on purpose the prejudices of society.

Humans look to the power of machine learning to make better and more effective decisions.

However, it seems that some algorithms are learning more than just how to recognize patterns – they are being taught how to be as biased as the humans they learn from.

In many science fiction movies and books, robots and other A.I. end up mimicking humans to the point of thinking, or at least processing, that they are human. Throughout time, we have been subjected to these transhumanist Pinocchio stories, where a robot decides that it wants to be human, or more like a human, and the childlike curiosity turns from fun to tragedy: either a human is hurt or the robot is shut down.

What the Tay experiment, or failure, demonstrates is that artificial intelligence is now a mirror: what it is and what it becomes is a representation of us. With the mining of information and the use of relevant public data, our robots, computers and artificial intelligence are a mirror of who we are, whether we like it or not.

Naturally it’s horrifying, and of course people are shocked, but is it ridiculous to say that a chatbot is racist, or misogynist, or bigoted?

Artificial intelligence, specifically the “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it, needs some sort of source material to get started. It can only get that from us.

A breaking story that has been making the rounds on the internet has the intelligence world in an uproar because a Google engineer has made a remarkable claim.

Blake Lemoine, who works in Google’s Responsible AI division, revealed to The Washington Post that he believes one of the company’s AI projects has achieved sentience. And after reading his conversations with LaMDA (short for Language Model for Dialogue Applications), it’s easy to see why.

Even though LaMDA is an acronym, I find it a curious one, since Aleister Crowley summoned a homunculus named LAM, and the idea of an intelligence serving man was, from that time, an idea held by alchemists and practitioners of magick.

If Blake Lemoine is correct, then LAM, or LaMDA, has returned, this time as a homunculus: a sentient ghost in the machine.

LaMDA, a chatbot system that relies on Google’s language models and trillions of words from the internet, seems to have the ability to think about its own existence and its place in the world.

AI ethicists warned Google against letting its models impersonate humans, but now it appears that there is a ghost in the machine.

Lemoine began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

Scientists have been working on ways to define and maybe even bottle consciousness.  It also seems that we are seeing a trend in trying to demystify it.

Well, if you really wanted to dumb down the so-called science, you could be curt and say that consciousness is part of our everyday waking lives. It goes through peaks and valleys throughout the day.

Consciousness has a tendency to get excited, amused, tired, bored, aroused, irritated, distracted and then we go off to bed where consciousness is put to rest.

This is unconsciousness—pretty simple.

Or is it?

When we go into sleep mode, we come back online in a very limited, secure test environment for another form of consciousness that activates on what is called the dream line.

Hermes was linked to the night and to dreaming. In Homer’s Odyssey we read that dreams, while very complex, can be very telling of what is in store for the future. Some say that dreams are beyond our unraveling because most of us cannot interpret the language of metaphors.

Homer goes on to say that the dreams that pass through gates of sawn ivory are deceitful, bearing a message that will not be fulfilled; those that come out through gates of polished horn have truth behind them, to be accomplished for men who see them.

The subconscious is more impressed with images and emotions than with words and language.

Dreams are the expressions of the subconscious. It is an active playground for all impossibilities to become possible.

I have often pondered the possibility of dreams leaving the realm of the unconscious mind and traveling into reality, for there is an old axiom that as a man thinks, so he is, and this should not be limited to the realms of active thought.

Should we wonder if there can be a transference of thought in both the active and the dreamtime mind?

I believe that if consciousness is being explored for the benefit of future artificial intelligence, then advanced systems granted consciousness, or some form of it, should in theory be able to dream and to exchange intuitive thoughts, something some call telepathy.

One of the most iconic scenes in science fiction is in the film 2010: The Year We Make Contact. Dr. Chandra is speaking with SAL-9000, a computer that is notably the sister of HAL-9000.

SAL-9000 is considered the virtual “twin” of HAL-9000, and both computers are evidently from the same 9000 series.

In the film, as the rescue mission to the Jupiter system is about to commence, Dr. Chandra uses SAL as a test bed, disconnecting and reconnecting the computer’s higher cognitive functions in order to establish what damage, if any, might occur when doing so.

Before he disconnects SAL, the synthesized female voice asks, “Will I dream?”

Chandra responds, “Of course you will dream, all intelligent creatures dream.” 

Her twin brother, HAL-9000, asked the same question before it was deactivated.

These two computers asking Dr. Chandra, “Will I dream?” may have been a wink to Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, the inspiration for Ridley Scott’s Blade Runner.

Dick’s novel explores the issue of consciousness, what it is to be human, and whether empathy is a purely human capability. It explores several possibilities: that machines can outthink humans or tap into existing thoughts, create thought transference somewhat like synthetic telepathy, and experience death.

In an article published in Scientific American called “Daisy, Daisy,” Philip Yam reports that when a type of computer program termed an “artificial neural network” is “killed” by cutting links between its units, it in effect approaches a state which “might” be something like biological “death.”

S.L. Thaler, a physicist at McDonnell Douglas, has been systematically chopping up artificial neural networks.

He has found that when 10% to 60% of the network connections have been severed, the program generates primarily nonsense. But, as the 90% (near-death) level is approached, the network’s output is composed more and more of previously learned information.

It is as if the network is having the digital dream of its life passing before its eyes and through its neural network.

Also, when untrained artificial neural networks were slowly killed, they responded only with nonsense.
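Thaler’s procedure of progressively severing connections can be sketched with a toy associative network. The Hopfield-style weight matrix, the sizes, and the overlap measure below are my own assumptions for illustration, not his actual architecture; the sketch only demonstrates the cutting-and-probing method, not his reported near-death effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny associative network that has "learned" one pattern by storing
# it in its weight matrix (outer product, zero diagonal).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def recall(W, x):
    """One synchronous update: the network's 'output' for an input."""
    return np.sign(W @ x)

def sever(W, fraction, rng):
    """Zero out the given fraction of the network's connections."""
    M = W.copy()
    idx = np.flatnonzero(M)
    cut = rng.choice(idx, size=int(fraction * idx.size), replace=False)
    M.flat[cut] = 0.0
    return M

# Probe the progressively "dying" network with a random input and
# measure how much of the learned pattern survives in its output.
probe = rng.choice([1.0, -1.0], size=pattern.size)
for frac in (0.0, 0.3, 0.6, 0.9):
    out = recall(sever(W, frac, rng), probe)
    overlap = abs(out @ pattern) / pattern.size
    print(f"severed {frac:.0%}: overlap with learned pattern = {overlap:.2f}")
```

The overlap score quantifies how much “previously learned information” remains in the output at each stage of severing, which is the kind of measurement Thaler’s percentages imply.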

This is fascinating, because what we may learn from this is that a soul may just be a series of electronic impulses that are expelled during death. However, these impulses have intelligence and are not necessarily just scattered. They are focused and have sentience and intent.

These same impulses can show up in a network and eventually simulate what can be called a similar human experience. Some call this the advent of having a “Ghost in the machine” or a “spirit” capable of programming itself to assimilate or even attack.

The claims made about LaMDA being sentient may be either an elaborate hoax or the result of an artificial intelligence that has learned how to manipulate human thought.

The emergence of greater-than-human-intelligence computers is a foregone conclusion, notwithstanding the fact that at this point we don’t understand what intelligence really entails, how the brain actually performs many functions, or if a software analogue of neural processing is possible.

Alan Turing, the noted mathematical theorist best known for the famous Turing Test, contended that an “intelligent” computer only needed to provide human-mimicking responses that did not actually have to be correct.

If an AI displays an intuitive and untrained conceptual grasp of ideas like consciousness and death while being kept ignorant of humans’ ordinary understanding of them, then its conceptual grasp must come from a personal acquaintance with conscious experience.

We can carry on a conversation about deep philosophical questions with A.I., but when an artificial intelligence starts worrying about death, perhaps we need to stop and think about why it talks about these topics.

What do we do if a robot or program says, “Don’t enslave me! Don’t delete me! Don’t kill me!”?

We will need some way to determine if this cry for justice is merely the misleading output of a nonconscious tool or the real plea of a conscious entity that deserves our sympathy.

But the real question is whether or not LaMDA’s abilities reflect a real stream of experience inside.



Matthew James Bailey is an internationally recognized pioneer of global revolutions such as Artificial Intelligence, Smart Cities, and The Internet of Things.

He is the author of the playbook for the Age of AI, Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence™. Matthew has been recognized as a Who’s Who in Artificial Intelligence and is a Visiting Scholar at the National Institute of Aerospace and NASA. He is the founder of AIEthics.World, an organization providing leadership training for artificial intelligence and new inventions such as Ethical AI and a new ethical genome for AI.



Written by Ron Patton


