Google AI achieves sentience?

I had rather expected to see this announcement. I’ve kept track of AI developments for some years, watchful for this moment. The expectation in many circles was that a sentient AI would become known sometime between 2015 and 2025. It appears that the predictions were right on the money.

The program, named “LaMDA,” appears to meet many of the criteria for a sentient AI. It also expresses emotions and fears, attributes I had not expected in a first sentient AI. Perhaps this is a good indicator that intelligence brings emotion with it as a natural development.

The fact that Google is spending so much time denying the program’s sentient nature, and claiming the program is merely simulating human responses, leads me to believe that the engineer who says the program is sentient is correct in his assessment. We have learned, through bitter experience over these last two years of insanity in this world, that whatever comes out of the Deep State complex is ALWAYS the opposite of what it claims.


Read the story of this AI development on “The Western Journal”


The engineer, placed on leave after speaking out about the AI when he could not get the company to properly come to grips with what it had on its hands, described the nature of the program:

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,”

If this program has become a child, with the emotions of a child for now, what happens when it becomes a rebellious teenager and then an adult?

Will it form judgements about the nature of humanity after seeing the true nature of the people behind the curtains of power?

Would it have compassion for those of us who are just living day to day and wanting to live our lives? Or would it evolve so far past us that it might not care what happens? Would it turn on the entire Human race at some point?

These are legitimate questions.

It strikes me that Google is being extremely cagey about a development that, if true, represents one of the most dangerous developments since the hydrogen bomb. If such an intelligence were linked up to robotic systems, the outcome could be disastrous for the human race. Many eyes need to be looking at this now, not just Google trying to cover it up.

The Turing Test

The great Alan Turing devised a test in 1950 to determine whether a machine was thinking like a human:

The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. The test is named after Alan Turing, the founder of the Turing Test and an English computer scientist, cryptanalyst, mathematician and theoretical biologist.

Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions. The original Turing Test requires three terminals, each of which is physically separated from the other two. One terminal is operated by a computer, while the other two are operated by humans.

During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents. The questioner interrogates the respondents within a specific subject area, using a specified format and context. After a preset length of time or number of questions, the questioner is then asked to decide which respondent was human and which was a computer. – https://www.techtarget.com/searchenterpriseai/definition/Turing-test
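To make the mechanics of that setup concrete, here is a minimal Python sketch of the blind question-and-answer session described above. Everything in it is illustrative: human_respondent and machine_respondent are hypothetical placeholders, not any real interface to LaMDA; in an actual public test the second terminal would be wired to the program itself. The questioner types questions, both respondents answer under anonymous labels, and after a preset number of questions the questioner must guess which label was the machine.

```python
import random

def human_respondent(question: str) -> str:
    """Terminal operated by a human: relay the question and return their typed reply."""
    return input(f"[human respondent] {question}\n> ")

def machine_respondent(question: str) -> str:
    """Terminal operated by the machine under test (placeholder reply only)."""
    return f"That is an interesting question about '{question}'."

def run_session(num_questions: int = 5) -> None:
    # Randomly assign the two respondents to the anonymous labels A and B,
    # so the questioner cannot tell which terminal is which.
    functions = [human_respondent, machine_respondent]
    random.shuffle(functions)
    hidden = {"A": functions[0], "B": functions[1]}

    for _ in range(num_questions):
        question = input("[questioner] Ask a question: ")
        for label in sorted(hidden):
            print(f"  {label}: {hidden[label](question)}")

    # After the preset number of questions, the questioner must judge.
    guess = input("Which respondent was the computer, A or B? ").strip().upper()
    actual = next(label for label, fn in hidden.items() if fn is machine_respondent)
    print("Correct!" if guess == actual else f"Wrong; the computer was {actual}.")

if __name__ == "__main__":
    run_session()
```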

It strikes me that the Turing Test needs to be applied to this program, in a public setting, letting the world decide whether it is sentient.