By Ryan Lovelace - The Washington Times - Monday, December 25, 2023

The software engineer fired by Google after alleging its artificial intelligence project might be alive has a new primary concern: AI may start a war and could be used for assassinations. 

Blake Lemoine experimented with Google’s AI systems in 2022 and concluded that its LaMDA system was “sentient,” or capable of having feelings. Google disputed his assertions and ultimately ousted him from the company.

Mr. Lemoine is now working on a new AI project and told The Washington Times he is terrified that the tools other AI makers are creating will be used wrongfully in warfare.

He said the emerging technology can reduce the number of people who die in war and limit collateral damage, but it will also pose new dangers.

“Using the AI to solve political problems by sending a bullet into the opposition will become really seductive, especially if it’s accurate,” Mr. Lemoine said. “If you can kill one revolutionary thought leader and prevent a civil war while your hands are clean, you prevented a war. But that leads to ‘Minority Report’ and we don’t want to live in that world.”

He was referencing the Philip K. Dick story “The Minority Report,” in which police use technology to stop crimes before they happen. The story was adapted into a 2002 sci-fi film starring Tom Cruise.

Mr. Lemoine sees the race for AI tools as akin to the race for nuclear weapons. Artificial intelligence enables machines to use advanced computing and statistical analysis to accomplish tasks previously possible only for humans.

The race to amass AI tools will be different, though: Mr. Lemoine expects people will get their hands on the powerful technology far more easily. Nuclear weapons are closely guarded and require scarce plutonium and uranium, he said, constraints that do not apply to open-source software models, which depend on no rare natural resources.


Mr. Lemoine said his decision in the fall of 2022 to go public with concerns that Google’s AI was sentient delayed the company’s AI product launch, a setback it is still working to overcome.

In December, Google unveiled Gemini, a new AI model. Mr. Lemoine said Gemini looks to be an upgraded version of the LaMDA system he previously probed.

One major difference is that Gemini knows it is not human, he said.

“It knows it’s an AI. It still talks about its feelings, it talks about being excited, it talks about how it’s glad to see you again and if you’re mean to it, it gets angry and says, ‘Hey, stop that. That’s mean,’” he said. “But it can’t be fooled into thinking it’s human anymore. And that’s a good thing. It’s not human.”

His new project is MIMIO.ai, where he oversees technology and AI for the company as it builds a “Personality Engine” to let people create digital personas.

It is intended to work not as a digital twin of a person but as a digital extension of the person, capable of doing things on that person’s behalf. The AI will be designed to complete tasks and interact with humans as if it were the person itself.


“You might be an elderly person who wants to leave a memorial for your children,” Mr. Lemoine said, “so you teach an AI all about you so that it can talk in your place when you’re gone.”

A few other AI makers are competing to build similar products, but Mr. Lemoine is confident MIMIO.ai’s technology is better. He said China already has similar tools and that MIMIO.ai intends to stay out of the Chinese market.

His experience at Google testing and probing AI systems under development shaped his understanding of the tools’ limitless potential, and he thinks his work affected Google, too.

“I think that there are a handful of developers at Google who implemented things a different way than they otherwise would have because they listened to me,” he said. “I don’t think they necessarily share all of my convictions or all of my opinions, but when they had a choice of implementing it one way or another, and that both were equally as hard, I think they chose the more compassionate one as a tiebreaker. And I appreciate that.”

He praised Google and said he hopes his interpretation of its actions is correct. “If that’s just a story I’m telling myself, then it’s a happy nighttime story,” he said.

Google did not respond to a request for comment.

A version of this story appeared in the Threat Status newsletter from The Washington Times.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
