Deseret News - Tuesday, March 10, 2015

The funeral scene played like something out of science fiction.

The bodies, once new and clean and unmarked by the passage of time, were laid out on the temple altar, adorned with tags denoting their family lineage. A Buddhist priest prayed over the bodies as loved ones looked on.

The dead weren’t children or relatives, but to the elderly mourners assembled for the mass funeral earlier this year in the Kofuku-ji temple near Chiba, Japan, the 19 bodies being blessed were family. The “dead” were once Aibo (the Japanese word for “companion”) — robot dogs created by Sony that are wildly popular in Japan. The dogs are especially beloved among the country’s seniors, who make up 25 percent of Japan’s population.

It’s a tableau of how intertwined society has become, and will continue to be, with objects built with artificial intelligence, a theme that’s been sci-fi fodder for decades, from Jules Verne to Ridley Scott’s “Blade Runner” to this year’s new release, “Chappie,” the story of a lovable robot that essentially becomes human in a world that debates the consequences of his existence.

To author and filmmaker James Barrat, the fact that robot dogs are laid to rest is a sign of society’s problematic and increasingly personal relationship with artificial intelligence.

“As humans, we anthropomorphize things, and that’s incredibly dangerous when dealing with artificial intelligence,” Mr. Barrat said. “We think that because they can talk to us, they have all the machinery we do behind our eyes. They never will. And we have to be wary of our own desire to make them just like us.”

As more artificial intelligence works its way into everyday life, from Google to Siri, problems with the technology have raised concerns about the future of human control over it. In January, technology moguls and leaders like Bill Gates, Stephen Hawking and Elon Musk attended the Future of Life Institute’s A.I. conference with a plea to change A.I. research priorities to include safety measures as the technology approaches, and potentially overtakes, human comprehension.

Mr. Barrat hopes more sci-fi films will spark a serious conversation about the risks of A.I.

“Films about A.I. have inoculated us from taking these questions seriously. We’ve had so much fun with the Terminator and HAL 9000 that when we’re confronted with actual A.I. peril, we laugh it off,” Mr. Barrat said. “In the movies, the humans always win. In real life, that doesn’t always happen.”

The intelligence explosion

The central tension in “Chappie,” which opened last weekend, is that the robot main character develops as a human child would. He learns from his environment and by mimicking his creators. Whether his similarities to humans make him human is the question Chappie’s creators and the people trying to destroy him wrestle with.

The stakes outlined in “Chappie,” that humans must maintain control over the A.I. robots they create to avoid peril, are issues computer science professor Satinder Baveja deals with every day.

Mr. Baveja runs the A.I. lab at the University of Michigan, where he’s pursuing the ultimate goal of creating a definitive electronic version of a human mind. Like many A.I. scientists, Mr. Baveja is trying to create a computer that can think, problem-solve and learn from its environment just as humans do as they grow up, but he’s trying to do it responsibly.
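
What “learning from the environment” means in practice is often reinforcement learning: a program tries actions, observes rewards and gradually favors whatever worked. The sketch below is a minimal, hypothetical illustration of that loop (tabular Q-learning on an invented two-state world), not a description of the Michigan lab’s actual software.

    import random

    # Minimal tabular Q-learning sketch on an invented two-state, two-action
    # world. States, actions and rewards are hypothetical; real systems are
    # vastly richer, but the learn-from-experience loop has this shape.
    STATES, ACTIONS = [0, 1], [0, 1]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

    def step(state, action):
        # Toy dynamics: only action 1 taken in state 1 pays off.
        reward = 1.0 if (state, action) == (1, 1) else 0.0
        return reward, random.choice(STATES)

    state = 0
    for _ in range(5000):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        reward, nxt = step(state, action)
        # Nudge the value estimate toward reward plus discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

    print(Q)   # the entry for (1, 1) dominates: learned purely from experience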

“You have to plan for the worst-case scenario,” Mr. Baveja said. “If your entire power grid is automated, for instance, you wouldn’t want the A.I. to make decisions that are contrary to societal values. How you build that into a program is the interesting question.”
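
One common engineering answer to the question Mr. Baveja poses is a hard constraint layer: the optimizer proposes actions, and a separate rule-based filter vetoes any that cross lines humans drew in advance. The sketch below illustrates that shape with an invented grid example; every name, rule and number in it is hypothetical, not Mr. Baveja’s design.

    # Hypothetical sketch of a hard constraint layer over an automated grid
    # controller. Every rule, name and number here is invented for illustration.

    def violates_constraints(action):
        # Fixed societal and safety rules the optimizer is never allowed to break.
        if action["load_shed_pct"] > 10:             # never cut more than 10% of load
            return True
        if "hospital" in action["affected_zones"]:   # never de-prioritize hospitals
            return True
        return False

    def choose_action(candidates):
        # Take the highest-scoring proposal that passes every hard constraint;
        # if nothing passes, fall back to doing nothing at all.
        safe = [a for a in candidates if not violates_constraints(a)]
        if not safe:
            return {"load_shed_pct": 0, "affected_zones": [], "score": 0.0}
        return max(safe, key=lambda a: a["score"])

    candidates = [
        {"load_shed_pct": 15, "affected_zones": ["industrial"], "score": 9.1},
        {"load_shed_pct": 5, "affected_zones": ["hospital"], "score": 8.7},
        {"load_shed_pct": 5, "affected_zones": ["retail"], "score": 7.2},
    ]
    print(choose_action(candidates))   # the lower-scoring but safe retail option wins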

The questions of control Mr. Baveja grapples with today are echoes of theories pioneered by computer scientist Alan Turing and statistician I.J. Good in the mid-20th century, although technology is only now catching up to what Messrs. Turing and Good addressed.

Mr. Turing is famous for a 1950 paper in which he outlined his legendary “Imitation Game,” which shares a title with the 2014 biopic of Mr. Turing. The idea, also called the Turing Test, is that one day machines will be able to mimic human reasoning and conversation so seamlessly that a judge will not be able to tell the human apart from the machine.
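
As a protocol, the test is simple: a judge exchanges messages with two hidden respondents, one human and one machine, then guesses which is which; the machine “passes” if judges guess no better than chance. The sketch below encodes only that bare structure, with placeholder respondents invented for illustration.

    import random

    # Bare-bones sketch of the Imitation Game protocol: a judge questions two
    # hidden respondents and guesses which one is the human. Both reply
    # functions here are placeholders invented for illustration.

    def human_reply(question):
        return "Hmm, let me think about that."   # stand-in for a person

    def machine_reply(question):
        return "Hmm, let me think about that."   # stand-in for a program

    def imitation_game(questions, judge_guess):
        # Randomly hide which respondent sits behind label "A" and label "B".
        assignment = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:
            assignment = {"A": machine_reply, "B": human_reply}
        transcripts = {label: [(q, reply(q)) for q in questions]
                       for label, reply in assignment.items()}
        guess = judge_guess(transcripts)         # judge names "A" or "B" as human
        human_label = next(l for l, r in assignment.items() if r is human_reply)
        return guess == human_label              # True if the judge was right

    # A judge who can only guess at random is right half the time; the machine
    # "passes" the test when real judges can do no better than that.
    print(imitation_game(["Tell me a joke you like."],
                         lambda transcripts: random.choice(["A", "B"])))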

“In the same way a plane doesn’t need to be a bird to fly, a computer doesn’t have to be a brain to think,” Mr. Barrat said.

In 1965, Mr. Good took Mr. Turing’s idea further with a theory called the Intelligence Explosion. Mr. Good believed that if machines could match or surpass human intelligence, they would eventually create more and more advanced machines — essentially, leaving humans in the intellectual dust.
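
Mr. Good’s argument can be cartooned as a compounding recurrence: if each generation of machine designs a successor even slightly smarter than itself, capability grows like interest while the human baseline stays flat. The numbers below are invented purely to show the shape of that curve.

    # Toy recurrence illustrating Good's intelligence explosion: each machine
    # designs a successor a fixed fraction smarter. All numbers are invented.
    human_level = 1.0
    machine = 1.0        # generation 0: the machine merely matches human level
    improvement = 0.10   # each generation is 10% smarter than the one before

    for generation in range(1, 11):
        machine *= 1 + improvement
        print(f"gen {generation:2d}: machine at {machine / human_level:.2f}x human")

    # After 10 generations the machine sits near 2.6x the fixed human baseline,
    # and the gap keeps compounding while human intelligence stays flat.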

Mr. Baveja says that so far, society has been able to reap great benefits from A.I. that isn’t yet autonomous. But that doesn’t mean people shouldn’t be wary. Mr. Baveja pointed to automated stock trading as an example of computers performing a human task much faster while staying safe: most of these programs have safety measures that prevent an automated trader from losing too much money or from hijacking the process, he said.

“A lot of the A.I. we have right now is technology that gives us advice — like Google searches or Siri. Those technologies need us,” Mr. Baveja said. “With automated systems, you have to think through the entire process. With trading, we’ve anticipated problems and put in fail-safes, but what if we don’t always anticipate them?”
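
The fail-safes Mr. Baveja describes are often as blunt as a kill switch: track cumulative losses and halt the strategy the moment a fixed limit is breached, no matter what the model recommends next. The sketch below shows that pattern with an invented limit and a stubbed-out strategy; real exchange circuit breakers and risk systems are far more elaborate.

    # Hypothetical sketch of a max-loss fail-safe wrapped around a trading model.
    # The strategy, the prices and the dollar limit are invented for illustration.
    MAX_LOSS = 1_000.00              # hard limit set by humans, not by the model

    def run_session(strategy, prices):
        position, cash = 0, 0.0
        for price in prices:
            pnl = cash + position * price        # mark-to-market profit and loss
            if pnl <= -MAX_LOSS:
                print("Kill switch: loss limit hit, trading halted.")
                return pnl                       # the model gets no say past here
            order = strategy(price)              # +1 buy, -1 sell, 0 hold
            position += order
            cash -= order * price
        return cash + position * prices[-1]

    # Stub strategy that naively buys every tick; in a falling market the
    # fail-safe, not the model, is what stops the bleeding.
    falling_market = [100.0 - i * 0.5 for i in range(500)]
    print(run_session(lambda price: 1, falling_market))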

At the rate society creates new technology, Mr. Barrat says, the day machines overtake humans could come fast if safety standards aren’t put in place soon.

“There is a huge economic wind pushing human intelligence in a machine forward because our government knows someone else will develop it if we don’t,” Mr. Barrat said. “We have a window now in which we can make it safer. In 20 years, we won’t have a window anymore.”

Creation and stewardship

Despite doomsday scenarios presented in science fiction, Mr. Baveja says the potential problems A.I. development presents are inherent in all kinds of scientific progress. He’s optimistic that society will adopt standards of A.I. safety as problems emerge.

“Right now, this technology acts as sensors with humans making the decisions,” Mr. Baveja said. “That could of course be taken out of human hands, and it’s up to us to weigh the costs and benefits. I don’t think that’s unique to A.I.”

He theorized that, much as with the Aibo funeral in Japan, humans will continue to grow attached to their A.I. devices and may eventually fight for them in the way the animal rights movement fights for animals, or the way Chappie’s creators fight for his right to be his own person.

“We’ll build pretty capable creatures, and I can see people in the streets demanding their rights,” Mr. Baveja said. “I think they’ll land somewhere between a pet and a servant, kind of.”

While Mr. Barrat isn’t especially optimistic about the future of A.I., he agrees that the thing to fear isn’t the technology itself but the institutions that may wind up controlling it.

Mr. Barrat hopes society will learn from the last time a world-changing technology was developed without any legal regulation: nuclear fission.

“Our innovations have always run way ahead of our stewardship. Like A.I., the first days of nuclear fission were full of promises about benefits, and the public finally learned about it at Hiroshima,” Mr. Barrat said. “We as a species held a gun to our own heads and came close to going extinct because of our failure to manage this technology.”
