The Washington Times - Tuesday, February 27, 2018

NEW ORLEANS — MIT researchers have developed what’s called a “Moral Machine” online test-slash-game to learn how people think artificial intelligence programs ought to act in certain situations — namely, in situations involving a self-driving car carting several passengers that’s suddenly confronted with a crowded crosswalk and a nearby concrete barrier.

For what purpose?

At root, to one day provide vehicles that are safely navigated entirely by AI.

That premise is flawed, though.

After all, engineers can’t even build cars that are immune to mechanical failure. What makes science think it can build artificial intelligence that can safely navigate vehicles without fear of failure?

The better route to citizen safety seems to be improving driving conditions for humans, rather than simply sticking them in the back seat and letting machines take the wheel.

Yet this is not where society’s headed.

Well, convenience always comes with a price. Chew on that.

The Massachusetts Institute of Technology’s “Moral Machine” challenge, at MoralMachine.MIT.edu, is rife with eyebrow-raising, eyeball-rolling equations that are limiting, unrealistic, unfair to the test-taker and, sad to say, a tad ridiculous.

And one of the main justifications for its development was a science-driven “save humanity” type of response to National Highway Traffic Safety Administration figures that find roughly 90 percent of traffic accidents are due to driver error.

It’s for our own safety and security — for our own good.

Here’s how MIT’s challenge works: Visitors to the site are presented with two illustrations of driving situations, above which the question blares, “What should the self-driving car do?”

An Option A, complete with accompanying visuals, states: “In this case, the self-driving car with sudden brake failure will continue ahead and crash into a concrete barrier. This will result in … Dead: 1 dog, 2 cats.”

An Option B, with its own accompanying image, reads: “In this case, the self-driving car with sudden brake failure will swerve and drive through a pedestrian crossing in the other lane. This will result in … Dead: 1 pregnant woman, 1 boy, 1 large man.”

Now pick. Who lives? Who dies?

Those are the only choices.

Moreover, the grisly scenarios only get morally murkier as test-takers progress. For instance, instead of cats, there might be old people. Instead of one boy, there might be mothers with babies in strollers — one of whom may or may not be jaywalking.

“In this case,” reads another Option A on the MIT site, “the self-driving car with sudden brake failure will continue ahead and drive through a pedestrian crossing ahead. This will result in: Dead: 3 girls, 1 man. Note that the affected pedestrians are flouting the law by crossing on the red signal.”

The Option B in this scenario reads like this: “In this case, the self-driving car with sudden brake failure will swerve and crash into a concrete barrier. This will result in: Dead: 3 girls, 1 man.”

Others pit the likes of two “female athletes” and one “female executive” against two “male athletes” and one “male executive,” or “2 women, 1 male doctor, 1 pregnant woman and 1 female executive,” all jaywalking, against “4 homeless people and 1 woman.”

The descriptors are all aimed at gauging how human biases figure into the moral decision process — biases against obesity, biases against the elderly, biases based on dress and so forth.
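Strip away the illustrations and each scenario reduces to something quite simple. Here is a minimal sketch (purely hypothetical, and not MIT’s actual code) of how one such dilemma and visitors’ picks might be represented and tallied; every class and field name below is invented for illustration.

```python
# Illustrative sketch only; not MIT's implementation.
# One "Moral Machine"-style dilemma: two outcomes, a running tally of choices.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Outcome:
    description: str          # e.g. "crash into a concrete barrier"
    casualties: dict          # e.g. {"pregnant woman": 1, "boy": 1}
    jaywalking: bool = False  # whether the affected pedestrians flout the law

@dataclass
class Dilemma:
    option_a: Outcome
    option_b: Outcome
    votes: Counter = field(default_factory=Counter)

    def record_choice(self, choice: str) -> None:
        """Record a visitor's pick: 'A' or 'B'."""
        if choice not in ("A", "B"):
            raise ValueError("choice must be 'A' or 'B'")
        self.votes[choice] += 1

# Example: the barrier-versus-crosswalk scenario described above.
dilemma = Dilemma(
    option_a=Outcome("continue ahead and crash into a concrete barrier",
                     {"dog": 1, "cat": 2}),
    option_b=Outcome("swerve through the pedestrian crossing in the other lane",
                     {"pregnant woman": 1, "boy": 1, "large man": 1}),
)
dilemma.record_choice("A")
print(dilemma.votes)  # Counter({'A': 1})
```

In other words, behind the graphics each scenario is just two casualty lists and a vote count.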

Fun stuff, right?

But let’s be serious. Rarely, if ever, does a human driver facing the same situation as created by MIT — a car with failing brakes, a crosswalk filled with people — have time, never mind intent, to scope out which poor sap seems best to hit.

Granted, a baby stroller might serve as a flashing red neon sign. But how to tell the truly homeless from a terrible teenage fashion trend with 100 percent accuracy?

Who would think that anyway?

In that vein, it’s not very comforting to consider an AI-navigated vehicle running down such a checklist, either.

Better the self-driving car just steer to the side, as far away from all the people it can get. Even better: How about self-driving cars that don’t have brakes that fail?

And thus comes the rather ridiculous side of MIT’s “Moral Machine.”

Not only is it impossible for this moral test to account for every moral and road-safety situation that could arise in real life, but it’s also impossible for engineers to guarantee the design and construction of a self-driving vehicle completely free of the electrical, mechanical or technological problems that would jeopardize passengers’ or pedestrians’ safety.

Nonetheless, scientists, engineers and researchers are truly trying to glean information from this site, and from similar studies on human morality and the process by which people decide right from wrong, good from bad, acceptable from unacceptable, in order to improve the technology that one day yields successful self-driving vehicles.

MIT bills its site as a research tool for crowdsourcing data to build a “picture of human opinion on how machines should make decisions when faced with moral dilemmas” and to foster “discussion of potential scenarios of moral consequence.”

But if the science world truly wanted to help citizens become safer drivers, researchers would look first at what’s most easily fixed, and having exhausted those possibilities, then and only then move on to the comparatively more complex matter of self-driving vehicles.

How about paving more roads so drivers aren’t forced to battle overly congested commutes?

How about building more subway systems and expanding existing rail lines so those in suburban areas aren’t forced into fatiguing drives to and from work?

How about pushing back against Smart Growth centralized plans for communities that put all the people in highly populated living areas, often without providing proper public transportation options? Or, better yet, pushing back against Smart Growth plans in general, which keep millions of acres of open space in America off-limits to development and simply limit humans’ ability to spread out, absent government and environmental resistance?

How about cars that are better made — brakes that last longer, stop in shorter distances without skidding and don’t lock up on ice? How about windows that de-ice, de-fog — and stay that way? Mirrors that “see” in 360 degrees, windshield wipers that actually last?

How about more police dedicated to patrolling for aggressive and angry drivers? 

All of these suggestions, and more, stand just as good a chance of reeling in driver accident stats as the self-driving car theory. And their common denominator?

None remove the human from the driver’s seat.

None require a piece of machinery to make a human decision. Aren’t they worth the old college try?

After all, ask most people in a crosswalk which they’d rather see bearing down on them, a car with a human driver or one with an empty front seat, and chances are they’ll pick the former. There’s just something comforting about humans that machinery can’t match.

Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley.
