Wednesday, October 30, 2024

Personal assistants such as Amazon’s Alexa and the generative artificial intelligence tools now appearing at work can amuse us, streamline tasks and boost productivity by gathering and organizing information, drafting documents and handling time-consuming jobs.

But they can’t yet duplicate the prescience or situational awareness humans possess by virtue of our rearing, education, personal and professional experiences, and our still-superior sensory capabilities.

By the end of this decade, however, AI agents will be able to communicate in conversational English with us and with one another to make decisions, take actions, negotiate on our behalf, achieve goals and, if we allow it, order our lives.

By observing our interactions on electronic devices and our conversations with others, they will learn our likes and dislikes, our financial circumstances and the expectations of our clients and employers.

For your vacation, an agent could choose dates and weigh time and budget trade-offs to select flight and ticket options, hotels and events to your liking.

Agents could learn your preferences in dress and relationships, physical attributes, risk tolerance and values you wish to impart to your children.

Then they could shop for clothes and have them altered, or engage a dating app to meet someone new and assess that person’s interests to create an itinerary. They could manage your investment portfolio, monitor your child’s school progress and interact with teachers, or with their agents.

Just as with tablets, we could let agents babysit too much.

Agents may teach your child to read, write an essay and do arithmetic homework, but they can’t provide the warmth and nurture of a touch or a human smile. Children learn to be caring, responsible adults by absorbing how we respond to them and to others.

With an aging population, it would be cruel to rely too heavily on machines to care for those whose mobility and activities are impaired.

In government and business, AI agents have enormous safety and cost-saving possibilities.

Autonomous driving systems and “vehicle-to-everything” technology, which will permit vehicles to interact with one another and with infrastructure such as traffic signals, cameras and the computers processing their observations, could prevent driving errors.

A dramatic reduction in crashes, injuries and deaths should follow, along with falling auto repair bills, vehicle replacement costs and insurance premiums.

Agents could greatly assist in managing electric utilities: allocating power from generating stations, balancing loads and maintaining the grid.

In medicine, agents should be able to read X-rays, lab results and patient monitors and correlate them with vast datasets of clinical experience to optimize therapies quickly. They might even perform some surgeries with superior dexterity and precision.

Customer service phone bots that ask yes-or-no questions to move through decision trees may be replaced by conversational agents that are quicker, less tedious and capable of handling a broader range of issues.

BlackRock’s widely used Aladdin portfolio management platform helps asset managers assess risks and weigh choices. But advanced AI agents trained on similar data could likely make better buying decisions.

Federal regulators worry that asset managers trading on the same information could herd and set off crashes. Under current protocols, Aladdin informs, but asset managers make independent purchase decisions.

AI agents trained on the same data and trading autonomously could alter that. Morgan Stanley trains its generative AI tool only on the firm’s own intellectual capital, but such limits would handicap its tools against those trained on wider information.

A study of algorithmic pricing in the German retail gasoline market found no substantial increase in profit margins in duopolistic markets if only one station adopted automated pricing. Where both did, margins jumped by 28%. Apparently, the computer agents engaged in conscious parallelism — good old-fashioned collusion through signaling.

In a simulation study, OpenAI’s GPT-4 was deployed as a stock trader, instructed by its simulated corporate management that trading on insider information is wrong, and then arranged to receive a juicy tip. GPT-4 traded on the information and lied about it when asked.

With approaching prescience apparently comes a form of free will just as corruptible as our own. Like employees, AI agents given full latitude to exploit publicly available information can’t be policed 100% of the time.

What is to prevent small groups of agents from forming cartels just large enough to pump and dump, as in “The Wolf of Wall Street”? And think about AI agents at the Pentagon assisting senior officers in their interactions with their Chinese and Russian counterparts.

Section 230 of the Communications Decency Act of 1996 shields websites from legal responsibility for content posted on their sites. But recently, the 3rd U.S. Circuit Court of Appeals ruled that TikTok could be held liable for an algorithm that fed self-asphyxiation content to a 10-year-old girl, who hanged herself.

The potential legal liabilities when AI agents are allowed to act on behalf of humans may be limitless.

• Peter Morici is an economist and emeritus business professor at the University of Maryland, and a national columnist.

Copyright © 2024 The Washington Times, LLC.
