Tuesday, February 14, 2023

OpenAI’s ChatGPT, built on the third generation of its GPT language models, is impressive — and frightening.

It can write authoritative-sounding scholarly papers, computer code and poetry. It can solve math problems.

The contraption managed to pass graduate-level exams in law and business at the universities of Minnesota and Pennsylvania. It has been hooked into the email program of a dyslexic businessman to help him communicate more clearly — and that clearer writing helped him land new business.

All of that summons the usual fears: Is AI coming for my job? How will we protect our youth from the moral hazards of plagiarism?
The answer to the first is yes, if your work is fairly structured or regulated. As for plagiarism, OpenAI is working on a watermark to identify computer-generated text.

Tools that help lawyers draft briefs and programmers write code more quickly, automate aspects of white-collar and managerial critical thinking, and assist with elements of the creative process offer huge opportunities.

Microsoft is investing $10 billion in OpenAI, and Google is plowing cash into Anthropic, an OpenAI rival.

In an echo of the early days of the internet, when attaching .com to a stock’s name or prospectus could boost its value, BuzzFeed’s stock jumped some 200% after the company announced it would use AI to generate core content.

Remember, for every Amazon.com, there was a Pets.com — established in 1998, ceased operations in 2000.

ChatGPT is built on a large language model (LLM) that was trained on vast amounts of text from the internet, learning statistical patterns that let it compose plausible answers. It makes mistakes, but it is tuned with human feedback and should become less error-prone as the model is refined.

What it seems best at is throwing back the conventional wisdom. When asked for a market-beating stock portfolio, it essentially replied that you can’t beat the market.

But ChatGPT isn’t prescient and will require human supervision for any application where mistakes could cause harm — emotional, financial or physical.

Software engineers may be able to use it for first drafts of long, complex programs — or modules inside larger projects — but I doubt that Boeing will put AI-generated code into its navigation systems without close human engagement.

In sum, ChatGPT will become another tool that helps people accomplish bigger tasks more quickly, and it will replace workers, or thin their ranks, in more mundane, less satisfying activities — like law firm associates engaged in document review and consultancy staff drafting research briefings.

Like industrial robots before it, AI will free people up for more sophisticated work.

ChatGPT may write a song that captures the style of Taylor Swift and, coupled with voice-simulation software, could fool listeners, but I doubt it will match her biggest hits. The knack of true superstars is putting a finger on an emerging cultural nerve.

If an AI program can mimic the digital art of Beeple, then lots of lesser stars in the arts and entertainment world are in trouble. Much of what they do is repetitive.

So much of what we write, think and do is not mechanical or formulaic but judgmental and value-laden — we continuously choose among strategies, opportunities, products and people to engage.

A lot of material on the web is prejudiced and unchecked. LLMs will generate racist, sexist and other prejudicial assessments. After all, what people believed in the 1850s and 1950s is on the web, along with modern thinking. So are the generalizations of critical race theory and the 1619 Project.

Filters can be installed on AI programs for the most obvious offenses, but whose filters?

Most successful people are, in some measure, moderate in disposition and earnestly wrestle with trade-offs between equity and merit, and between loyalty and competence, when allocating scarce resources and opportunities. Mostly, it comes down to individual internal algorithms and assessments of risk.

That’s where the danger lies. What we think and how we act are the sum of all that has been poured into us through childhood, education, experiences and the ruminations of philosophizing cab drivers — and these days, what we can find on the web.

Our personalities may be revealed by where we go on the internet and where we physically travel, and by what we put into professional products, personal choices and chains of words, such as in emails.

How much you value your spouse’s happiness vs. your own is buried in your behavior and stored on your hard drive, hand-held device and electronic personal assistant — and is divinable.

We will need AI to compete in just about every professional line, and programs like ChatGPT must be permitted to mine that information to deliver the results we need to succeed.

Given how Google, Microsoft, Facebook and others provide software and exploit what they learn about us, our private actions and prejudices will become implicitly available to others using applications like ChatGPT.

That raises opportunities for praise and the danger of censure.

Those are terrible threats to privacy, freedom of thought, and our souls.

• Peter Morici is an economist and emeritus business professor at the University of Maryland, and a national columnist.

Copyright © 2024 The Washington Times, LLC.
