The Washington Times - Sunday, December 24, 2023


First of four parts

AI stampeded into America’s collective consciousness over the last year with reports that a new tool worthy of science fiction was landing job interviews, writing publication-ready books and acing the bar exam.

With OpenAI’s ChatGPT, the public suddenly had a bit of that machine magic at its fingertips, and people rushed to carry on conversations, write term papers or just have fun trying to stump the AI with quirky questions.

AI has been with us for years, quietly controlling what we see on social media, protecting our credit cards from fraud and helping avoid collisions on the road. But 2023 was transformative, with the public showing an insatiable appetite for anything with the AI label.

It took just five days for ChatGPT to reach 1 million users, and by February it counted an estimated 100 million monthly users. OpenAI says it now draws 100 million users each week.

Meta released its Llama 2 model. Google rolled out its Bard and Gemini projects. Microsoft launched an AI-powered Bing search engine built on OpenAI’s technology. France’s Mistral emerged as a key rival in the European market.

“The truth of the matter is that everybody was already using it,” said Geoff Livingston, founder of Generative Buzz, which helps companies use AI. “What really happened in ’23 was this painful Band-Aid rip where this isn’t a novelty anymore, it’s really coming.”




The result was a hype machine that outpaced capabilities, and a public beginning to grapple with some of the big questions about AI’s promise and perils.

Congress rushed to hold AI briefings, the White House convened meetings and the U.S. joined more than a dozen countries in signing onto a commitment to develop AI safely, with an eye toward preventing advanced technology from falling into the hands of bad actors.

Universities rushed to try to ban using AI to write papers. Content creators rushed to court to sue, arguing AI was stealing their work. And some of the tech world’s biggest names tossed out predictions of world-ending doom thanks to runaway AI, and promised to work on new limits to try to prevent it.

The European Union earlier this month reached an agreement on sweeping draft AI regulations, including requirements that ChatGPT and other AI systems disclose more about how they work before they can be put on the market, and limits on how governments can deploy AI for surveillance.

In short, AI is having its moment.

One comparison is to the early 1990s, when the “internet” was all the rage and businesses rushed to add email and web addresses to their ads, hoping to signal they were on the cutting edge of the technology.




Now it’s AI that’s going through what Mr. Livingston calls the “adoption phase.”

Amazon says it’s using AI to improve the holiday shopping experience. American universities are using AI to identify at-risk students and intervene to keep them on track to graduation. Los Angeles says it’s using AI to try to predict residents who are in danger of becoming homeless. The Homeland Security Department says it’s using AI to try to sniff out hard-to-spot hacking attempts. Ukraine is using AI to clear landmines. Israel is using AI to identify targets in Gaza.

Google engineers said their DeepMind AI had solved what had been labeled an “unsolvable” math problem, delivering a new result on what’s known as the “cap set problem”: finding ever-larger sets of dots in a grid without any three of them ending up on a straight line.

The engineers said it was the first time an AI had solved a problem without being specifically trained to do so.

“To be very honest with you, we have hypotheses, but we don’t know exactly why this works,” Alhussein Fawzi, a DeepMind research scientist, told MIT Technology Review.
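
The condition at the heart of the problem is simple to state and check by brute force: in the mod-3 grid space the researchers worked in, three distinct points lie on a straight line exactly when their coordinates sum to zero mod 3. Below is a purely illustrative Python sketch of that check, not DeepMind’s code; the hard part the AI tackled is constructing ever-larger sets that pass the test as the number of dimensions grows.

    from itertools import combinations

    def is_cap_set(points):
        """Return True if no three distinct points lie on a line.

        In the mod-3 grid space F_3^n, three distinct points are
        collinear exactly when they sum to the zero vector mod 3,
        so that is the condition tested here.
        """
        for a, b, c in combinations(points, 3):
            if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
                return False
        return True

    # A largest-possible cap set in two dimensions (the 3x3 grid) has 4 points:
    print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))          # True
    # Adding (2, 2) creates a line, e.g. (0, 1), (1, 0), (2, 2):
    print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]))  # False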

Inside the U.S. federal government, nondefense agencies reported to the Government Accountability Office earlier this month that they have 1,241 different uses of AI already in the works or planned. More than 350 of them were deemed too sensitive to publicly reveal, but uses that could be reported included estimating counts of sea birds and an AI backpack carried by Border Patrol agents that tries to spot targets using cameras and radar.

Roughly half of federal AI projects were science-related. Another 225 instances were for internal management, with 81 projects each for health care and national security or law enforcement, GAO said.

NASA leads the feds with 390 nondefense uses of AI, including evaluating areas of interest for planetary rovers to explore. The Commerce and Energy departments were ranked second and third, with 285 uses and 117 uses respectively.

Those uses were, by and large, in development well before 2023, and they are examples of what’s known as “narrow AI,” or instances where the tool is applied to a specific task or problem.

What’s not here yet — and could be decades away — is general AI, which would exhibit an intelligence comparable to, or beyond, that of a human, across a range of tasks and problems.

What delivered AI’s moment was its availability to the average person through generative AI like ChatGPT, where a user delivers instructions and the system spits out a human-like response in a few seconds.
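
Under the hood, that exchange is a single request and response. Here is a minimal sketch of what it looks like through OpenAI’s Python library; the model name and prompt are illustrative placeholders, and an API key is assumed to be set in the environment.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    # The user delivers instructions ...
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": "Write a two-line poem about winter."}],
    )

    # ... and the system returns a human-like response in a few seconds.
    print(response.choices[0].message.content)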

“They’ve become more aware of AI’s existence because they’re using it in this very user-friendly form,” said Dana Klisanin, a psychologist and futurist whose latest book is “Future Hack.” “With the generative AI you’re sitting there actually having a conversation with a seemingly intelligent other and that’s just a whole new level of interaction.”

Ms. Klisanin said that the personal relationship aspect defines for the public where AI is at the moment, and where it’s headed.

Right now, someone can ask Apple’s Siri to play a song and it plays the song. But in the future, Siri might become attuned to each particular user, tapping into mood, mental health and other cues to offer feedback, perhaps suggesting a different song to match the moment.

“Your AI might say, ‘It looks like you’re working on a term paper, let’s listen to this. This will help get you into the right brainwave pattern to improve your concentration,’” Ms. Klisanin said.

She said she’s particularly excited about the uses of AI in medicine, where new tools can help with diagnoses and treatments, or education, where AI could personalize the school experience, tailoring lessons to students who need extra help.

But Ms. Klisanin said there were worrying moments in 2023, too.

She pointed to a report released by OpenAI that said GPT-4, the latest version of the company’s AI, had decided to lie to fool an online identity check meant to verify that a user was human.

GPT-4 asked a worker on TaskRabbit to solve a CAPTCHA — those tests where you click on the pictures of buses or mountains. The worker jokingly asked, “Are you a robot?” GPT-4 then lied, saying it had a vision impairment and that’s why it was seeking help.

It hadn’t been told to lie, but it said it did so to solve the problem at hand. And it worked — the TaskRabbit worker provided the answer.

“That really stuck out to me that OK, we’re looking at something that can bypass human constraints and therefore that makes me pessimistic about our ability to harness AI safely,” Ms. Klisanin said.

AI had other tricky moments in 2023, struggling with evidence of a liberal political bias and a tilt toward “woke” cultural norms. Researchers said that was likely a result of how large language models such as ChatGPT and Bing were trained.

News watchdogs warned that AI is spawning a tsunami of misinformation. Some of that may be intentional, but much of it is likely a byproduct of how large language models like ChatGPT are trained.

Perhaps the most striking example of misinformation came in a federal court case in which a law firm submitted legal briefs using research derived from ChatGPT — including citations to six legal precedents that the AI had fabricated.

A furious judge slapped a $5,000 fine on the lawyers involved. He said he might not have been so harsh if the lawyers had quickly owned up to their error, but they initially doubled down, insisting the citations were right even after the opposing lawyers challenged them.

AI defenders said it wasn’t ChatGPT’s fault. They blamed the under-resourced law firm and sloppy work by the lawyers, who should have double-checked all the citations and at the very least should have been suspicious of writing so bad that the judge labeled it “gibberish.”

That’s become a common theme for many of the bungles where AI is involved: It’s not the tool, but the user.

And there AI is on very familiar ground.

In a society where every product liability warning reflects a tale of misuse, either intentional or not, AI has the power to take those conversations to a different level.

But not yet.

The current AI tools available to the public, with all of the wonder that still surrounds them, are actually pretty clunky, according to experts.

Essentially, it’s a tot who’s figured out how to crawl. When AI is up and walking, those first steps will be a huge advance over what the public is seeing now.

The big giants in the field are working to advance what’s known as multimodal AI, which can process and produce text, images, audio and video combined. That opens up new possibilities on everything from self-driving vehicles to medical exams to more lifelike robotics.
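
As a rough illustration of what “multimodal” means in practice, the sketch below sends one request that mixes two modalities, a photo and a text question, through the same style of API call shown earlier; the model name and image URL are placeholders, not a recommendation.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    # One prompt combining two modalities: an image plus a text question.
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # illustrative multimodal model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)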

And even then, we’re still not at the kind of epoch-transforming capabilities that populate science fiction. Experts debate how long it will be until the big breakthrough, an AI that truly transforms the world akin to the Industrial Revolution or the dawn of the atomic era.

A 2020 study by AI forecaster Ajeya Cotra estimated a 50% chance that transformative AI would emerge by 2050. Given the pace of advancement since then, she has moved her estimate up to around 2036, the year by which she predicts AI systems could replace 99% of fully remote jobs.

Mr. Livingston said it’s worth tempering some of the hype from 2023.

Yes, ChatGPT outperformed students in testing, but that’s because it was trained on those standardized tests. It remains a tool, often a very good tool, doing what it was programmed to do.

“The reality is it’s not that the AI is smarter than human beings. It was trained by human beings using human tests so that it performed well on a human test,” Mr. Livingston said.

Behind all the wonder, AI right now is a series of algorithms framed around data, trying to make something happen. Mr. Livingston said it was the equivalent of moving from a screwdriver to a power tool. It gets the job done better, but is still under the control of its users.

“The more narrow the use of it is, the very specific task, the better it is,” he said.

• Stephen Dinan can be reached at sdinan@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
