Warning to Investors: ChatGPT Isn’t What You Think It Is

Steven Schwartz might’ve put his career in jeopardy because he used ChatGPT…

You see, Schwartz is a New York City lawyer. And earlier this year, he had a research emergency…

Schwartz needed to find federal precedents to help him with a case. But a billing snafu had blocked access to federal cases through his law firm's legal-research service.

Schwartz didn’t have time to fix the firm’s subscription. So he turned to ChatGPT.

Like many folks, Schwartz didn’t fully understand how the generative artificial intelligence (“AI”) platform worked. But he had heard great things about it in the news.

And ChatGPT did exactly what Schwartz needed. It gave him citations, summaries, and even printouts of eight precedents.

But there was a problem…

None of the cases were real. ChatGPT made everything up.

U.S. District Judge P. Kevin Castel wasn’t happy. He even asked if Schwartz could agree in hindsight that one of the made-up cases was “legal gibberish.”

Castel is now considering whether to impose financial penalties on Schwartz. He might also refer Schwartz to authorities that can suspend or disbar him.

And it gets worse…

Two other federal judges reacted immediately to Schwartz’s situation. They issued standing orders to limit the use of ChatGPT in their courtrooms. And others will likely follow.

Outside the court system, ChatGPT is making up stories as well…

Last month, it incorrectly told a college professor that his entire class had used AI to write their papers. It also falsely accused another college professor of sexual misconduct.

Worst of all… ChatGPT is just doing what it’s supposed to do.

Generative AI platforms like ChatGPT aren't search engines. They don't look facts up – they generate text, and their developers use feedback from real users to improve them over time.

First, developers establish the logic. Then, they expose the AI model to a lot of digitized text to pre-train it. This data covers all sorts of different topics – including legal cases.

The goal is to get the AI model to learn and start predicting what comes next.

At some point, the developers freeze the AI model. No new logic or facts come in. Then, human reviewers follow certain guidelines to test the answers it produces.
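To see why that setup leads to confident-sounding fabrications, here's a rough sketch in Python. It's a toy illustration under simplifying assumptions – not OpenAI's actual system – of a model that has only learned which words tend to follow which. It can generate fluent text, but no step ever checks whether the output is true.

```python
from collections import Counter, defaultdict
import random

# Toy "training data" -- the model only ever sees word sequences, not facts.
corpus = (
    "the court cited the case . "
    "the court dismissed the case . "
    "the judge cited the precedent ."
).split()

# "Pre-training": count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Pick a next word in proportion to how often it followed `word`."""
    words, weights = zip(*follows[word].items())
    return random.choices(words, weights=weights)[0]

# "Generation": start from a prompt and keep predicting what comes next.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)

# Prints something fluent, such as "the court cited the precedent . the case ."
# Nothing in the process checks whether the output is accurate.
print(" ".join(output))
```

The real systems are vastly larger and more sophisticated, but the core idea is the same: they predict plausible next words, not verified facts.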

Here’s the kicker…

Sam Altman, the CEO of OpenAI (the privately owned firm behind ChatGPT), could’ve kept developing his AI model in secret. And eventually, he could’ve released everything at once.

But Altman didn’t think that would go well. So instead, he pursued “iterative deployment.” Specifically, as he told Congress last month…

A big part of our strategy is, while these systems are still relatively weak and deeply imperfect, to find ways to get people to have experience with them, to have contact with reality, and to figure out what we need to do to make it safer and better.

ChatGPT does include a fine-print disclaimer at the bottom of the page. It warns that the tool “may produce inaccurate information about people, places, or facts.”

And when a person first logs on to ChatGPT, they now see the following prompt…

Our goal is to get external feedback in order to improve our systems and make them safer.

While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.

But is that enough?

It’s one thing to warn that it “may occasionally” get something wrong. It’s another matter altogether to make users understand the platform isn’t even designed to get things right.

For now, our takeaway is simple…

A lot of people are excited about ChatGPT right now. And that’s leading to an influx of attention for all things AI.

Just about every CEO who can stick an AI label on their company is doing it.

But today, ChatGPT is barely a beta test.

So if nothing else, think twice before you make any investment decisions involving AI. Even the CEO who designed the go-to platform knows it’s “still relatively weak and deeply imperfect.”

Good investing,

Marc Gerstein
