Steve Wozniak’s warning: No matter how ‘useful’ ChatGPT is, it can ‘make horrible mistakes’

Steve Wozniak doesn’t entirely trust dog videos on Facebook, self-driving cars or ChatGPT.

On Wednesday, the Apple co-founder made an impromptu appearance on CNBC’s “Squawk Box” to talk about the increasingly popular artificial intelligence chatbot. Wozniak said he finds ChatGPT “pretty impressive” and “useful to humans,” despite his usual aversion to tech that claims to mimic real-life brains.

But skepticism followed the praise. “The trouble is it does good things for us, but it can make horrible mistakes by not knowing what humanness is,” he said.

Wozniak pointed to self-driving cars as a technological development with similar concerns, noting that artificial intelligence can’t currently replace human drivers. “It’s like you’re driving a car, and you know what other cars might be about to do right now, because you know humans,” he said.

By multiple measures, ChatGPT’s artificial intelligence is impressive. It’s learning how to do tasks that can take humans days, weeks or years, like writing movie scripts, news articles or research papers. It can also answer questions on subjects ranging from party planning and parenting to math.

And it’s quickly gaining traction. ChatGPT reached 100 million users after only two months, considerably faster than TikTok, which took nine months to hit the same milestone, according to a UBS report reviewed by Reuters.

ChatGPT’s technology can certainly help humans — by explaining coding languages or constructing a frame for your résumé, for example — even if it doesn’t yet know how to convey “humanness” or “emotions and feelings about subjects,” Wozniak said.

But the platform hasn’t nailed creative projects and isn’t perfectly accurate. When CNBC Make It asked ChatGPT to write a financial blog post on tax-loss harvesting last month, results were mixed — with a lot of additional context needed to actually implement the chatbot’s advice.

Others report that ChatGPT can make blatant mistakes, like failing to solve simple math equations or logic problems.

Its competitors aren’t doing much better. One of Google’s first ads for Bard, the company’s new artificial intelligence chatbot, featured a noticeable inaccuracy earlier this week: Bard claimed the James Webb Space Telescope “took the very first pictures of a planet from outside our own solar system.”

The Webb telescope did take photographs of such planets, called exoplanets, in September 2022. But the actual first photos of exoplanets were taken by the European Southern Observatory’s telescope in 2004, according to NASA’s website.

Alphabet, Google’s parent company, lost $100 billion in market value after the inaccuracy was noticed and publicized.

Wozniak isn’t the only tech billionaire wary of those consequences.

ChatGPT and its developer, OpenAI, have “stunning” websites — but they’re bound to be corrupted by misinformation as they internalize more information from across the internet, serial entrepreneur and investor Mark Cuban told comedian Jon Stewart’s “The Problem with Jon Stewart” podcast in December.

“Twitter and Facebook, to an extent, are democratic within the filters that an Elon [Musk] or [Mark] Zuckerberg or whoever else puts [on them],” Cuban said. “Once these things start taking on a life of their own … the machine itself will have an influence, and it will be difficult for us to define why and how the machine makes the decisions it makes, and who controls the machine.”


