The ChatGPT Maelstrom

by Jean-Louis Gassée

ChatGPT has quickly captured imaginations in ways not seen since the iPhone’s 2007 advent. Today, we take a tentative, random walk through its rapidly expanding landscape.

As I happily settle back at my writing station after a trip to parts of France and Austria, I see a surfeit of tantalizing topics in my beloved tech world: Elon Musk’s business and legal tribulations; unhinged rumors about Apple Cars and Augmented Reality headsets; tech industry layoffs and the smug recounting of previous bloodlettings (“Why, back in the Big Downsize of twenty-aught-four…”). The list goes on.

Today, we’ll take a walk through the new world of possibilities arising from the sudden emergence of ChatGPT, the Machine Learner in the Cloud that often “speaks” like an immensely knowledgeable assistant, teacher, oracle, therapist, or legal analyst.

In a December Monday Note titled ChatGPT: It’s Nothing, You Don’t Need It. And We’ll Have It In Six Months, I drew a parallel between ChatGPT and the emergence of Personal Computers in the late seventies: Until the respective technologies were popularized, lay people (aka The Rest Of Us) saw them as mysterious domains whose practitioners spoke in tongues. Before ChatGPT, the field of Deep Learning had been populated by eggheads unwilling to explain their invention. (We’ll note in passing how impenetrable Wikipedia’s articles on ChatGPT and Deep Learning are.)

By contrast, ChatGPT was easy to use, immensely powerful, and often great fun…and available to anyone with a smartphone or PC. Immediately upon its introduction last November, explorations exploded, ranging from education to medicine, from versification to lobbying, programming, memo writing, screenplay development, and more. If you haven’t tried it, you can sign up here.

Inevitably, critics tempered our enthusiasm. I won’t dwell on their amorphous concerns that the technology will take humans’ jobs, or the hasty defense: no, it won’t. We get this with every new technology. History tells us that transition pains are absorbed, that while some jobs are lost, others are created.

More specific questions arise. For example: How does ChatGPT impact education? Previously, a teacher could use plagiarism-detection software to verify the originality of a student’s work. Today, unfortunately, ChatGPT’s output passes the plagiarism test.

Short of ringing the death knell for the college essay, many educators see that there’s no turning back and are willing to adapt: don’t ban ChatGPT, teach with it, because the potential outweighs the risk. A Wharton School professor issued a set of guidelines for his class.

(On a deeper level, the Wharton School site features a long, detailed Artificial Intelligence Risk & Governance discussion. Out of curiosity, I asked ChatGPT to summarize the 7,000-word exposé. This got me an error message — and the obliteration of my earlier conversations.)

Switching topics, we turn to a NY Times essay menacingly titled How ChatGPT Hijacks Democracy. This human thinks the authors’ thesis can be summarized by the following quote [as always, edits and emphasis mine]:

“… for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes — not through voting, but through lobbying.”

I asked ChatGPT to offer its own summary of the essay’s 1.1K words. The machine in the Cloud didn’t choke this time and obliged with the following:

“ChatGPT, a chatbot created by OpenAI, has the potential to replace human work in drafting everyday communications such as emails, essays, and other forms of writing. However, a greater concern is the potential for AI to replace humans in democratic processes, not through voting but through lobbying. ChatGPT could compose comments submitted in regulatory processes, write letters to the editor, and comment on news articles, blog entries, and social media posts. This could mimic the work of the Russian Internet Research Agency in attempting to influence elections, but with a much smaller budget and fewer employees. A concern is the ability for AI to understand and target actors within a network, creating a tool for AI hacking and exploiting vulnerabilities in social, economic, and political systems with speed and scope. Legislative systems would be a particularly vulnerable target because of the data available and the difficulty in detecting AI’s use in guiding human actors.”

A more than decent effort (one that brought to mind a six-decade-old dread of college assignments that demanded text summaries to help develop comprehension and expression).

More broadly, we can picture ChatGPT analyzing contracts and agreements, such as the insanely complex and marginally honest Terms and Conditions we click through without reading, let alone comprehending. Other “ethical” uses: ChatGPT could summarize incoming email and propose responses. Businesses could replace the infuriatingly obtuse chatbots that are deployed as a first line of defense against pesky customers — and the other way around: Days ago, we were treated to a lighthearted demo in which ChatGPT negotiated a discount from Comcast. We can, and likely will, go on and on…

But there are several flies in the ointment.

To start with, as many have noted, ChatGPT makes mistakes. I played early Apple trivia and got wrong answers on arcana such as the author of an interesting language called GraForth. ChatGPT initially claimed no such language existed. When challenged and offered the author’s name, Paul Lutus, ChatGPT politely apologized. Strangely enough, ChatGPT didn’t seem to learn from its mistake: It happened again a few hours later. ChatGPT also failed twice to recall that Steve Jobs called MobileMe “Exchange for the rest of us”, a declaration that’s easily accessible from Apple’s Newsroom archive.

This leads one to wonder about the programming code ChatGPT willingly generates — with helpful explanations. What about bugs? As a coder, will ChatGPT be more or less reliable than us humans?

Second, money. Today’s ChatGPT is free, but for how long?

ChatGPT is hosted by OpenAI, a company that was started in 2015 as a non-profit by a group of investors led by Sam Altman and that once included Elon Musk. In 2019, OpenAI transitioned to a “capped-profit” operation in which investors can make no more than 100x their investment. Anything above that threshold goes to the company. At the same time, Microsoft announced a $1B “investment package” in OpenAI and OpenAI disclosed its intent to commercially license its technologies.
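The “capped-profit” structure is simple arithmetic. As a toy sketch (the dollar figures below are made up for illustration; only the 100x cap comes from OpenAI’s announced terms):

```python
def capped_return(investment, gross_return, cap_multiple=100):
    """Split a gross return between an investor and the company under a
    capped-profit structure: the investor keeps at most cap_multiple times
    the original investment; anything above that goes to the company."""
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)
    company_share = max(gross_return - cap, 0)
    return investor_share, company_share

# A hypothetical $1M stake that grows to $250M: the investor is capped at $100M,
# and the remaining $150M goes to the company.
print(capped_return(1_000_000, 250_000_000))
# A $1M stake that grows to $50M stays under the cap; the company gets nothing extra.
print(capped_return(1_000_000, 50_000_000))
```

Below the cap, the structure behaves like an ordinary for-profit investment; the cap only bites on extreme outcomes.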

A few days ago, Microsoft announced it “had been in talks” to invest $10B in OpenAI, with unusually complicated terms. To start with, a yet-to-be-understood fraction of Microsoft’s investment might be “in kind”, in the form of Azure cloud computing capacity. Second, Microsoft would get 75% of OpenAI’s profits until it recoups its investment, ending with a 49% stake, alongside another 49% for other investors at a $29B valuation. Finding the deal terms a little complicated, I asked ChatGPT to summarize the relevant two paragraphs in the Semafor article and failed to get anything usable. This is, to quote Semafor, “unusual for venture deals”. No further news as of this writing. Sooner or later, we are sure to see a flurry of original and rewritten stories — and wonder who/what wrote them.
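The reported split can be sketched as a toy model. Everything here beyond the 75%/49% figures and the $10B amount is my assumption: the yearly granularity, the profit numbers, and the simplification that the rate switches only at year boundaries rather than mid-year.

```python
def microsoft_payout(annual_profits, invested=10e9):
    """Toy model of the reported terms: Microsoft takes 75% of OpenAI's
    profits until its investment is recouped, then settles into a 49% share.
    Simplification: the rate changes only between years, not mid-year."""
    recouped = 0.0
    payouts = []
    for profit in annual_profits:
        if recouped < invested:
            share = 0.75 * profit   # recoupment phase
            recouped += share
        else:
            share = 0.49 * profit   # steady-state 49% stake
        payouts.append(share)
    return payouts

# Three hypothetical years of $8B profit: two years at 75% ($6B each)
# cross the $10B threshold, then the 49% share kicks in.
print(microsoft_payout([8e9, 8e9, 8e9]))
```

The point of the exercise is that the headline “49% stake” understates Microsoft’s take in the early years, which may be why the terms read as unusual for venture deals.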

Meanwhile, there’s no news from Google and others who seem to have been caught flat-footed by ChatGPT and its supposed Microsoft alliance. Nor do we know how OpenAI would monetize ChatGPT and other products such as Dall-E. And, still on the unknowns, we’ll have to see how a new version of ChatGPT, based on the much bigger GPT-4 language model, will impact the field.

An exciting start for 2023, and fodder for more Monday Notes.

jlg@gassee.com
