ChatGPT Doesn’t Elicit a Bedtime Story

The name stands for ‘chat’ and ‘generative pre-trained transformer.’ ChatGPT doesn’t elicit a bedtime story, but it has a tale to tell. 

Launched by OpenAI as an AI chatbot in November 2022, ChatGPT (initially built on GPT-3) is a sibling model to InstructGPT. What sets it apart is its ability to converse with humans in a remarkably natural, human-like way.

In March 2023, OpenAI released GPT-4 to paying users and through a public API. Weeks later, it announced plugin support with the following statement.

“We’ve implemented initial support for plugins in ChatGPT. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.”

And Now a Word From Our Technology

In the following passage, prompted by Interesting Engineering, ChatGPT describes itself.

“My training data encompasses a wide range of topics, so I can converse on many subjects, including but not limited to science, history, mathematics, and current events. However, I am still just a machine, and while I can generate responses that are similar to what a human might say, I do not have thoughts, feelings, or consciousness.”

One of the more inventive uses of the technology is by DoNotPay, which bills itself as “the world’s first robot lawyer.” It uses a GPT-powered bot to help consumers in disputes against organizations, covering everything from parking tickets and healthcare billing to marriage annulments.

Is It Real, or Is It Spam?

ChatGPT has its critics. Digital marketing expert Neil Patel, who comments on the topic from time to time, has criticized its output as predictable and generic.

Google Search Advocate John Mueller, commenting in a story about ChatGPT-3, said he suspects the quality of AI-generated content has improved slightly over earlier versions.

“But for us, it’s still automatically generated content, and that means for us it’s still against the Webmaster Guidelines. So we would consider that to be spam.”

ChatGPT Doesn’t Elicit a Bedtime Story, But Can We Understand, Predict, or Control It?

The most significant reaction came on March 29. 

More than 1,100 tech experts, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, signed an online letter calling for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The first paragraph concludes as follows.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The letter goes on to warn of AI systems “becoming human-competitive at general tasks,” raising concerns about accuracy and safety. At the time of writing, it had 26,223 signatures.

ChatGPT and other AI technologies can create a song, a poem, a story, or an image. They can also serve as a coding assistant, a workflow manager, a customer service rep, or a teacher. How well can they accomplish these tasks? Only the future knows.

To quote the late physician and psychologist Edward de Bono, “Without creativity, there would be no progress, and we would be forever repeating the same patterns.”