
Since ChatGPT’s release by OpenAI last month, the text-based artificial intelligence tool has become widely popular. It represents the most recent advancement in generative AI, which has attracted billions of dollars in funding from tech investors. Not everyone is impressed, though: Bern Elliot, a vice president at Gartner, described ChatGPT as “a parlor trick, as currently conceived.”

You could ask ChatGPT to write a plot summary of a well-known movie with its main character swapped out, say, Batman with Darth Vader in the title role, and it would generate a coherent response. Beyond generating text, ChatGPT can also help fix broken code and write essays so convincing that some academics believe they would earn an A on a college exam.

Reactions have been so strong that some have even declared that “big tech is dead.” Then there are those who believe the issue extends beyond technology and that human jobs are in danger, too.

For example, The Guardian has warned that “professors, programmers, and journalists could all lose their jobs in a few years.” This sentiment is echoed by Information Age, the official magazine of the Australian Computer Society, which suggests that bots may be able to do certain jobs better than humans. The Telegraph has also reported that these bots “can do your job better than you.”

However, hold your digital horses, we say. You won’t lose your job to ChatGPT or any other AI just yet.

Information Age provides a great example of the limitations of large language models like ChatGPT. In a story written entirely by ChatGPT, the publication included fake quotes attributed to a real OpenAI researcher named John Smith. This highlights the main issue with these models: they are unable to distinguish fact from fiction. They cannot be trained to do so, because they are simply AI programs designed to generate coherent sentences.

The enthusiasm ChatGPT has generated is intriguing. It is without a doubt deserving of praise, and the verified advancements over OpenAI’s previous offering, GPT-3, are notable in their own right. But its easy accessibility is the main factor that has really drawn attention.

Although publications like The Guardian used GPT-3 to generate articles, the model lacked a sleek, user-friendly interface and made only a small splash online. Packaging the technology as a chatbot you can converse with, and share screenshots from, fundamentally changes how the product is used and discussed. It has also led to the bot being somewhat overhyped.

Strangely, this is the second AI to make headlines in recent weeks. On Nov. 15, Meta AI released Galactica, a large language model similar to ChatGPT that was advertised as a means of “organizing science.” Fundamentally, it could generate explanations of mathematical equations or answers to questions like “What is quantum gravity?” As with ChatGPT, you ask a question and it responds with an answer. Because it was trained on more than 48 million scientific papers and abstracts, Galactica gave answers that seemed reasonable.

The bot’s creators touted it as a tool for organizing knowledge and noted that it could produce academic papers and Wikipedia articles. The issue was that most of the content it pumped out was nonsense that merely sounded authoritative, complete with references to made-up scientific literature. Academics and AI researchers, irritated by the volume and perniciousness of the false information it churned out in response to straightforward prompts, vented their frustration on Twitter. After two days of backlash, the Meta AI team decided to kill off the project.