If you could use a tool to remove all the complexities of writing research papers, presentations, or reports, you’d jump at the opportunity, right? Released in November 2022, ChatGPT made headlines for its unexpected proficiency at producing text of all kinds, in seconds, from any input a user provides.
The unexpected power of ChatGPT has generated considerable interest and worry, especially in marketing, programming, and academia, over both the replacement of jobs and the quality of the work produced. The consensus, however, is still that human expertise is needed to verify and improve on what is essentially the AI’s approximation of what a satisfactory piece of work looks like.
But what if humans have not yet had the chance to develop this expertise? The use of AI tools by students has become a concern for both schools and universities, causing them to rethink their approach to assessments and teaching and learning in general.
What is ChatGPT and how does it work?
GPT-3 (Generative Pre-trained Transformer 3) is the third iteration of the language model developed by the company OpenAI. The chatbot interface, called ChatGPT, is built to receive inputs and give answers, so you can ask it detailed questions, request specific information (along with the desired formatting), or have an in-depth conversation.
Unlike the chatbots we regularly interact with on the web, which spit out pre-programmed answers within narrow parameters, ChatGPT is built on a Large Language Model (LLM). An LLM can produce impressive text because it has been trained using deep learning, a form of machine learning that allows a computer to make connections across vast amounts of information in a way loosely analogous to the human brain.
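The core idea of learning patterns from text and predicting what comes next can be illustrated with a toy sketch. The bigram word-counting model below is vastly simpler than the transformer-based deep learning behind ChatGPT, and the corpus and function names are invented for this example, but the principle is the same: the model learns which word tends to follow which from its training text.

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most often seen after `word` in training, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny, made-up training corpus
corpus = "the model predicts the next word and the next word extends the text"
model = train_bigram_model(corpus)

print(predict_next(model, "the"))  # "next" follows "the" most often here
```

An LLM works over billions of parameters and far richer context than a single preceding word, which is why its output reads as fluent prose rather than word-by-word statistics.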
ChatGPT is far from the only AI chatbot out there; there are many different tools that are already being marketed as virtual assistants, marketing content wizards, or virtual friends. Then there are the ones with huge institutional backing such as Chinchilla, developed by DeepMind, and LaMDA, developed by Google. The latter made news last year as a Google AI engineer was put on leave after describing the chatbot as sentient.
So what makes ChatGPT special?
ChatGPT has made such waves for two main reasons. Firstly, it is the most advanced tool yet, being both larger than its competitors and trained on more data: 175 billion parameters (values a learning algorithm can adjust independently as it learns) and 570 gigabytes of training text. For comparison, LaMDA has 137 billion parameters, and GPT-2, the previous iteration from OpenAI released in 2019, has just 1.5 billion. This shows the huge leap the technology has taken in a short period of time. Secondly, the ChatGPT trial, released to the public in November, picked up 1 million users in its first five days; users first marvelled at the chatbot and then quickly began incorporating it into their work.
What are the fears around AI writing?
We have long accepted that artificial intelligence will replace most manual jobs, but we believed the technology was a long way from getting rid of so-called knowledge workers.
While some have raised alarm bells, claiming that AI will have a significant impact on the job market, others have taken a more moderate view, suggesting we can use it to be more efficient and produce better-quality work while putting our knowledge and uniquely human qualities to greater effect.
ChatGPT and Australian education
A key part of integrating AI chatbots (in their current state) is to make sure that we can discern quality information from convincing lies. To be able to do this, we need to draw on our hard-won skills and subject expertise in a given area; but how can this be ensured if chatbots are being used in place of encoding information and demonstrating knowledge? This is the particular issue that is arising in Australian schools and universities, as students have been using ChatGPT to do assignments and cheat on exams.
The fact that ChatGPT has been used by students to create original work that evades plagiarism filters has meant that universities have had to redesign how they run and structure assessments; in some cases, returning to pen-and-paper examinations. This is a necessary step, considering that the AI model has the potential to pass US medical licensing exams on behalf of a student who does not have the requisite knowledge.
Incorporating AI within the classroom
With technology you can’t put the genie back in the bottle, so what can we do? Considering we have a tool with such wide-ranging applications, we can learn to make it work for us. As Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, pointed out when interviewed by the Guardian, “…did calculators destroy numeracy?”
While rightfully putting in place policies to reduce instances where students could use chatbots instead of demonstrating knowledge on their own, we can also provide students and teachers with the tools to augment their learning by harnessing AI chat.
ChatGPT can be used to:
- Mark and provide feedback on work created in class.
- Create quizzes and practice problems.
- Generate text prompts that can be expanded upon.
- Generate custom stories based on prompts.
- Break down difficult texts into simpler versions for English as a Second Language (ESL) students or students with disability.
- Work like a dictionary for difficult words, phrases and concepts.
- Quickly organise information that would otherwise need to be done manually.
- Create seating plans and undertake other purely administrative tasks.
- Generate essays that students can examine critically and improve upon.
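As a concrete illustration of the text-simplification use case above, here is a minimal Python sketch. The helper function and its parameters are hypothetical names invented for this example; the `simplify` function assumes the third-party `openai` package and a valid API key set in the environment, neither of which is described in this article.

```python
def build_simplify_prompt(text, audience="ESL students"):
    """Build a plain-language rewriting prompt (hypothetical helper)."""
    return (
        f"Rewrite the following passage in plain English suitable for {audience}, "
        f"keeping the meaning intact:\n\n{text}"
    )

def simplify(text, audience="ESL students"):
    """Send the prompt to a chat model. Requires the `openai` package
    and an OPENAI_API_KEY environment variable."""
    from openai import OpenAI  # third-party dependency, imported lazily
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_simplify_prompt(text, audience)}],
    )
    return response.choices[0].message.content

# Prompt construction works without any API access:
print(build_simplify_prompt("Photosynthesis converts light energy into chemical energy.",
                            "year 7 students"))
```

A teacher could adapt the same pattern for the other uses listed: swapping the prompt template is all that changes between simplifying a text, generating a quiz, or producing a practice essay for students to critique.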
This last point is extremely important. While ChatGPT has surpassed our expectations of AI-produced text, a closer look often reveals style over substance. A recent article in Wired interviewing teachers who are incorporating the tool into their classes highlights the following point:
“The students were dazzled by how quickly the chatbot rendered information into fluid prose—until they read it with a closer eye. The chatbot was fudging facts. When students asked it to back up an argument with citations from scholarly texts, it misattributed work to the wrong authors. And its arguments could be circular and illogical…contrary to fears of a cheating epidemic, copying from ChatGPT wouldn’t actually net them a good grade.”
With chatbots in their current form, we can use them to examine convincing-looking yet subpar text, with an eye to having students apply this critical thinking to things like news and social media posts.
In summary, this technology is still in its infancy, meaning caution should be observed while also being open to the many new possibilities that it promises. The ideas listed above are just the tip of the iceberg, but give an idea of how the burden on teachers can be eased while making learning a more engaging process for students—without taking away from the core skills that should be developed.