Media contact

Neil Martin
n.martin@unsw.edu.au

ChatGPT has exploded into the public consciousness since it was officially released just a couple of months ago. 

The program produces human-like conversation and other text in response to user prompts, and has proved extremely popular in a short period of time. 

Debate is already raging about its use by students writing essays, and whether it could make swathes of white-collar workers redundant by eliminating the need for humans to produce written content such as reports, emails, or even articles like this one. 

Academics from UNSW Sydney have for many years researched the importance, and dangers, of artificial intelligence and are well-placed to understand the potential implications of ChatGPT across society. 

In 2022, UNSW launched the AI Institute to support the activities of over 300 UNSW academics working in artificial intelligence and spanning all UNSW faculties, with Scientia Associate Professor Haris Aziz as (interim) director and Professor Mary-Anne Williams as deputy director (Business).

Read more: UNSW launches Artificial Intelligence (AI) Institute

In addition, the Media Futures Hub, co-directed by Associate Professor Michael Richardson, is a collection of scholars researching media and emerging technologies with the aim not only of analysing the world around us, but of helping to build more just futures. 

A/Prof. Aziz says that ChatGPT is actually nothing particularly new, but is generating excitement due to the amount of data it draws on, and the way it uses that data to identify text patterns and produce very human-like content.

Patterns in the text

“ChatGPT is not that different from some very standard deep learning technologies that are already in operation across many applications,” he says. 

“The biggest innovation is not necessarily in the basic idea, but the fact that they have gathered 570GB of data, which is roughly 300 billion words, to power the software. 

“That data has been examined and analysed by the AI system, and the patterns it identifies are what help it generate text when prompted by a question from the user.

Phone screen showing a ChatGPT response to a query

ChatGPT developers claim the dialogue format makes it possible for their program to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. Photo from Shutterstock

“What has captured the imagination with ChatGPT is that it doesn’t just produce one-line answers like previous chatbots might have done – it has the ability to generate long-form responses which can summarise a wealth of information on many topics.

“It is not a ground-breaking conceptual idea, but there is some great engineering that has gone into it, plus they have had a lot of support and billions of dollars of funding from the likes of Microsoft.”

Professor Williams, who also helps to grow entrepreneurship and accelerate innovative thinking in Australia as the Michael J Crouch Chair for Innovation at UNSW, says ChatGPT can be a transformational tool for accelerating learning and for improving productivity in businesses – as long as people use it with an understanding of its power and limitations.

A boost to critical thinking

She views the program as yet another technological tool that has amazing potential, with the key being exactly how it is utilised across society. 

“I think generative AI technologies like ChatGPT will boost education and learning in general by helping to promote skills in creativity and critical thinking,” Prof. Williams says. 

“As educators, we can leverage the fact that students who are using ChatGPT and other types of generative AI might be presented with answers that are incorrect. Students need to strengthen their critical thinking skills to assess and improve the AI’s output.

"This is one of the most important skills to develop in a post-AI world full of deep fakes and misinformation. 

“ChatGPT is just a tool that can accelerate learning because it is instantaneous and engaging. But people need to learn how to get the most out of it. Just like with any technology, you need to know what it can do and what it can’t do in order to get the best results. 

“Technology can be utilised in many different and innovative ways. I'm fully aware of the legal and ethical challenges, but I don’t think that’s a reason not to be excited about the potential of generative AI.”


Associate Professor Michael Richardson co-directs the Media Futures Hub and Autonomous Media Lab, and is an Associate Investigator with the ARC Centre of Excellence on Automated Decision-Making + Society.

Associate Professor Michael Richardson, from the School of the Arts and Media, is more concerned about those legal and ethical challenges – as well as the potential negative consequences for society if ChatGPT is not used in the right way. 

He warns that technologies that are designed to increase productivity have the potential to instead incentivise companies to make people redundant or increase their workload.

“There is often a promise with a new technology that it will benefit workers because they no longer have to spend x-amount of time doing a certain, maybe menial, task and they can instead focus on doing something more creative,” he says. 

“But, in fact, it often just becomes very tempting for companies to simply cut their workforce and therefore everyone is just working in a smaller team and not gaining any meaningful benefit.

“I think we need to be extremely careful about heralding the purported time saving of using a system like ChatGPT to quickly and easily generate content in a work environment.”

Concern around bias

A/Prof. Richardson is also worried that ChatGPT could perpetuate existing gender and race-based bias.

“Over the past decade or so, a well-known problem has arisen with AI: the quality of the data that goes into the system can amplify issues of racism and misogyny,” he says. 

“There have been some lessons learned, but these problems continue to exist. One of the problems with ChatGPT is that it could present biased, offensive, and false information as fact — and in a form that makes it hard for unsuspecting users to realise what’s happening.”

A/Prof. Aziz acknowledges that ChatGPT is far from perfect, and should certainly not be considered an oracle of truth, but highlights the fact that AI-generated text can actually benefit disadvantaged members of society.


UNSW Engineering's Scientia Associate Professor Haris Aziz.

“ChatGPT is being used to generate code, but if you don’t have full control over what is being produced then it would be dangerous to rely on it for something that might have a big effect on human lives – let’s say in software used to fly an aeroplane,” he says. 

“But for the majority of content it’s not an issue – say, writing a brand document for a business. In that case, a few errors are not critical, and ChatGPT has produced something useful in a very short period of time. 

“ChatGPT can also be invaluable for people with a disability who might not be able to write or communicate to the same level as others. In addition, it can make a big difference when producing content in a language that is not your first. 

“You could imagine people from other countries coming to Australia and not speaking or writing fluent English, but being able to use ChatGPT to write a cover letter or apply for a job where they would otherwise be at a big disadvantage.”

Huge potential

While ChatGPT has been labelled in the media as a threat to vast numbers of white-collar workers, Prof. Williams, from UNSW’s Business School, is not convinced that will be the case. 

She cites the introduction of the personal computer during the latter stages of the 20th century as evidence that widespread job cuts are unlikely to occur simply because of this new technology. 


Mary-Anne Williams is deputy director (Business) of the UNSW AI Institute and Michael J Crouch Chair for Innovation at the University.

“This is a watershed moment for human-AI collaboration, because anyone who has used ChatGPT can see its potential,” she says. 

“People are re-evaluating everything they do and that is generating a fear response when it comes to what it might mean for jobs. 

“But the same thing was said about computers – that they were going to take people’s jobs away. The thought process was that computers were supposed to take over tasks and make people redundant. But computers actually created a whole new generation of jobs. 

“ChatGPT and its like are just the latest in a long line of new technologies that can enhance human capabilities, and business, industry and society will change over time as we learn to mitigate the risk and unlock the benefits.”