
‘A study buddy’ that raises ‘serious questions’: how uni students approached AI in their first semester with ChatGPT

When ChatGPT burst onto the scene in November last year, there was intense speculation about the implications of this technology for university teaching and learning.

There was panic about what it would mean for cheating as well as some excitement about helping students learn and academics teach.

But what has actually happened as universities have gone back to teaching and study?

Our new study looks at how Australian students and academics found ChatGPT during the first university semester with this new technology.

The story so far

When ChatGPT was released in late 2022, academics were left “stunned” by the ease with which it could write university-level essays and pass some exams, in ways that were largely indistinguishable from the work of a human student.

This immediately prompted concerns about cheating and academic integrity, although some hoped ChatGPT and similar technologies may improve teaching, learning and assessment. Experts have suggested generative AI tools could support deeper learning for students and save academics time preparing lessons.

Amid this, there have been calls for more attention to be paid to students’ perspectives. After all, they are at the centre of this change.

Our study

Between late April and late May 2023, we surveyed Australian academics and university students via an online questionnaire.

The 110 respondents (78 students and 32 academics) represented all states and territories, and a range of university courses and areas of study.

This article focuses on the student results.


At this point, many students are NOT using ChatGPT

At this early stage, almost half of all student respondents had not yet tried or used generative AI.

Of this group, 85% did not intend to use the technology at university this year. Our findings suggest these students may be worried it will be seen as cheating.

These non-AI-using students strongly associated the use of generative AI in assessment with cheating (85%), significantly more than those who had used AI (41%).

In their written responses, some students also suggested they were avoiding it because it felt unethical. As one student told us:

Although current AI is harmless, I think there are serious questions about whether future advancements will be safe for humanity.

Students also listed other worries, such as unreliable information:

Information given may be biased. [It’s] very difficult to fact check – as generative AI can often not properly say where it got its information from. For similar reasons, plagiarism and breaches of copyright.

‘It’s super useful’

Students who used generative AI talked about it as a “launch pad” to brainstorm ideas, get a better understanding of a topic or plan an essay structure.

I use it to summarise lengthy articles […] I use it for feedback and suggestions for improvement.

They highlighted the interactive nature of programs such as ChatGPT, saying it was like having a “partner” in learning. As one student said:

I feel like it’s super useful (especially with COVID impairing face-to-face learning, peer study groups etc). It’s a nice study partner or support.

Another told us:

It leads to a more efficient use of time and energy. It makes me feel less stressed and anxious about assessments, as I almost feel as though I have a study buddy or friends to help me through.

In this way, we can see generative AI being used to help manage stress. This is significant, as research has previously suggested higher stress can increase a student’s desire to cheat.

But students are confused

Students reported confusion about how the technology can and “should” be used.

For example, they were divided about whether universities should allow generative AI to be used for assessment, with 46% agreeing, 36% disagreeing and 16% unsure. Almost a quarter of students reported feeling unsure about the use of generative AI in university contexts generally, and only 8% felt very positively about it.

This confused response is not surprising – many universities are yet to provide clear guidance about this. Less than one third of the top 500 universities in the world had a clear response (be it positive or negative) to the availability of ChatGPT when their policies were reviewed in May this year.

What happens now?

As generative AI continues to evolve, it presents an opportunity to explore new frontiers in higher education. The early indications are that it is not all scary or bad.

However, our research shows some students may not want to engage with the technology unless the “right” way to do this is very clear, and access and use is equitable and ethical.

As we move forward, employee voices will be important as university graduates enter the workforce in the era of AI. But we also need to keep listening to students.

Our study will continue to monitor how students and academics use generative AI as we move into semester 2.

We invite students and academics to contribute their perspectives. Our survey is anonymous and can be accessed here.
