Maintaining academic integrity in the age of generative AI

Over half of students (53%) said they have used generative AI to prepare assessments, according to recent research by HEPI and UCAS.

In this blog we explore what academic integrity means post-ChatGPT and outline strategies for helping students to engage with generative AI tools effectively and responsibly.

What is academic integrity and why does it matter?

Academic integrity is regarded as the foundation of higher education.

To uphold this, students have a responsibility to act with honesty when completing assignments. That means submitting original work, referencing sources, and not engaging in dishonest acts like contract cheating (using essay mills), plagiarism and collusion.

The consequences for students who act without academic integrity can be severe. The Quality Assurance Agency (QAA) states: ‘Students who commit academic misconduct, especially if they deliberately cheat, risk their academic and future careers.’

This issue has ramifications beyond universities too. There’s a risk of graduates entering the workforce without the skills their qualifications are supposed to certify. It may also undermine public confidence in the value of a degree and in the higher education sector more broadly.

How does generative AI affect academic integrity?

Since the public release of ChatGPT on 30th November 2022, large numbers of students have been using this generative AI tool for academic purposes.

It has proved capable of passing exams and producing text in a human-like tone, leading one professor to describe ChatGPT as ‘the greatest cheating tool ever invented’. Added to this, AI detection software has so far struggled to reliably determine whether students are using generative AI to write assignments in real-world settings.

Furthermore, generative AI tools are blurring the boundaries around what we mean by ‘original work’, ‘plagiarism’ and ‘collusion’.

If a student asks an AI tool to correct their spelling and grammar, is the amended text still their own work? What about if they ask an AI tool to rewrite their text in a more academic style?

If a student uses an AI tool to generate text and uses the unedited text in their assignment, that’s plagiarism, but what about if they rewrite the AI-generated text in their own words first?

If working with a peer on an individual assignment counts as collusion, does using an AI tool to generate ideas for that assignment count too? Similarly, if receiving unauthorised help from an academic counts as collusion, is it collusion to use an AI tool as a personal tutor that gives feedback on a draft assignment?

What do students think?

In the first UK-wide survey since the launch of ChatGPT, HEPI and UCAS polled 1,250 undergraduate students on their attitudes to generative AI tools.*

The survey asked students which uses of generative AI, if any, they would consider acceptable when preparing an assessment. A majority of students felt that AI was acceptable for explaining concepts (66%), suggesting ideas for research (54%) and summarising articles (53%).

Only 3% of students considered it acceptable to incorporate AI-generated text into assessed work without editing it first, while 17% thought it acceptable to use AI-generated text after editing it. Around one in seven students (15%) didn’t consider any use of generative AI in assessed work acceptable.

Based on these findings, it appears that many students are comfortable using generative AI when preparing assessed work, perceiving it as a study aid rather than as a tool for ‘cheating’.

So how do we arrive at a shared understanding of what academic integrity means and how do we help students to use generative AI appropriately? We’ve put together five suggestions, drawing on expert opinions from sector bodies including the QAA, the Russell Group and Jisc.

1) Create clear guidelines

Set out clear institutional guidance on what the acceptable use of AI in learning and teaching means for both students and staff.

For any policy to be meaningful, everyone needs to understand what AI ‘use’ actually entails: what kinds of AI use do students need to declare when submitting an assessment?

It’s likely that permitted tools and acceptable practices will vary between disciplines, so institutional guidance will need to reflect this, informed by external subject associations.

As generative AI tools evolve, guidelines must also keep pace with new technological developments to ensure they remain relevant.

2) Empower staff

Ensure that all staff understand how to use AI tools, as well as the benefits and limitations of generative AI for learning and teaching.

Staff will vary in their knowledge of, and confidence in using, generative AI, so make training opportunities and resources available, backed by support from university leadership so that all staff can spend time developing these skills.

3) Educate students

Teach students how to use generative AI effectively and responsibly.

Arguably, librarians are uniquely placed to lead AI literacy within institutions, as AI literacy builds on skills they already teach, such as information literacy, critical thinking and referencing.

This instruction could cover the different AI tools available, how to write an effective prompt, copyright and ethical considerations, and good referencing practices.

One of the most concerning findings from the HEPI/UCAS survey was that 35% of students didn’t know how often ‘hallucinations’ (plausible-sounding but inaccurate statements) appear in AI-generated text, suggesting that support on critically evaluating generative AI outputs is vital for many students.

4) Promote open conversations

Create an environment where students feel they can ask questions about generative AI tools openly, without fear of being judged or penalised.

An ongoing dialogue between students and staff about generative AI will contribute to a shared understanding about the appropriate use of these tools in education.

In fact, Jisc research found that students want to play an active role in discussions about AI in education, believing that ‘inclusive participation’ is key to shaping institutional decision-making.

5) Rethink assessment

The advent of generative AI tools means that traditional assessment methods in higher education need to be adapted.

Should universities return to unseen in-person exams? This seems like a retrograde step that would disadvantage many students and encourage rote, rather than deeper, learning.

Instead, synoptic assessments (which require students to synthesise knowledge from different parts of their course) and authentic assessments (which ask students to apply their skills and knowledge in real-life settings) may offer promising alternatives.

Reimagining assessment is not going to be easy, but for Sir Tim O’Shea, this is an opportunity to ‘do education at a higher level, to ask higher order questions, to encourage students to reflect’.

Looking ahead

In the wake of generative AI, what constitutes academic integrity is subject to intense debate.

While these tools present challenges to concepts like originality, plagiarism and collusion, the risk of not incorporating these tools responsibly into higher education is arguably far greater.

Students will be graduating into an AI-driven world and future employment opportunities will require technological literacy. We need to make sure students are well-prepared.

The policy paper based on the findings of the HEPI/UCAS survey is available to download from the HEPI website.

*Kortext provided some financial support for this research. However, full editorial control was retained by HEPI.
