Academic Integrity

Last updated February 07, 2024

Disclaimer: this is a living document, meant as a conversation. Nothing on this page is an official rule or regulation of ETH Zurich.

Artificial Intelligence and loss of Academic Integrity are not inherently connected. We have always used tools to accomplish tasks, and Artificial Intelligence is primarily a tool. Problems arise when this tool is used in dishonest ways or when its use wastes human time.

Instructors are strongly encouraged to set rules and provide guidelines for the assignments, projects, and assessments within their courses; there is no "one-size-fits-all." Setting these rules can also be part of the discussion in your course, where the learning community can work out pragmatic and fair consensus solutions.

Whether the use of AI-generated text constitutes plagiarism is subject to much discussion in academia and society. Arguably, Large Language Models are based on a text corpus that is the work of other humans - thus, an argument can be made for calling the use of AI-generated text plagiarism.

At the same time, these probabilistic models do not reproduce any particular piece of writing; instead, documents from the text corpus are merely used to train what is essentially an associative auto-complete algorithm. Arguably, to a non-vanishing degree, humans do the same thing when generating what is considered original text: we write based on associations, and our associations while writing come from what we have previously heard or read from other humans. After reading a book, we might even start to sound like that author. In that line of thinking, an AI tool is more like somebody else building a text for us from their associations, and using AI-generated text is closer to ghostwriting.

Essential is the concept of a person's own work: turning in text straight out of an AI tool, or only slightly modified, and claiming it as one's own work is academic dishonesty, independent of the discussion about plagiarism versus ghostwriting.

Accordingly, an important instrument is the Declaration of Originality (PDF, 183 KB) (Eigenständigkeitserklärung (PDF, 175 KB)), in which the use of AI tools can now be declared.

However, instructors might also allow the use of AI tools, or they might simply expect a declaration of their use ... all of this is up to the instructor and the assignment.

Some learners freely use AI tools, while others try everything to stay as far away from these tools as possible, so as not to be even remotely accused of dishonesty. This discrepancy can result in unfair advantages.

Given the wide spectrum of courses, projects, assignments, etc., it will not be possible or even desirable to formulate a one-size-fits-all policy. Instead, in the end, instructors need to decide what is allowed and appropriate, and they should communicate these policies on a per-course, per-assignment basis. Students should also feel free to simply ask the instructor, human-to-human.

The goal is to have meaningful assessments with meaningful grades. Preparing an assignment has always included determining which tools are allowed, which are not allowed, and which need to be declared: pocket calculator, dictionary, textbook, spell-checker, literature database, etc. Large Language Models and other AI systems are just another tool. Tell your students whether ChatGPT, Bard, Copilot, Gemini, and the like are allowed or not; if they are, does their use have to be declared, and how?

For traditional, published resources like papers or books, learners should know how to cite them, but for AI-generated text, there is no original source anymore - all links to the original sources are lost in the training process. Still, the prompt can be cited, for example:

[ChatGPT23b] ChatGPT, Model GPT-4. Provide a list of arguments for and against expanding the use of nuclear power in Switzerland. Accessed 12.07.2023.
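In a LaTeX manuscript, such an entry could be sketched as follows; this is only a minimal illustration assuming a standard thebibliography environment, and the entry key simply mirrors the example above rather than any prescribed format:

\begin{thebibliography}{9}
\bibitem{ChatGPT23b} % illustrative key, not a prescribed ETH format
ChatGPT, Model GPT-4.
\emph{Provide a list of arguments for and against expanding the use of nuclear power in Switzerland.}
Accessed 12.07.2023.
\end{thebibliography}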

Some more specific styles have been developed for citing AI, for example by APA and MLA.

Since conclusions are oftentimes reached in a dialogue, and since prompt responses are not stable, an even better solution may be to use the "Share" feature in ChatGPT, which generates a shareable, read-only URL for a dialogue.

Of course, that reference would not be enough to truly support arguments. Authors would need to take the ChatGPT output as their own input and support claims and facts using published, particularly peer-reviewed, literature - an essay that only cites ChatGPT will likely simply be a bad essay.

Should an AI tool be listed as a coauthor? No! In the early days of Large Language Models, authors thought it appropriate, intriguing, funny, or witty to list ChatGPT as a coauthor on scientific papers, but most journals now have explicit rules against that. You would not list your pocket calculator as an author, either.

In any case, someone has to be accountable for the work, and that has to be a human author; an AI tool has no personhood. A human author can hardly delegate responsibility to an AI tool, and AI tools (or the companies behind them) cannot be held responsible for the algorithms' hallucinations, errors, and biases - a human author has to check, validate, approve, and adjust their output, and in the end, he or she is accountable.

ETH Zurich has clear guidelines on academic integrity (PDF, 1.2 MB), which among other important points include (common sense!) rules for authorship. ChatGPT could be cited or declared in the acknowledgements.

Can software reliably detect AI-generated text? No. In spite of advertising claims and stories about "signatures" of these tools, studies show that even the best detection tools achieve only about 80% accuracy, and every one of them delivers false positives and false negatives. Since Large Language Models are probabilistic, there is no original source text that one could use as evidence, and a student's claim of having received a false positive cannot be refuted.
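To see why false positives by themselves undermine these tools as evidence, consider a quick base-rate calculation; all numbers are illustrative assumptions, not measured rates. Suppose 10% of submissions are AI-generated, a detector flags 80% of those, and it also falsely flags 10% of honest work. By Bayes' rule,

\[
P(\text{AI}\mid\text{flagged}) = \frac{0.80\times 0.10}{0.80\times 0.10 + 0.10\times 0.90} = \frac{0.08}{0.17}\approx 0.47
\]

so under these assumptions, roughly half of all flagged submissions would be honest work - a flag alone can never serve as proof.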

Humans are frequently better judges of AI use than computers, and as an instructor, you might develop a feel for student writing versus AI writing. You can also make the student defend the text and argue the claims made, for example along the lines of: "What makes you say that? How can you prove that? Are there references for this?"

It can be very frustrating: you read a text that just reeks of pure ChatGPT, and you wonder if you should spend more time grading it than the so-called author took to generate it.

However, in spite of frustration, it is important to preserve the integrity of grading:

  • If you had allowed the use of AI tools, and they were used according to the rules you had set, the essay should be graded like any other essay. If it is a bad essay, it should simply be graded accordingly.
  • If, however, the use of an AI tool was not allowed, and you have a strong suspicion, you should start a disciplinary process so that due protocol is followed. Likewise, if students were allowed to use AI tools but fail to declare their use contrary to the instructions of the lecturer, this constitutes deception under examination law.

AI tools are currently very good at producing plausible fiction, but they are also respectably good in introductory STEM courses. Based on the assignments alone, they could likely pass courses like introductory physics or computer science. When it comes to academic dishonesty in STEM courses, though, traditional methods like copying from fellow students might still be more efficient.

In such courses, a consideration would be to explicitly allow all AI tools during certain phases of the course or parts of exams, as students will likely use such tools in their future professional life.

What can you do to prevent dishonest use? A lot of the same things you might have done before AI became widely available, only more strictly.

Also, remind students that they are fully responsible for the work they turn in, and that it is their responsibility to provide enough material and background for you to judge their competencies.

As with any other tool: if the instructor sets guidelines for which tools can be used for an assignment, or for how their usage needs to be disclosed, and a learner violates those stipulations, this is a regrettable but normal case of academic dishonesty. ETH Zurich has solid mechanisms in place to deal with academic dishonesty; see the Ordinance on Disciplinary Measures (Disziplinarverordnung).

The library offers courses on scientific writing, in particular Scientific writing – Using ChatGPT effectively and responsibly.

Comments, suggestions, etc.: Gerd Kortemeyer
