Academic Integrity

In the age of generative artificial intelligence, it is all the more important to uphold academic integrity at all times by ensuring that lecturers, students and staff adhere to the principles of responsibility, transparency and fairness.

Integrity as an attitude

The principle of scientific integrity applies not only to research but should also be actively practised in teaching. Generative AI blurs the familiar boundaries between fact and fiction. This is why a ‘human-in-the-loop’ is always needed to validate and scrutinise results and to check for biases. This responsible and transparent approach is a cornerstone of academic integrity.

The creator of content is always responsible for its correctness and quality. This principle continues to apply and covers not only course materials and documents but also, for example, learning records and academic papers. In view of generative AI, it is all the more important to fulfil this responsibility at all times and to set an example for others.

  • Texts and ideas generated by AI are based on probabilities and have no inherent link to reality. Establishing this connection is the user's task.
  • Generative AI can make mistakes, draw false conclusions and cite sources incorrectly. It is up to the user to check the output in detail for correctness.
  • The output of GenAI is based on the training data used and the underlying algorithms. Biases can therefore occur at any time and must be carefully checked for by the user.

Further important aspects of a responsible attitude can be found under good scientific practice and at the ETH AI Ethics Policy network, which participates in the global dialogue on the responsible use of AI.

The use of generative AI should be made transparent at all times. This applies not only to academic work, but also, for example, to course materials, presentations or the use of generative AI for assessment or feedback. The following basic rules should therefore be observed:

  • Be explicit about which content or parts of materials and work have been created with the help of generative AI.
  • Create transparency regarding the use of AI-based tools in lessons.
  • Discuss the use of GenAI for feedback and assessment openly.

It is important to respect the privacy and copyright of the content being worked with. AI tools require large amounts of data to train the underlying models. User input is often reused for training, so it is crucial to know the applicable data protection regulations and to handle protected materials correctly in accordance with them.

  • Before data is passed on to or released for AI-based tools, it must be verified that no rights are violated.
  • For private or non-publicly available data, only tools that guarantee compliance with data protection regulations should be used (see also tools & licences).

Scientific work

Lecturers are encouraged to establish rules and guidelines regarding the use of generative artificial intelligence for assignments, projects and assessments in their courses. There are no universally applicable solutions, so clear communication between lecturers and students is all the more important. The definition of these rules can also be part of the discussion within the course, through which pragmatic and fair solutions can be jointly developed and subsequently documented.

In course materials and scientific papers, care should generally be taken to correctly cite the sources of content, and specifically also of image material. When images and texts are produced by generative AI, this must likewise always be indicated correctly.

The various citation styles have issued guidelines for referencing AI tools, for example APA, MLA or Chicago style (further guidelines can be found in the IEEE Style Manual and in Harvard referencing). Detailed information is available on the Cite Them Right platform licensed by the ETH Library. A good overview of when and how to reference correctly can be found in the guidelines 'Citing AI Tools' (Universität Basel, Switzerland) or in 'Citation and Writing Guidance for Generative AI Use in Academic Work' (Dudley Knox Library, Monterey, USA).
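
As a purely illustrative sketch (the exact form must always be checked against the current edition of the style guide in question), a reference to a chat with ChatGPT following the APA guidance mentioned above could look roughly like this:

  OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

In the text, such a source would then be cited as usual, for example as (OpenAI, 2023); depending on the style, the prompt used can additionally be described in the body of the work or reproduced in an appendix.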

ETH Zurich's declarations of originality have been adapted to include a passage on the use of artificial intelligence. They now offer three options regarding the use of generative artificial intelligence:

  • Generative AI technologies were not used.
  • Generative AI was used and labelled.
  • Generative AI was used and, in consultation with the person in charge, not labelled.

The choice of option should always be clearly defined in advance or agreed between lecturers and students. Option 3 is only recommended in special cases where generative AI is an integral part of a work and therefore does not need to be labelled directly.

From a legal perspective, nothing has changed with regard to scientific integrity and plagiarism. The ETH Library explains how to deal with the new circumstances under Plagiarism and Artificial Intelligence (AI).

Technical detection of output generated by generative AI is currently unreliable and will probably remain so. Such detection methods should therefore not be relied upon.

The development of generative AI calls for critical reflection on the role of writing in education and science. Responsible action by researchers, teachers and students is necessary in order to exploit the potential of the technology while minimising its risks. The discussion paper 'Wissenschaftliches Schreiben im Zeitalter von KI gemeinsam verantworten' (Hochschulforum Digitalisierung) examines these aspects from different angles.

If AI tools are used contrary to the lecturer's instructions, the existing processes continue to apply. Violations such as the use of unauthorised tools or failure to disclose their use will be subject to disciplinary action (see also the Disziplinarverordnung ETH Zurich).
