Didactic suggestions and tips
Last update: 01.03.2022, 16:19
- Core requirements
- Designing tasks
- Examination formats
- Further materials
Core requirements
Learning objectives and constructive alignment
Performance assessments should test the competencies set down in the learning objectives. If you are uncertain how to design your performance assessment, it helps to start from the learning objectives: to what extent does the selected method enable a valid estimate of whether the learning objectives have been achieved? Aligning the performance assessment with the learning activities of the course unit (constructive alignment) is equally important.
Promotion of learning and validity
Performance assessments (including examinations) should guarantee two things. First, they should promote learning: a performance assessment should be designed such that both good preparation beforehand and the work during the assessment itself serve the learning objectives in an optimal way. Second, they should render valid assessments of students’ achievement of the learning objectives. Good didactic design often improves both the promotion of learning and the validity of a performance assessment; nevertheless, compromises between these two core requirements are usually necessary. Here it is worth giving priority to the promotion of learning, unless the performance assessment has a strong selection or certification function (e.g. the first-year examinations).
Fairness and transparency
Fairness and equal treatment are essential basic principles of performance assessments. Their selective function means that performance assessments must distinguish between students according to how well they achieve the learning objectives; discrimination on any other basis is unacceptable. Good validity and promotion of learning are prerequisites for fair examinations. What you expect of students in the examination, and what awaits them there, must also be communicated transparently and clearly. The assessment criteria must be clear and identical for all students. In the current situation, short-notice changes to alternative, perhaps unfamiliar forms of performance assessment can generate great uncertainty among students.
Competence orientation
According to Weinert (2001, 27f), competency is defined as "the cognitive skills and abilities that individuals possess or can learn and that are required for solving specific problems, together with the associated [...] readiness and ability necessary for applying the problem solutions successfully and responsibly in variable situations".
Competence-oriented assessment therefore focuses on the successful application of the competencies specified in the learning objectives to problems of varying degrees of novelty and in specialist work situations that are as authentic as possible.
Designing tasks
Assessing understanding rather than knowledge
So-called “WH” questions (why, how, how come, what for, etc.) are a simple and often fruitful means of transposing convergent knowledge questions into more open tasks. In addition to factual knowledge, they require students to understand that knowledge; mere memorisation or research will not produce the answers.
Assessing knowledge in use
Knowledge that can be reproduced correctly but cannot be applied to concrete, novel problems remains inert. A good way to assess knowledge in use is to employ case studies/examples and projects whose tasks address authentic issues. Knowledge in use is processed far more deeply, which promotes long-term, transferable learning. Conversely, performance assessments based on tasks that require knowledge in use make it possible to assess the depth and transferability of students’ knowledge.
Assessing deep understanding with the aid of new learning resources
In-depth understanding and transferable learning can also be understood as preparation for future learning (PFL): a chess master grasps a novel strategy more quickly and understands it more deeply than a novice. This can be put to use in performance assessments by giving students new learning resources as part of their task (e.g. a scientific article, a new course-related statistical model, etc.). Here it is important that students cannot simply complete the task using their prior knowledge, but must use and learn from the new learning resource.
Assessing with rich resources
Instead of learning resources, students can also be supplied with working resources. These might be a text to work on (e.g. a historical source), specialist software (e.g. a programming environment), a data set, or an academic database. In most disciplines the application of knowledge is addressed via discipline-specific resources and tools (e.g. programming, literature research, laboratory work). Successful application of knowledge through the coordination of such resources requires deep, internalised and usually transferable (specialist) knowledge.
Assessing collaboratively and/or with peer feedback
One way to assess larger groups of students using open tasks is to have them work in groups (e.g. of two to five members). This can keep the increased workload of manual correction manageable, especially when pass/fail or group grades are used instead of individual grades. Similar considerations apply to peer feedback and peer review: students first evaluate the work of their peers and give feedback; examiners can then review the evaluations and the feedback and accept, reject, extend and/or correct them. The Moodle workshop activity supports the entire peer feedback process. A great advantage of collaborative testing and peer feedback is how well they promote learning.
Combined performance assessments
Combining various (alternative) forms of performance assessment can mitigate the weaknesses of the individual forms. For example, it can make sense to follow up group work with short oral examinations in order to award individual grades.
Question types for examinations in Moodle
Essay
This question type requires students to formulate answers freely and independently. Because students can easily edit their answers on the computer, and therefore formulate them more concisely and coherently, incorrect or poor answers are often easier to identify as such. At the same time, essay questions on the computer offer the advantage of legible, typewritten texts, which can reduce correction time by up to 50% compared to handwritten, paper-based examinations. Furthermore, many examiners report that typewritten answers are also more comprehensive and more accurate than handwritten ones. The essay format is well suited to divergent tasks with a large or indefinite number of possible solutions, to ill-structured problems, and to qualitative questions. Assessing factual knowledge, on the other hand, is usually better served by closed question formats (e.g. multiple choice) or semi-closed formats (e.g. short answer).
The Moodle Essay question type is also used when files need to be uploaded during an examination. In examinations with third-party applications, students may create or edit files, which they then submit for assessment.
Multiple Choice
The multiple choice format enables efficient assessment of students’ knowledge and conceptual understanding based on questions or problem statements with some degree of convergence in their solutions. All multiple choice formats are based on students selecting the appropriate response(s) to a question from a set of given response options. The ETH Examinations Moodle supports three different multiple choice formats, each with its own Moodle question type. The question types differ in the possible number of options, the possible number of correct answers, and the available scoring methods:
- SC(ETH): As its name 'Single Choice' expresses, exactly one of the response options must be selected as correct in this question type. It is generally recommended to ask for the 'best' response rather than the 'single correct' response. In other words, each option by itself does not have to be strictly correct or incorrect; instead, one of the options must clearly be the best response. This greatly facilitates the design of more nuanced questions that require students to demonstrate not only memorised factual knowledge but also a deeper understanding of concepts.
If one asks for the single correct answer instead of the single best answer, and each of the options by itself is strictly correct or incorrect, the Single Choice format is not recommended, because students could identify options as correct (or incorrect) by process of elimination. In this case, the MTF or Kprime format is recommended instead.
- MTF(ETH): In the 'Multiple True-False' question type, multiple response options must each be evaluated individually as either correct or incorrect (or some other dichotomous category, e.g. blue/red, mammal/bird, etc.). There is no strict limit on the number of response options; however, to safeguard readability, it is recommended to use only as many response options as can be displayed on one (screen) page without scrolling. This question type is recommended when a question or problem statement requires multiple aspects to be considered.
- Kprime: The 'Kprime' question type is a special case of the MTF question type (see above) with two important distinctions. First, it always consists of exactly four response options. Second, it uses a different scoring method: by default, the 'Kprime' scoring method, which awards the full score if all four responses are correct, half the score for exactly three correct responses, and zero points otherwise (see the sketch below).
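To make this scoring rule concrete, the following minimal Python sketch reproduces the logic described above (the function name and interface are illustrative, not Moodle's actual implementation):

    def kprime_score(options_correct: int, max_points: float = 1.0) -> float:
        """Default 'Kprime' scoring for an item with exactly four true/false options.

        options_correct: how many of the four options the student evaluated correctly.
        """
        if options_correct == 4:
            return max_points      # all four evaluations correct: full score
        if options_correct == 3:
            return max_points / 2  # exactly one mistake: half the score
        return 0.0                 # two or more mistakes: zero points

This rule rewards near-complete mastery of an item while still requiring students to consider every option.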
Other Moodle question types with (partial) auto-scoring
- Short Answer question type: In this question type, students enter short text answers, which are auto-scored by comparing them against a predefined correct response (e.g.: Which river flows through Zurich? 'Limmat'). Caution is advised, however, as small deviations from the predefined correct response may lead to otherwise correct responses being scored as incorrect (e.g. 'Limatt'); a sketch of this pitfall follows this list. For this reason, all responses to short-answer items that were auto-scored as incorrect require a brief manual review. To minimise possible complications, it is furthermore recommended to keep the expected responses as short as possible.
- Drag-and-drop question types:
- Drag-and-Drop Matching: In the question type "Drag-and-drop matching", terms or pictures in a first list need to be assigned to items in a second list by drag-and-drop. It is possible to define a larger number of response options than the number of items that require matching.
- Drag-and-Drop into Text: In this question type, students fill in missing words in a cloze text via drag-and-drop from a predefined set of response options. It is possible to define a larger number of response words than the number of available gaps.
- Drag-and-drop onto images: This question type lets students drag and drop pictures or text elements onto predefined spaces in a picture. It can, for example, be used for labelling the parts of a cell or the elements of a model. It is possible to define a larger number of response options than the number of available drop zones in the image.
- Cloze: The Cloze question type enables the combination of several response formats (short answer, numerical response and multiple choice) within one question. Cloze questions are specified using a dedicated syntax (see the example below); alternatively, they can also be created with an editor. Since the possibilities for single-choice/multiple-choice questions are limited in this question type, we recommend using the question types SC(ETH), MTF(ETH) and Kprime for multiple-choice questions.
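As noted for the Short Answer question type above, auto-scoring by string comparison is sensitive to small deviations. A minimal Python sketch of a hypothetical exact-match comparison illustrates the pitfall (Moodle additionally offers wildcard and case-sensitivity settings, which this sketch ignores):

    def short_answer_autoscore(response: str, expected: str) -> bool:
        # Hypothetical exact match, ignoring case and surrounding whitespace.
        return response.strip().casefold() == expected.strip().casefold()

    short_answer_autoscore("Limmat", "Limmat")  # True: scored as correct
    short_answer_autoscore("Limatt", "Limmat")  # False: typo scored as incorrect,
                                                # hence the brief manual review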
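For illustration, a Cloze question combining the three embedded response formats could look as follows in Moodle's Cloze syntax (the content and point values are purely illustrative):

    Which river flows through Zurich? {1:SHORTANSWER:=Limmat}
    The river is approximately {1:NUMERICAL:=36:1} km long.
    It flows out of Lake {1:MULTICHOICE:=Zurich~Geneva~Lucerne}.

Each placeholder specifies the points, the embedded question type and the accepted answer(s); in MULTICHOICE, '=' marks the correct option and '~' separates the alternatives.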
Examination formats
Examinations with third-party applications
ETH Zurich conducts its On Campus Online Examinations in secure environments using technology such as the Safe Exam Browser (SEB). SEB consists of a browser component and a kiosk component. The latter secures the computer in a so-called "kiosk mode" during the examination. In this mode, access to unwanted system functions, programmes, websites (except Moodle) and other resources is prevented. If access to specific programmes or other electronic resources (e.g. scripts, datasets, websites) is explicitly desired for an examination, an advanced setup with a combination of SEB, Virtual Desktop Infrastructure (VDI) and Moodle is used. This setup enables targeted configurations that ensure access to the desired electronic resources while denying access to all others. This enables the design of complex and authentic examinations tailored to disciplinary practices by integrating specialist software (e.g. Jupyter Notebook, R-Studio) or granting access to information resources such as lecture notes, textbooks, or online repositories in open-book settings. The setup is specified together with LET at the beginning of the semester and communicated to the students so that they can prepare and familiarise themselves with the examination setting throughout the semester.
Open-book examinations
Open-book examinations grant students access to lecture notes, textbooks, their own (handwritten) notes, problem sets from the semester, or online databases while sitting the examination. Student notes in electronic form can also be made available in On Campus Online Examinations. Open-book examinations have a long tradition at ETH: approximately every second written examination is a variant of the open-book format.
The open-book format facilitates the design of examinations that are more complex and that probe students’ understanding more deeply. If such examinations contain extensive, novel information resources – not yet or only partially known to the students, and which must be consulted to complete the tasks successfully – they are furthermore well suited to assessing information literacy, critical thinking, and transfer of knowledge.
Consequently, successful preparation for such an open-book examination depends on the ability to understand, apply, and critically reflect on the course content. Open-book examinations can thus help de-emphasise rote learning and blind memorisation, and promote deeper learning during examination preparation.
Typically, the resulting open-book examinations are considerably more complex than comparable closed-book examinations; students should therefore be granted more examination time to answer the questions. In order to prepare students optimally for an open-book examination, it is important to announce the examination format early on and to familiarise students with it appropriately in learning activities during the semester.
Further materials
- Tips for examination development
- Guidelines for conducting examinations [in German]
- Guidelines on grading written examinations
- "Fifty tips for replacements for time-constrained, invigilated on-site exams" (Brown & Sambell)
- "Academic integrity, assessment security and digital assessment" (CRADLE)
Contact
Lehrentwicklung
Haldenbachstr. 44
8006 Zürich
Switzerland