In part 1, we looked at the characteristics of online quizzes and explored how they could be used to assist or assess learning. In part 2, we explored the various question formats and the types of learning for which they are best suited. In part 3, we moved on to the writing of the questions, in particular the traps to avoid. In part 4, we saw how quizzes can be presented as games. In this final part, we look at the steps you can take to make your quizzes robust and reliable.
Assuming your quiz is being used to test knowledge, you need to take some care to ensure that it performs this function effectively. Prepare at least one quiz question for each of your knowledge objectives. You cannot be sure that a learner has achieved mastery if you test only a sub-set of your objectives. To be absolutely sure the learner has not simply got lucky by guessing answers, you may well prepare more than one question for each objective. Don’t write questions to test skills, unless you are absolutely sure quiz questions are capable of assessing these effectively, which will rarely be the case.
As we discussed in part 2, you need to select a question format that’s appropriate to the type of knowledge you are testing. For example, if you need to test recall of a technical term, use a text input question rather than a multiple choice, which only tests recognition of the term. Don’t be tempted to select different formats simply to increase variety – that’s not your purpose here.
If your objective is that a learner is able to come up with a response quickly, then add time limits to your questions.
Some people reckon they can pass any multiple-choice quiz by guessing the right answers. Your job is to prove them wrong. In part 3 we looked at techniques you can use to make life difficult for the chancer – no give-away distractors, no obvious right answers. A simple improvement is to prepare at least four options for each multiple-choice question, and better still five. That does make the questions harder to write, but when it comes to question writing, there really is no gain without pain.
Another technique you could try is to include a ‘don’t know’ or ‘not sure’ option for each question. This would score no points. Then penalise wrong answers with negative scores. This ups the stakes for the learner who wants to guess the right answers.
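This scoring scheme can be sketched in a few lines of Python. The point values here are illustrative assumptions – your quiz tool will let you set its own – but the shape is the same: nothing for admitting uncertainty, a penalty for a wrong guess.

```python
def score_answer(selected: str, correct: str) -> int:
    """Score one question that offers a 'don't know' option."""
    if selected == "don't know":
        return 0    # no reward, but no penalty for honest uncertainty
    if selected == correct:
        return 1    # full credit for a correct answer
    return -1       # wrong guesses cost a point, so guessing is a gamble

# Three answers: one correct, one honest 'don't know', one wrong guess.
total = sum(score_answer(s, c) for s, c in [
    ("B", "B"),
    ("don't know", "C"),
    ("A", "D"),
])
# total is 0: the wrong guess cancelled out the correct answer.
```

With this scheme, a learner who guesses at random across four options expects to lose points overall, whereas selecting ‘don’t know’ at least keeps their score intact.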
The greater the reward for passing an assessment, the more tempting it becomes to cheat. Really high-stakes assessments are beyond the scope of this guide, but you should be aware of the difficulties in authenticating whether the person answering the questions really is who they say they are. All sorts of complex and expensive technologies are available to authenticate remote users, including finger-printing and retina scanning, but the only way you can be really sure that the learner is who they say they are and is getting no help from a third-party or some reference source is to have them attend a testing centre which has an invigilator present. Most quizzes are not that serious, so there’s no point getting carried away with the security!
A more routine way to avoid cheating is to randomise the order in which the questions are presented and the order in which options are displayed within the questions themselves. That way, no-one can simply write down the question and option numbers and pass them on to others. A step further is to create a bank of questions from which the system selects the questions to display, which means that every learner will receive a different set of questions. Yes, this is a lot more work, but the chances of successful cheating will be much reduced.
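Both techniques – drawing from a bank and shuffling option order – can be sketched as follows. The data structures are illustrative assumptions; any authoring tool that supports question banks will have its own way of configuring this.

```python
import random

def build_quiz(question_bank, num_questions):
    """Draw a random subset of questions from the bank, then shuffle the
    option order within each question, so no two learners are likely to
    receive the same paper in the same order."""
    drawn = random.sample(question_bank, num_questions)  # random subset
    quiz = []
    for q in drawn:
        options = q["options"][:]      # copy, so the bank itself is untouched
        random.shuffle(options)        # randomise option order per question
        quiz.append({"text": q["text"], "options": options})
    return quiz

bank = [
    {"text": "Question 1", "options": ["A", "B", "C", "D"]},
    {"text": "Question 2", "options": ["A", "B", "C", "D"]},
    {"text": "Question 3", "options": ["A", "B", "C", "D"]},
]
quiz = build_quiz(bank, 2)  # each learner gets 2 of the 3 questions
```

A learner who writes down “question 2, option C” is now passing on useless information, because the next learner will see different questions in a different order.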
Assuming you are using a quiz as a form of assessment, then if you tell the learner whether they have got each question right or wrong, you are making it easy for them to pass the quiz on a second attempt, without necessarily curing any misunderstandings they may have had. To avoid this problem, you could create a completely different quiz for second attempts, or have the system draw questions from a bank, as described above.
At the end of the quiz, inform learners whether or not they have passed. If your software allows it, let them know how they performed against each of the topics addressed by the quiz. Pass or fail, provide advice to learners on what they should do next.
If the quiz is being used in a formative manner (to help the learner progress towards the learning objectives), rather than summative (to assess mastery), then it is vitally important that you provide helpful feedback for every question. Ideally this should be provided for each option of each question, rather than just for all correct answers and all incorrect answers. The purpose of this feedback is to correct errors and misunderstandings and to reinforce key learning points.
Another consideration is how you score correct answers. Most authoring tools will allow you to specify the number of points you will award to each correct answer. In a simple multiple-choice question, this is straightforward enough – you either allocate the same number of points to each question or award more points for particularly difficult questions.
The difficulty comes with questions that ask for multiple responses. The first consideration is whether these questions should score higher than MCQs because they are actually asking the learner for a series of decisions, not just one. Another issue is how you apportion the points across the various options. Let’s say there are five alternative options, three of which are correct. Ideally, each correct option will score 20% of the available points. But the learner should also be rewarded for not choosing incorrect options, so each option not chosen should also score 20% of the total. Whether you can achieve this with your authoring software remains to be seen!
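The apportioning described above amounts to scoring each option as a separate decision: a correct option chosen, or an incorrect option left unchosen, each earns an equal share of the points. A minimal sketch, assuming five options worth 20% each:

```python
def score_multi_response(selected, correct, all_options, max_points=100):
    """Award an equal share of the points for each correct decision:
    choosing a correct option, or leaving an incorrect one unselected."""
    per_decision = max_points / len(all_options)   # 5 options -> 20 points each
    right_decisions = sum(
        1 for opt in all_options
        if (opt in correct) == (opt in selected)   # decision matched the key
    )
    return per_decision * right_decisions

# Five options, three of them correct. The learner picks A and B (right)
# but also E (wrong), and misses C: four decisions, two of them mistaken.
score = score_multi_response(
    selected={"A", "B", "E"},
    correct={"A", "B", "C"},
    all_options=["A", "B", "C", "D", "E"],
)
# score is 60.0: A, B and the unchosen D each earn 20 points.
```

Whether your authoring software exposes this level of control is another matter, but it is worth checking before you assume multi-response questions are being scored fairly.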
That concludes this practical guide. A PDF version will be available shortly.
Next up: how to create reference material.