Just read an article in our local newspaper that indicated the state of Pennsylvania spent $30 million last year on testing. It kinda made me sick… especially since fill-in-the-blank testing only really has value for measuring a person’s ability to succeed at fill-in-the-blank tests; it has no real-world application.
Games would be so much better at assessing the ability to apply knowledge. So here are some articles and ideas about using games to assess the application of skills and to measure levels of learning.
Here is an interesting article titled Proof of Learning: Assessment in Serious Games. The article outlines some good ideas for integrating assessments into games and why simply sticking a multiple choice question into an otherwise “cool” game doesn’t make it an educational game or even fun.
Because computers can quickly and accurately grade multiple choice questions [MCQs], those types of questions have become the foundation of almost all modern testing. This makes MCQs the obvious first choice, and often the easiest choice, for assessment in serious games.
MCQs are not always the best choice, though. While MCQs can accurately gauge memorization and retention of a set of facts, they are hardly the best way to gauge whether the student is following a process correctly. This is a notable shortcoming because some disciplines, such as advanced math, are more about the processes used to reach the answer and less about the answer itself. Multiple choice math tests can only provide a list of possible answers and have no easy mechanism for determining whether the student figured the answer out properly or merely guessed well.
Instead of multiple choice, here are a few alternatives:
- Completion Assessment – Did the player complete the level in the game? How long did it take? How many attempts? Was it via the preferred path?
- In-Process Assessment – How did the player choose their actions? Did they change their mind? If so, at what point? What was the motivation behind the choice? Was it congruent with other decisions, or did it appear to be a guess?
- Point Assessment – Did the player score enough points to be considered successful? Did they score them in the right areas?
These are just a few ways in which assessments can be reconsidered so that we can break out of the multiple choice trap many e-learning modules seem to have fallen into. These suggestions also provide a way of creating authentic assessments in the context of educational games.
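To make the three alternatives above concrete, here is a minimal sketch of how a game might log the raw play data behind each one. All class names, field names, and thresholds are illustrative assumptions, not something from the article.

```python
from dataclasses import dataclass, field

@dataclass
class LevelAttempt:
    """One playthrough of a level. Field names are hypothetical."""
    completed: bool                # Completion: did the player finish?
    seconds: float                 # Completion: how long did it take?
    path: list                     # Completion: route taken through the level
    choices: list                  # In-process: each decision, in order
    reversals: int                 # In-process: times the player changed course
    points_by_area: dict = field(default_factory=dict)  # Point: where points were earned

def summarize(attempts, preferred_path, passing_score):
    """Roll raw play data up into the three assessment views."""
    completed = [a for a in attempts if a.completed]
    return {
        "completion": {
            "attempts": len(attempts),
            "succeeded": len(completed),
            "took_preferred_path": any(a.path == preferred_path for a in completed),
        },
        "in_process": {
            "avg_reversals": sum(a.reversals for a in attempts) / len(attempts),
        },
        "points": {
            "passed": any(sum(a.points_by_area.values()) >= passing_score
                          for a in attempts),
        },
    }
```

The point of the sketch is that none of these signals require a quiz screen: they fall out of play data the game is already generating.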
Here is another interesting piece about embedded assessments in games: Melding the Power of Serious Games and Embedded Assessment to Monitor and Foster Learning: Flow and Grow.
The article states that “a game that includes stealth assessment must elicit behavior that bears evidence about key skills and knowledge, and it must additionally provide principled interpretations of that evidence in terms that suit the purpose of the assessment.” In other words, the behaviors you want to measure must be exhibited in the game in order to succeed at it. A practical application, you ask?
How about The Cheesecake Factory, which created a mobile game in the form of an app that teaches servers how to make a hamburger, or “glamburger” as they call it.
Here is the description from the article “Gamifying” Training [With The Cheesecake Factory]:
When the company teaches employees about building glamburgers, it does not ask them to memorize the ingredients — employees actually “build the burger” on their iPhones. The game shows a hamburger bun and various parts (e.g., lettuce, meat, condiments) fall from the sky; the employee has to catch them in the right order to get points. Then another burger comes and another one — increasing in speed (and in point value) as the game progresses.
So no separate assessment is needed: if players get to the highest level in the game, they know the proper way to make a glamburger. They have to, or they couldn’t reach that level. The assessment, therefore, is the level achieved by the player.
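The “level achieved is the assessment” idea can be sketched in a few lines. This is a hypothetical reconstruction of the game logic described above, not The Cheesecake Factory’s actual code; the ingredient order and point values are assumptions for illustration.

```python
# Assumed build order for the burger; the real game's recipe may differ.
CORRECT_ORDER = ["bottom bun", "patty", "lettuce", "condiments", "top bun"]

def play_round(caught, level):
    """Return (points, new_level) for one burger.

    Points scale with level, as described in the article, and the player
    only advances on a correct build -- so advancing IS the assessment.
    """
    if caught == CORRECT_ORDER:
        points = 10 * level       # higher levels are worth more points
        return points, level + 1  # correct order: advance to the next level
    return 0, level               # wrong order: no points, no advancement
```

Because a wrong catch order never advances the level, a player’s current level is itself a record of how many correct builds they have demonstrated.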
It is important to think about games as assessments and to move away from the unproductive approach of treating multiple choice questions as the only measurement of student or trainee learning.