What if point systems for tests were different?

You are not the first person to think about this. Similar schemes are used in forecasting, and your idea closely resembles the Brier score.

In forecasting, the goal is to predict future events as accurately as possible. Forecasters express their certainty as probabilities (e.g., the probability of rain tomorrow is somewhere between 0% and 100%).

In the short term, the Brier score penalizes confident wrong answers (e.g., if you say there is a 95% chance of rain and it does not rain, your score is worse than if you had said there was a 55% chance). Over the long term, it penalizes both inaccuracy and underconfidence (e.g., if I consistently make correct predictions at 55% confidence, I will score worse than if I make the same correct predictions at 75% confidence). Note that the Brier score is an error measure, so lower is better.
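To make this concrete, here is a minimal sketch of the (two-outcome) Brier score; the function name and example probabilities are my own, not from any particular library:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    Lower is better: 0 is a perfect forecast, 1 is the worst possible.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Confident but wrong: 95% chance of rain, and it doesn't rain.
print(brier_score([0.95], [0]))  # ~0.9025 -- heavily penalized
# Hedged and wrong: 55% chance of rain, and it doesn't rain.
print(brier_score([0.55], [0]))  # ~0.3025 -- penalized less
# Correct but underconfident vs. correct and more confident:
print(brier_score([0.55], [1]))  # ~0.2025
print(brier_score([0.75], [1]))  # ~0.0625 -- better
```

The two pairs of calls illustrate both effects described above: hedging softens the penalty for being wrong, but over many correct predictions it also drags your score down relative to a more confident forecaster.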

The Good Judgment Project uses Brier scores to evaluate people's forecasting ability and would be a good place for you to start. That said, I don't know how your proposed idea would work on exams unless you wanted your students to study game theory, forecasting, or prediction rather than your subject material. In general, many if not most education experts think multiple-choice questions are a poor way to assess learning (e.g., for some of the reasons described in this article).


One major drawback is that selecting every answer guarantees points without showing that the test-taker knows anything. It also doesn't distinguish between a student who selects the two most plausible answers out of five and one who selects the correct answer along with the most incorrect one.

A common scheme for this sort of thing (given 5-answer questions) is to award 1 point for a correct answer and deduct 1/4 of a point for an incorrect one. A student who guesses randomly then earns 0 points on average, whereas without the penalty they would earn 20% of the points.
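The expected value behind that scheme can be checked in a few lines; this is a quick sketch with hypothetical names, not part of any standard scoring package:

```python
def expected_guess_score(n_choices, penalty):
    """Expected points from a uniformly random guess:
    P(correct) * 1 + P(wrong) * (-penalty)."""
    p_correct = 1 / n_choices
    return p_correct * 1 + (1 - p_correct) * (-penalty)

# With 5 choices and a 1/4-point penalty, guessing nets nothing:
print(expected_guess_score(5, 0.25))  # 0.0
# Without the penalty, random guessing still earns 20% of the points:
print(expected_guess_score(5, 0.0))   # 0.2
```

More generally, a penalty of 1/(k-1) on k-choice questions makes the expected value of a blind guess exactly zero, which is why 5-choice tests use 1/4.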

The SAT did this when I took it, and I occasionally use it when writing tests for a high school science competition, typically when the material is fairly basic.