How ski jumping gets Olympic judging right (and figure skating gets it wrong)
"Both ski jumping and figure skating have nationalistic judging biases, where judges give higher scores to athletes from their countries. But the sports take very different approaches to dealing with this. Ski jumping has its international federation select the judges for competitions like the Olympics, and I find that they select the least biased judges. Figure skating lets its national federations select the judges, and my research showed that they select the most biased judges.
This creates different incentives for judges. Ski jumping judges display less nationalism in lower-level competitions — it appears they keep their nationalism under wraps in less important contests to avoid missing their chance at judging the Olympics. Figure skating judges are actually more biased in the lesser contests; they may be more biased than they would like to be due to pressure from their federations."
"The dysfunctionality of the sport is also revealed by how it reacted to the 2002 judging scandal. The International Skating Union made a couple of sensible reforms, such as increasing the size of the judging panel (at least temporarily) and making the scoring system more objective (although some think they went too far). But most of their response consisted of hiding the evidence of bias. The ISU stopped revealing which judge gave which score, making it much harder for competitors and fans to see whether the judging was fair. The ISU even went back and altered online score sheets from earlier competitions, obfuscating which judge gave which score and even which country each judge represented. They also began randomly dropping the scores of three out of 12 judges. As any statistician can tell you, an average of nine out of 12 scores is essentially the average of the 12 scores plus a random number. When Yale statistics professor Jay Emerson noticed that in one case this randomness had altered who won a medal, it appears that the ISU responded by scrambling the order in which scores were reported on score sheets. The only plausible purpose of this change was to make it harder to identify cases where randomness had affected results. All this suggests that the focus has been on hiding problems rather than fixing them."
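The statistician's point about the 9-of-12 rule is easy to demonstrate by simulation. The sketch below uses made-up judge scores for two hypothetical skaters (not real competition data) and counts how often randomly dropping three of twelve scores reverses the full-panel ordering of two close competitors:

```python
import random
import statistics

random.seed(0)

def panel_average(scores, n_drop=3):
    """Average after randomly dropping n_drop scores,
    as the post-2002 ISU procedure did (9 of 12 counted)."""
    kept = random.sample(scores, len(scores) - n_drop)
    return statistics.mean(kept)

# Hypothetical scores for two skaters whose full-panel averages
# are very close (illustrative numbers, not real data).
skater_a = [5.8, 5.7, 5.9, 5.8, 5.6, 5.9, 5.7, 5.8, 5.8, 5.7, 5.9, 5.8]
skater_b = [5.9, 5.6, 5.8, 5.9, 5.7, 5.8, 5.8, 5.7, 5.9, 5.6, 5.9, 5.7]

full_a = statistics.mean(skater_a)
full_b = statistics.mean(skater_b)

# Repeat the random drop many times and count how often the
# 9-of-12 averages reverse the full-panel ordering.
trials = 10_000
flips = sum(
    (panel_average(skater_a) > panel_average(skater_b)) != (full_a > full_b)
    for _ in range(trials)
)

print(f"full-panel averages: A={full_a:.4f}, B={full_b:.4f}")
print(f"ordering reversed in {flips / trials:.1%} of random draws")
```

When the full-panel averages are close, the "plus a random number" term from the dropped scores is large enough to decide the ordering on some draws — which is exactly how randomness can alter who wins a medal.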