[eDebate] rulebreaking doesn't win a lot of debates and there will
Thu Jun 21 10:18:01 CDT 2007
While I've been content to observe most of this discussion with polite amusement, I think it is important to address Scott's observation that MPJ has significant impact on how the whole topical vs. anti-topical, policy vs. K, traditional vs. performance etc. divide works itself out.
Scott appears to make a couple of claims. 1) MPJ makes it possible for teams to debate in their own cocoons in prelims without the concern that judges will punish their choice of strategies. 2) In elims, "conflict of civilization" debates are determined by the ideological predilections of whatever type of judges form the ideological majority on the panel. The suggestion was that some arbitrary tweak in the Larson algorithm could/would have changed the outcome of debates at the NDT.
Quite apart from the fact that the NDT does not use the "Larson algorithm" and, in fact, does not rely solely on automated machine placement of judges at all, both of Scott's claims deserve comment.
Regarding prelims, it is true on its face that in rounds between two policy teams, two K teams, or two performance teams, the likelihood of drawing a judge sympathetic to their shared approach creates a cozier dynamic than if they were randomly assigned a critic who might not share it. But the real test is what happens when "clash of civilization" debates occur in prelims, which is perhaps just as likely as in elims. In back-channels I get some interesting observations about how MPJ presumably impacts this situation. Some teams with statistically less-chosen strategies complain that MPJ permits mainstream teams to freeze the most potentially sympathetic judges out of the debate by ranking them low. Oddly enough, mainstream teams argue just as passionately that MPJ gives alternative perspectives TOO MUCH latitude or protection, because those debates end up judged by the judges in the middle rather than by the more statistically prominent mainstream judges.
I actually suspect that MPJ has given at least some psychological comfort to teams that for whatever reason don't identify with the mainstream. That might be based on the choice of argumentation strategy, lack of program prestige, ethnicity, etc.
But the key observation - directly in conflict with Scott's prediction - is that MPJ differences appear to have NO statistically significant impact on the outcome of the debate, at least within the degree of difference permitted at tournaments using either Edwards' or my methodologies. (Editorial note - it would be a fascinating study to collect preference data at a tournament and then assign judges randomly to see if that would have any impact on outcomes :-).)

When I attempt to statistically predict the outcome of any individual prelim (or ELIM) debate, only ONE variable explains almost all of the variance. When all factors are controlled in a multiple regression, the winner of any individual debate is predicted by which team has the better aggregate record in the other seven prelim debates (and within that, POINTS actually proves a slightly better predictor than RECORD in those debates where the two would make different predictions). Although we worry a lot about side assignment, it has rather low predictive power. MPJ differences not only fail to show statistically significant predictive power but, somewhat ironically, often show a very slight NEGATIVE correlation with outcome: all things considered, the team that prefers the judge (or the panel) LESS is very slightly MORE likely to win the debate.
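To make the shape of that claim concrete, here is a small illustrative sketch - not the author's actual analysis, and run on purely synthetic data - of what "MPJ difference has no predictive power while relative skill has a lot" would look like. The variable names, the simulated effect sizes, and the use of a simple correlation rather than the full multiple regression are all my assumptions for illustration only.

```python
# Hypothetical sketch on synthetic data (NOT the real NDT data): simulate
# debates where winning depends only on a skill difference, while the
# teams' difference in judge preference is unrelated noise, then compare
# how strongly each variable correlates with the outcome.
import math
import random
import statistics

random.seed(1)  # reproducible synthetic data


def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


skill_diff, pref_diff, outcome = [], [], []
for _ in range(2000):
    s = random.gauss(0, 1)  # team A skill minus team B skill (stand-in for record/points)
    p = random.gauss(0, 1)  # team A's preference for the judge minus team B's
    # By construction, the win probability depends only on skill:
    win = 1 if random.random() < 1 / (1 + math.exp(-2 * s)) else 0
    skill_diff.append(s)
    pref_diff.append(p)
    outcome.append(win)

r_skill = corr(skill_diff, outcome)
r_pref = corr(pref_diff, outcome)
print(f"corr(skill difference, win) = {r_skill:.2f}")
print(f"corr(pref difference, win)  = {r_pref:.2f}")
```

Under these assumptions the skill correlation comes out large and the preference correlation hovers near zero, which is the qualitative pattern the paragraph above describes; the slight negative preference correlation reported for the real data would show up here only as sampling noise.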
All in all, this is a novel concept. At the end of the day debates are likely to be won by the BETTER team (as measured by performance in all rounds at the tournament).
During my vacation this summer, I will be performing a number of analyses on the data from this year's NDT that will rigorously test the claims that have been made about MPJ impact on outcomes. I suspect that it will tell a similar story.