[eDebate] 87 Average?

A Numbers Game edebate
Sun Oct 11 23:36:41 CDT 2009

Longer response with charts at:


Shorter inline response, with additional notes, below:

On Tue, Oct 6, 2009 at 1:35 PM, Brian DeLong <bdelo77 at gmail.com> wrote:
> Clearly the results from Kentucky show a large discrepancy between
> pockets of judges in how they are interpreting the 100 point scale.

By at least one measure, there is quite a bit more judge variance this
year than at Kentucky tournaments in years past. On a scale where 1.0
would indicate that all judges used an equal point scale, the Kentucky
tournament jumped from 1.54 to 2.30 in the last year. (For details,
see the link above.)

> Without some point of consistent measurement to work
> off of we're going to continue to see some fairly decent judges being
> reduced on the pref sheets.

Will Repko made a similar point in 2007, noting that Aaron Hardy
gives lower points overall but is still preferred. Repko suggested a
lifetime judge-variance system to remove the incentive to pref judges
simply because they tend to give high points:

http://www.mail-archive.com/edebate at www.ndtceda.com/msg03271.html


> Maybe it would be wise for us to vote on scales of measurement to set
> a norm for this community.  We have the ability to set up an informal
> or formal voting system.

If we match up the point distributions from Kentucky in 2008 and 2009,
the pool engaged in a vote, in some sense. Matching percentiles:

26.0 is 72    (at ~0.7%)
26.5 is 75    (at ~2.3%)
27.0 is 80    (at ~10.5%)
27.5 is 84.5    (at ~29.5%)
28.0 is 88    (at ~59.7%)
28.5 is 91    (at ~84.7%)
29.0 is 94    (at ~96.3%)
29.5 is 98    (at ~99.8%)

This scale is much higher than the scale suggested below or the one
suggested by Hester; either of those scales would be serious point
deflation compared to Kentucky this year. (I'm not suggesting that's a
bad thing. For charts, see the link at the top of this message.)

> With that said, I am on board with voting for a point system that
> looks like this:
> 30-29.6 = 100-96
> 29.5-29.0 = 95-90
> 28.9-28.5 = 89-85
> 28.4-28.0 = 84-79
> 27.9-27.0 = 78-72
> 26.9-26.0 = 71-60
> Thoughts?

This is pretty close to a very easy-to-remember system: subtract 20,
then multiply by 10. 26.5 becomes 65, 28 becomes 80, etc.

> To respond to number's games observations, As Ross Smith once claimed,
> the most recent scientific data indicates that we naturally cluster
> numbers to help us simplify complex information. 5 and 10 clustering
> is only inevitable.

I googled around for this, but I couldn't find the right keywords.
(Not that I doubt it; I just wanted to read more.)

When USC mentioned 2_.3 and 2_.8 as benchmarks in their 30-by-0.1
scale, it produced clustering around those values. Perhaps a consensus
around a scale like the one you suggest above, with breaks at 71/72,
78/79, 84/85, 89/90, and 95/96, will produce a different set of
cluster points.

Maybe the fact that there's no clustering around whole-point values in
the 30-by-0.5 scale is itself a sign that a scale change is needed, if
the other signs aren't convincing.
