[eDebate] Tangential Thread: Reply to Hall, etc.

Michael Antonucci antonucci23
Tue Mar 6 23:51:20 CST 2007


I'll reply to Branson, with whom I agree about 99.9%
on these issues, in a second.

The major issue at stake in many of these responses
seems to be "interventionism."  I think the ringing
endorsements of illusions of neutrality may miss my
point.  

Evidence reading is necessarily an intervention. 
"Should I intervene?" simply isn't a live question. 
You do intervene when you read cards.  You can't help
it.  It's just a question of how you choose to do so.

The reading techniques that we currently consider
"neutral" are anything but.  They are artificial,
stylized, and frankly a little bit weird.  We value
claims, or predictions, over warrants.  We even favor
specific phrases and tropes: clean binaries,
simple spatial analogies, and some fairly violent
imagery generate a warm reception, for pretty
subjective reasons.

When reading evidence, judges generally apply a series
of tests.  How current is the evidence?  Does it make
a strong claim?  Does it have a warrant?  Does it
appear to respond to the opponent's claim?  Does it
have totally sweet violent imagery?  Does it have
supersweet 24 point font?  Is that font Copperplate,
which is clearly the best font ever?

Judges must make their own determination of how to
apply these standards, or whether they should apply
them at all.

*Please note: This is not a normative argument.  It is
purely descriptive.  I am not describing the world as
it should be.  I'm describing the world as it is. 
Intervention during evidence comparison isn't "good"
or "bad" - it's a structural necessity.*

I don't want to get needlessly abstract or
theoretical, but I think this is a fabulous example of
"reification."  Extremely artificial practices seem
neutral, objective, natural and predetermined simply
by virtue of their place in an ideological edifice. 
That isn't a "critique" per se.  The way we read
evidence right now might be totally sweet, but the
idea that we're stuck with a given set of reading
techniques seems kinda shady.  Valorizing predictive
claims over warrants or qualifications isn't any more
"objective" or "neutral" than the reverse.

We would all agree that debaters should be able to
modify our default reading techniques over the course
of a round.  That doesn't really answer the questions
raised here, though, because good judges in close
rounds will still have to resort to some of their
defaults.  Fully unpacking the reading techniques that
inform any decision in a close round would take more
than six minutes.

We can shift a good deal of the onus for debating the
quality of evidence onto debaters, but it's simply
impossible to build the full scaffolding for every
meta-standard in a 2NR.

Brad's post:

"I think the best solution is that debaters (for the
few who bother to read this) should be more emphatic
about comparing evidence and making logical arguments.
Judges are extremely reluctant to impose their own
standards on debates (and rightly so), but debaters
have free reign to attack the quality of their
opponents evidence."

I'd have to disagree with the assertion that "judges
are extremely reluctant to impose their own standards
on debates."  I think that's a bit of a fallacy when it
comes to reading evidence.  Debate judges favor a
particular style of rhetoric.  They need to weigh the
relative value of warrants against claims, and determine
what constitutes a "good" warrant.

I think what's *really* going on is that debate judges
generally default to community standards when reading
evidence.  They impose standards, but they would
prefer not to impose highly idiosyncratic standards,
for the sake of predictability.  I know that I fall
into this category, and that appears to be Branson's
self-description.  This stance makes good sense; I'll
get to it in a sec.

Often, different standards in evidence evaluation get
played out in terms of efficiency.  You can win a
generally accepted meta-standard (dates matter in
politics debates, strong prescriptive claims are
crucial, etc.) much more *QUICKLY* than you can win a
more controversial meta-standard.  This makes an
enormous difference in a time-pressured rebuttal.  You
might be able to win either claim in an untimed
vacuum, of course, but the need to invest an extra
paragraph matters.

Brad continues:

"From a judging perspective, only a few possible
options appear:
a) intervene (the community consensus seems to agree
this is an undesirable option)"

Wait...what?  I think the community train that you
describe just left a bunch of us at the station, Brad.

"Intervention" is a dirty word that really obscures
this issue.

The accepted reading techniques appear neutral.  They
are not.  Reading - interpretation of a card -
necessarily intervenes.  Absent dropped arguments,
there's no "objective" standard for resolving the
quality of two contrasting pieces of evidence.

Judges should be fair.  They should be predictable. 
There's no "objective" standard for resolving a
debate, though.  David Heidt is a decent example. 
He's a highly preferred judge; almost all debaters and
coaches concur that his method is predictable and
fair.  I believe it's accurate to say that it's
largely derived from community consensus, though, not
some kind of mathematical debate proof.

"d) begin to enforce this quality control on your own
teams. I am not accusing NU or Michigan of having bad
cards, but for everyone out there reading these posts
and agreeing with the general decline of quality
evidence and arguments in debate, take a look at the
files your own teams turn out and ask yourself if you
are part of the problem. I try my best to do this,
although I don't speak Korean so I can't read any of
Seungwon or Doowon's files."

Debate coaches serve a number of functions, but let's
not kid ourselves.  Anyone who really enjoys the game
knows it's more fun when you play to win.

I'm not going to push debaters I work with to read
full quals for some sort of moral victory. 
Qualifications will become important when judges stack
qualification questions higher in their queue for
evaluating evidence. Coaches really need help from
judges to impart lessons about evidence quality.

I so want to take the Won-joke bait, but I won't. 


 