[eDebate] Tangential Thread: Reply to Branson II

Michael Antonucci antonucci23
Wed Mar 7 00:38:26 CST 2007


I'm going to agree with almost all of this; a few
elaborations might have some import, though.

Branson says: 

"I think I would say as a matter of clarification that
when I mentioned my distaste for enforcing
quality-control, I meant it along the lines of the
original Harrison post of 'penalizing' teams for
reading evidence from questionable sources like blogs
etc. I also meant it to indict Dallas's rhetoric of
needing a 'rule' to disbar evidence from certain
sources etc. I think that for judges to enforce rules
such as his is too interventionist and infinitely
regressive, as proven by Scott's and my responses to his
Gottlieb etc. examples."

My bad, I honestly wasn't reading some of the
preceding conversation carefully.  I agree.

Hard and fast rules are bad; some standards for
prioritization are good.  I currently laugh at cards
that contain sentence fragments or change words
(multilateralism -> multilat, nuclear -> nuc, etc.). 
Laughing at certain categories of sources doesn't seem
like that much of a stretch.

"Similarly, if somebody reads that Harrison card in
front of me, there is 0% chance I'm going to 'disallow
it' or 'penalize it' unless the debaters themselves
bring up the issue of author context/permission and
someone wins that author permission should be
considered a gateway question to inclusion (not
necessarily a tough argument even to win, but one that
I strongly believe must be initiated by the
debaters)."

Total agreement; Harrison's in.  Ironically, I found
out about this whole conversation because I was making
sure that a Lexington team had their Harrison card
before a round.  It wasn't relevant given the flip,
but they would have still read it.

"But for some reason some of what I'm thinking about
strikes me as too interventionist. Imagine that the
aff wins X% risk of their free trade advantage with
Copley nuclear winter as the impact, and the neg wins
a decently higher than X% risk of a DA with a
seemingly credible, scholarly, piece of evidence that
says their impact 'severely exacerbates conflict
pressures and escalation risks in Y region.'

In the policy community, legal community, or academy,
it seems that the neg wins. In a debate, absent strong
impact analysis from the neg, I'd say the aff gets at
least 90% of judges. I know I would probably vote aff.
Would I be comfortable staring down the aff, in the
absence of impact defense or explicit clowning on
Copley News Service staff writers' qualifications to
interpret the resultant geostrategic consequences of
breakdowns in international economic cooperation, and
saying 'sorry, but this evidence isn't qualified, even
though you have won what is basically a conceded
nuclear winter impact vs. amorphous unspecified
increase in conflict pressures. I vote neg.'? I don't
know, probably not."

[I snipped the Harrison example, because that ties
into my extraneous beliefs about poetic justice.]

"I guess I'm kind of wondering what everybody thinks
is the 'right' thing to do in those circumstances
described above? Is it ok to simply make your own
(even contrary to most debate norms) judgment of that
evidence quality, especially in a big debate? I don't
know that I would feel comfortable with that, even
though there is little doubt in my mind that a
long-term world in which people shifted away from
Harrison cards and Copley News Service cards
and towards 'better' evidence would be on-balance
beneficial."

I'd vote aff.  I don't vote aff, however, under the
assumption that I'm engaging in a "neutral" or
"non-interventionist" reading practice.  I vote aff
for the reasons you described.  I don't want to trip
people up by imposing radical new standards on the
activity without telegraphing them.  Predictability is
important. 

Essentially, I believe in "debate stare decisis." I
think predictability matters, but I don't need to
think that our precedents have some absolute value. If
they did, the activity would be utterly hidebound and
static.

I could telegraph changes.  If I were to do this - I
might, I haven't decided - I'd carefully think out and
describe my standards for the evaluation of evidence,
and post them somewhere for interested parties well in
advance.  I might stack qualifications considerably
higher than most debaters currently think they're
stacked. (Somewhere between "font size" and "font
choice.")

"1) Discouraging argument innovation?

Maybe we can go too far with the deification of
'qualified sources' as the end-all, be-all of evidence.
Just because you can't find some credible scholar
making the argument obviously doesn't mean it's not a
good argument. Would this discourage argument
innovation? Probably... unless debaters just got more
innovative at spinning the 'good' evidence they had,
relying instead on their own arguments to push the
application of that evidence in new directions.

I don't know. The direction of this impact is tough
for me: at one extreme you have stagnation, while on
the other you have a lot of the god awful arguments
that win exclusively b/c they're new."

I advanced a "modest proposal" along these lines a
number of years ago, at a workshop.  I proposed
allowing debaters to write their own evidence, because
an exploding proliferation of incredible claims would
force a closer examination of warrants.

It wasn't particularly well-received, although
blogging's kind of pushing us to that point without
any intervention on my part.

You might be right about innovation.  A couple of
possibilities:

a. The most innovative debaters already roll in strong
on those questions.  Teams that play videos and read
Weekly World News cards already have to overcome your
defaults - they may well benefit from more explicit
and self-conscious statements of defaults.

The current reification of those defaults simply makes
the unstated assumptions behind evidence evaluation
harder to overcome.  Recognizing the artificiality of
those conventions may actually help West Georgia-types.

b. Innovation doesn't have to shatter the frame.  Deep
and well-developed debates on points of genuine clash
in peer-reviewed literature may not feature innovation
quite as sexy as Ashtar, but it's still innovation.

"2) Risk comparisons become harder."

Is that really bad, though?

Harder could mean:

a. debaters won't actually bother doing it at all

or

b. they'll do it better

Tough call.

I think this might actually promote some innovation in
risk comparison.  I don't think that relearning the
infinite importance of extinction is really teaching
anyone anything other than shoddy math.
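
To spell out the arithmetic I'm calling shoddy (a rough
illustration of my own, not anything from an actual
round): once extinction is treated as an effectively
infinite magnitude, expected-impact weighing collapses,
since for any nonzero risk p and any finite competing
impact of magnitude M at probability q,

  p \times \infty = \infty > q \times M

so the "comparison" is settled before anyone says a word
about probability or evidence quality.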

I don't know.  I don't have any great answers for the
questions posed on this thread, but I do think it's
important to frame some of the questions more precisely.


 