You’ll often run into situations in debate where a basic grasp of statistics, and a willingness to do the math, is what you need to argue effectively. So here’s a quick overview of statistics and some helpful thoughts on doing the math.
In debate, you’ll often come across inferential statistics: using sample data to draw conclusions about a population. The “population” is the entire group of people or objects that you want to learn something about. The “sample” is a smaller sub-group of the population that you actually gather data from, for convenience’s sake.
What does that mean?
Here’s an example. If I wanted to find out what every debater in a certain debate league thinks about a certain issue, that’d be a difficult task. How do you survey thousands of people without missing any? Instead, I might survey a few debate clubs from that league and extrapolate from their answers what everyone might think. This smaller group is the sample; the entire group is the population. Get it? Good.
1. (Myth) Sample size is the biggest factor in the accuracy of statistics.
This may seem intuitive, but it’s not quite true. In fact, the most important detail in a statistic is how the sample was collected, not how big the sample is. The short version of the complicated reason: thanks to the central limit theorem, once a random sample gets bigger than about 30, its average will predict the population average fairly accurately.
However, if the sample is collected from a group that isn’t representative of the population, no amount of size fixes the problem. If I wanted to see what percent of NCFCA debaters have gotten first place speaker, surveying members of Potent Speaking would be inaccurate, because the type of people who seek out this info are more likely to get high speaker awards.
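To see why how the sample is drawn matters more than how big it is, here’s a minimal simulation sketch. The population size, the subgroup, and every percentage in it are made up purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Hypothetical population: 100,000 debaters, 5% of whom have won
# a first-place speaker award (1 = has won, 0 = hasn't).
population = [1] * 5_000 + [0] * 95_000
random.shuffle(population)

# A small but truly random sample tracks the real 5% rate reasonably well.
small_random = random.sample(population, 100)
print(sum(small_random) / len(small_random))  # somewhere near 0.05

# A much larger sample drawn from a biased subgroup (imagine a pool where
# 40% have won) misses badly no matter how big it is.
biased_pool = [1] * 4_000 + [0] * 6_000
large_biased = random.sample(biased_pool, 5_000)
print(sum(large_biased) / len(large_biased))  # near 0.40, nowhere near 0.05
```

The biased sample is fifty times larger than the random one and still gives a far worse answer, which is the whole point of the myth above.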
2. (Myth) A 5% change in something is significant.
Some people, wanting to quantify their significance topicality argument, have said that in the scientific field, a 5% change is considered significant, so any case that changes less than 5% of the system is insignificant.
That 5% number comes from statistics. And it’s being misused. 5% is a frequently used alpha level, also called significance level, and it’s used in hypothesis testing. If you have no idea what I’m talking about, that’s fine. It basically means this: with a 5% significance level, you only accept a hypothesis when there’s less than a 5% chance you’d see your data by random luck alone. With a 10% significance level, the cutoff is 10%. 5% happens to be the most commonly used significance level.
Feel free to look hypothesis testing up in order to get a more accurate (and lengthy) explanation.
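If you’d rather see the mechanics than the jargon, here’s a rough sketch of a hypothesis test in Python, using a normal approximation for a survey proportion. The survey numbers (60 “yes” out of 100) are invented for illustration:

```python
import math

def two_sided_p_value(successes: int, n: int, p_null: float = 0.5) -> float:
    """Approximate two-sided p-value for a proportion, using a normal
    approximation to the binomial (reasonable once n is bigger than ~30)."""
    p_hat = successes / n
    se = math.sqrt(p_null * (1 - p_null) / n)  # standard error under the null
    z = (p_hat - p_null) / se                  # how many SEs away from the null
    # Normal tail probability via the error function, doubled for two sides.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Say 60 of 100 surveyed debaters agree with some proposition. Is that
# meaningfully different from a 50/50 split, or just random luck?
p = two_sided_p_value(60, 100)
alpha = 0.05  # the 5% significance level the article mentions
print(f"p-value = {p:.3f}; significant at the 5% level? {p < alpha}")
```

Here the p-value works out to roughly 0.046, which squeaks under the 5% cutoff, so a statistician would call the result “significant.” Notice that nothing in this procedure says anything about whether a policy changes 5% of a system.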
I would say that a case that changes less than 5% of something is probably insignificant, but don’t say that “5% significance is used in science” because that really doesn’t make sense.
3. (Myth) Statistics can be trusted.
Especially when it comes to opinion surveys.
Wikipedia’s article on the misuse of statistics lists roughly a dozen different ways to misuse statistics.
It’s important to look for who commissioned the data and who collected the data. Who paid for the study? The bias of these people will almost certainly come through in the results.
An example of a commonly used statistic that is false is the claim that “97% of scientists believe that global warming is happening and is caused primarily by humans.” This statistic was gathered from a survey of over 3,000 scientists—but only 79 were considered in this statistic, and they were specifically chosen by the person who came up with it. As plain as this fraud is, the statistic has been quoted by government officials, including President Obama himself. (I’m sure they were unaware of the fraud, but it shows that who repeats a statistic tells you nothing about whether it’s true.)
The way the question is asked is extremely important. Consider the difference between these two: “Do you believe we should spend $1,000,000 to fix our roads?” vs. “Do you believe our roads should be fixed?” One question actually includes the downside to the proposition, and is much more likely to get a “no” answer.
Once again, the person collecting the data can easily skew results using a variety of hidden tricks.
1. When attacking your opponent’s statistics, go after the source of the sample, not necessarily the sample size.
2. If the sample size is ridiculously low compared to the population being evaluated (e.g. 100 people to determine the entire United States’ opinion), then it’s worth attacking that. A big sample size does help to reduce bias, because it’s harder to choose 4,000 biased people than 100.
3. Make sure the questions that were asked were fair and didn’t make certain types of answers more likely by their very nature.
A couple of caveats. First, don’t attack every statistic. Only attack those that you think are likely false (they sound wrong) and are important to the debate round. Don’t be unreasonable in your argumentation.
Second, keep in mind that most people won’t have the methodology with them. That’s fine. Depending on the case, you may want to spend some time telling the judge how easily this statistic could be twisted since we don’t know the methodology.
Doing the Math
It is often useful to do some math in a debate round in order to make a point.
- Explaining why your budget proposal is enough to get stuff done.
- Explaining why your budget proposal is not that much money.
- Arguing that the affirmative budget is too small or too big.
- Figuring out if the case is insignificant/significant.
- Finding out what percent of the problem will actually be fixed by their plan.
I’m going to provide some links and tips for doing the math, some of which will be specific to the NCFCA 2015-2016 resolution.
Before I do, a couple of things to keep in mind.
- Always ask the affirmative team for their statistics and numbers. What can they prove?
- Bring a calculator that can’t access the internet in your debate box (e.g. an actual calculator, not a smartphone). That way you can do quick number checks.
Government spending per minute/hour/day/etc.
- $6.85 million spent by the federal government each minute. MINUTE!
- $54.8 million spent in a constructive speech (8 minutes). If the budget is less than that, you can say, “This money will be spent faster than I can finish this speech.”
- $411 million spent each hour.
- $9.86 billion spent each day.
- $69 billion spent each week.
- $3.6 trillion spent each year.
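All of these figures follow from the $3.6 trillion annual total by straight division, and you can sanity-check them in a few lines (swap in the current year’s budget figure as needed):

```python
# Sanity-checking the per-period spending figures from a $3.6 trillion
# annual federal budget (the figure used above).
annual = 3.6e12

per_day = annual / 365       # roughly $9.86 billion per day
per_hour = per_day / 24      # roughly $411 million per hour
per_minute = per_hour / 60   # roughly $6.85 million per minute
per_speech = per_minute * 8  # roughly $54.8 million per 8-minute constructive
per_week = annual / 52       # roughly $69.2 billion per week

print(f"${per_minute / 1e6:.2f} million spent per minute")
```

Redoing this division in-round with the Affirmative’s own budget number is exactly the kind of quick check the calculator tip above is for.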
Plea bargain statistics
97% of federal cases are disposed of by plea bargaining/guilty pleas.
This statistic is useful for plea bargaining discussions, of course, but it can also be useful for significance Topicality. If the Affirmative team changes some part of the trial process, you could point out that trials make up only 3% of the federal court system. (I do not advocate running this significance Topicality against every such case, only against the ones that don’t significantly change the trial process either; that way you can stack the 3% figure with another significance argument.)
Interesting infographic on litigation in general: http://abovethelaw.com/uploads/2012/07/WethePlaintiffs2.jpg
The tort liability price tag for small businesses in 2008 was $105.4 billion.
That number is interpreted as an injustice by the source, but you can use it however you want.
On the other side of the argument, though: “The Rand Institute for Civil Justice, one of the most respected think tanks in the nation, found that only 10 percent of injured people seek compensation and only 2 percent of them file lawsuits. The Rand Institute also found that since 1991, tort cases reflected only six percent of all cases filed.”
The above source also has rebuttals to three common frivolous lawsuit stories.
Don’t be afraid of statistics and numbers; they can be your ally when used correctly.
Don’t miss future posts like this one: subscribe to my email list below!