Type I and Type II errors, the left, and the right. A perspective from engineering.
"It is better that ten guilty persons escape than that one innocent suffer." (William Blackstone)
Connor Wood at Patheos just wrote an interesting example of what is becoming a set piece in journalism: explaining the difference between the right and the left as one of fundamental psychology. Attempts to do this are legion, and each focuses on a different aspect of the divide (language! finality! squeamishness! etc.); this may be a case of the blind bats and the elephant.
But if one more bat can chime in here, the field of systems theory offers a perspective that I think may be a useful one on the difference between the right and the left (and I'm deliberately using those terms rather than "liberal" and "conservative"). That perspective involves their preference for Type I or Type II errors.
Let me take you back to the nightmare that was my Stochastic Processes, Detection, and Estimation class in grad school for a second and tell you the story the professor used to explain the difference (though already I'm tipping my hand as a leftie by arguing from model rather than anecdote -- it's not that scientists are liberal, but that liberals become scientists). Imagine you're the radar operator during the Battle of Britain in World War II. The Luftwaffe is sending bombers over at unexpected times, and your job is to decide when to scramble the fighters to stop them.
But it's not as simple as in the movies. A blip could be a bomber, or it could be a flock of geese. Or it could be an error in the radar itself. Some blips you can be nearly certain are bombers, some you can be nearly certain aren't, but most fall into some spectrum in the middle. Let's even say that you've been keeping track of things over time, and you can actually estimate the likelihood that a given kind of blip is a bomber or not. There are a few 99%s and a few 1%s, but most of the blips fall into the 40% to 60% range. You see a blip that weighs in at 45% (that is, you are 45% confident that this blip represents a bomber -- it's called a "confidence percentage"); what should you do?
There are two ways you can screw up here: you can scramble fighters when there is no bomber, and you can fail to scramble fighters when there is a bomber. Both have costs associated with them. Scrambling fighters when there isn't a bomber wastes fuel and flight time, which are finite resources, and leaves you more vulnerable to a real bombing raid. This is called a false positive, or Type I error. Failing to scramble fighters when there is a bomber means the bomber gets through and blows up London. This is called a false negative, or Type II error.
This has implications far beyond air mission control, of course; that's just the situation it was first developed for. Consider cancer tests. A Type I error would be a test telling someone without cancer that he had it; this would require unnecessary treatment and waste money, medical staff's time, and equipment, all of which are finite. A Type II error would be a test that told someone with cancer that he didn't have it; this would mean the cancer goes untreated. The cost there is obvious. (Notice that in both of the cases I mentioned, the types of cost vary greatly in quality rather than in ways that are easy to quantify; this is how it generally works.)
My thought is that in general the left fears Type I errors and the right fears Type II errors. How does this apply to politics? Well, take the Blackstone quote I started with. A Type I error in criminal justice is putting an innocent person in jail (false positive on guilt). A Type II error is letting a guilty person go free (a false negative on guilt). The harms of both of these are obviously difficult to compare, but Blackstone actually gave a ratio of 10 to 1: he would accept 10 false negatives if that would prevent 1 false positive. Without any data to back this up, it does (at least to me) seem intuitively right that the left is more worried about jailing the innocent and the right is more worried about not jailing the guilty. (There's no need to be doctrinaire here: neither side actually approves of either situation happening, but see below for why this preference becomes important.)
That number 10 seems arbitrary, but the first question for a lot of people is "why trade off in the first place? Just make sure criminals go to jail and innocent people don't." Unfortunately it's not that simple, and here's where systems theory rears its ugly head. I won't go into the math (mostly because I have no idea how to post an integral sign), but even for an optimal detector, if there's any uncertainty (and there always is), there always has to be a tradeoff. A detector more capable of avoiding Type I errors is by that very fact less capable of avoiding Type II errors. In a way it makes sense: ultimately the radar operator is going to have to have a confidence-percentage cutoff under which he doesn't launch fighters and over which he does. He can ratchet that up, and make false positives less likely, but that will mean letting more bombers through. Or he can ratchet it down, and make false negatives less likely, but he'll end up launching fighters at geese more often.
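The cutoff tradeoff is easy to see in a toy simulation (not from the original post; the overlapping score distributions for bombers and geese are invented purely for illustration):

```python
import random

random.seed(0)

# Hypothetical confidence scores: bombers tend to score higher than geese,
# but the two distributions overlap, so no cutoff separates them perfectly.
bombers = [random.gauss(0.60, 0.15) for _ in range(10_000)]
geese = [random.gauss(0.40, 0.15) for _ in range(10_000)]

def error_rates(cutoff):
    """Type I rate (scrambling at geese) and Type II rate (missed bombers)
    for a given confidence cutoff."""
    type_i = sum(g >= cutoff for g in geese) / len(geese)
    type_ii = sum(b < cutoff for b in bombers) / len(bombers)
    return type_i, type_ii

# Raising the cutoff trades Type I errors for Type II errors.
for cutoff in (0.3, 0.5, 0.7):
    t1, t2 = error_rates(cutoff)
    print(f"cutoff {cutoff:.1f}: Type I {t1:.2f}, Type II {t2:.2f}")
```

However you slide the cutoff, one error rate goes down only as the other goes up; the only way to improve both at once is a better detector, i.e. less overlap between the two distributions.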
Let me move this to another political issue that really divides the right and left in the US: voter verification. You have some detector that's looking for fraudulent voters (the bombers, cancer, or guilty parties in the language of the other examples). Now, I say this with no polling data, but I think most people would agree: the nightmare scenario for someone on the right is that a person ineligible to vote casts a ballot (Type II; cancer not diagnosed; bomber gets through; criminal goes free); the nightmare scenario for someone on the left is that a person eligible to vote is not allowed to cast a ballot (Type I; cancer falsely diagnosed; fighters scrambled for no reason; innocent person goes to jail). Forget actual numbers of how often both of those happen; think of your liberal and conservative friends and ask yourself which situation would just make them fundamentally more upset. So, currently, states with right-leaning governments are making it harder for ineligible voters to vote by (ultimately) ratcheting up the confidence percentage the poll worker needs before allowing them. But it is a mathematical necessity that doing so increases the likelihood that an eligible voter is denied a ballot.
Another interesting result from the detection and estimation class is that the real difficulty in establishing a good detector is deciding what "good" means. To do that, you have to be able to quantify the costs of Type I and Type II errors. I mentioned above that there's a qualitative difference to a lot of them, rather than a quantitative difference; it's not as if one costs X amount of fuel and the other costs Y amount of fuel; one costs fuel and the other drops a bomb on a city. To go back to the bombers, an example comes to mind of a way to look at assigning costs: would you change your required confidence percentage if you found out the bombers had nukes? You'd probably drop it way down and find a way to get more fuel. On the other hand, if they had very small bombs, you might judge it's a better idea to push it way up and just find a way to protect your buildings and people better. Or, you'd probably find a high false positive rate on a bubonic plague test more acceptable than on a test for the common cold.
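For a minimum-expected-cost detector this cost weighting has a standard closed form: scramble whenever the confidence p satisfies p × (cost of a miss) > (1 − p) × (cost of a false alarm), which puts the cutoff at C_fa / (C_fa + C_miss). A small sketch (the cost numbers here are invented for illustration):

```python
def decision_cutoff(cost_false_alarm, cost_miss):
    """Confidence cutoff that minimizes expected cost: scramble whenever
    p * cost_miss > (1 - p) * cost_false_alarm, i.e. whenever
    p > cost_false_alarm / (cost_false_alarm + cost_miss)."""
    return cost_false_alarm / (cost_false_alarm + cost_miss)

# Conventional bombs: a miss is ten times worse than a wasted scramble.
print(decision_cutoff(cost_false_alarm=1, cost_miss=10))    # about 0.09
# Nukes: a miss is catastrophic, so the cutoff drops toward zero.
print(decision_cutoff(cost_false_alarm=1, cost_miss=1000))  # about 0.001
```

The math doesn't tell you what the costs are; it only tells you what cutoff follows once you've decided. Assigning the costs is the moral and political part.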
So, how do you compare the costs of an innocent person going to jail versus a guilty person going free? Or of an eligible voter being denied a chance to vote versus an ineligible voter being allowed to vote? This is, ultimately, a moral and political question, and disagreement about it does seem to track a fundamental divide in our moral and political landscape.
Those of us on this board, as disparate as our views are, seem pretty much agreed on the Error Type preference. We agree that an innocent person going to jail is worse than a guilty person going free, that a legitimate voter denied the ballot is worse than an illegitimate voter granted one (though this particular case is so rare as to not really matter from a statistical standpoint), or in general that a benefit needlessly denied is worse than a benefit baselessly given. This is why we're DUers and not Freepers. But it also, I think, may be an occasionally enlightening way to look at the conceptual gulf that divides us from them.
EDIT: it's more complex than this, and Motown_Johnny called me out on it: you can always cast Type I as Type II by changing what you're looking for. You could say a criminal going free is a false positive on innocence rather than a false negative on guilt. Culturally, however, engineers tend to assume that anything they're trying to detect is what's bad.
DetlefK
(16,423 posts)

Recursion
(56,582 posts)

PeoViejo
(2,178 posts)
but, being a Techie, I find it a worthy read.
RVN VET
(492 posts)
It gave me pause to think, and provided me with a thankfully unemotional way of viewing the way the RW and LW differ in their view of problems.
There are, of course, other issues involved. Both LW and RW will naturally lean towards voices in the media and elsewhere that offer support to their basic predilections. However -- and I may be going out on a limb here, but I think it's a sturdy one -- the RW has more bullsh*t to feast on: FOX "News", little FOX "News" at CNN, mass media insistence on giving more than equal airtime to flat earthers (and birthers). As a result the RW becomes more and more convinced of things that are less and less real. A fly on the radar screen causes them to muster all aircraft.
And, of course, the RW political machine has learned that when things go bad because of things they've done, two words redirect the bad vibes: "Thanks, Obama."
Nay
(12,051 posts)
and illuminating engineering idea that applies to politics.
Skittles
(153,193 posts)

Fumesucker
(45,851 posts)
The left is a long way from completely rational but the right is a damn sight further from it at the moment and probably for the foreseeable future.
Just as one example, we hear very little from the right on prosecuting economic criminals who do billions if not trillions of dollars in damage to average people while they rant about criminals who stole food to actually eat because they were hungry.
The right in this country has become more and more about class, race, and privilege; rational analysis of threats has very little to do with the way they act anymore.
ananda
(28,876 posts)
I really want to know.
Fumesucker
(45,851 posts)
Obots vs Firebaggers, the battles here have been epic and often anything but completely rational on either side, just as an example.
Good grief, we even have a Conspiracy Theory dungeon on DU.
Man is not so much a rational animal as an animal that rationalizes.
RVN VET
(492 posts)
One of the few Latin quotes in my quote-bag!
seveneyes
(4,631 posts)
The notion that Communism would ever work as intended could be considered irrational. I don't see it suggested as a solution much lately, so it may have fallen off the edge by now.
Recursion
(56,582 posts)
We all have our weaknesses.
Recursion
(56,582 posts)
An irrational person has an irrational confidence function, but still either prefers judgment or more data.
The right avoids data (Connor gets to that at Patheos better than I can) and so is irrational from the perspective of our modern data-driven world, but I think the selector bias (or, to use the engineering word, "discriminator" bias, though obviously that term has political baggage) is still worth thinking about sometimes.
DetlefK
(16,423 posts)
I wonder if republicans go for more purity and democrats go for more efficiency...
Recursion
(56,582 posts)
But, yeah, I think we do sometimes value working systems too much, simply for their efficacy, while the right definitely values consistent systems too much simply for their consistency.
Fumesucker
(45,851 posts)
This piece in Good Reads rather supports my point, I think: the smarter you are, the easier it is to fool yourself. As Feynman once remarked, "The first principle is that you must not fool yourself, and you are the easiest person to fool."
http://www.democraticunderground.com/101675192
Recursion
(56,582 posts)

Motown_Johnny
(22,308 posts)
It has happened twice now, and when he goes into the hospital for more extensive screenings it comes back negative.
He told me Saturday that he is going to start ignoring his positives. I told him to not fall into the "boy who cried wolf" trap because someday there might really be a wolf.
My brother is to the right of center and I am pretty far left of center.
This is just one example but in this example you seem to be wrong. He fears the type 2 and I fear the type 1.
Now, in the health care debate. The left fears not doing something (type 2) and the right fears doing something (type 1).
I would say that in general you have it backward. Conservatives fear change and therefore fear taking action. Progressives embrace change and do not fear taking action.
The one exception is military intervention. Conservatives fear not taking action and Progressives fear taking action.
I don't think voter ID laws are a good example for your analogy because they are actually trying to keep things the way they were before the Civil Rights Act was passed. They have been doing this for decades. They see allowing minorities to vote as the change.
Basically, I am not buying your theory. Sorry.
Recursion
(56,582 posts)
You can always re-cast a Type I error as Type II, or vice versa (I could have said a criminal going free is a false positive on innocence, rather than a false negative on guilt). But from an engineering standpoint, we always assume anything that happens is bad, so either you detect the bad thing or you don't.
telclaven
(235 posts)
As a stats guy, I have to applaud your analysis. Mind if I lift this for further distribution?
Recursion
(56,582 posts)
It's from my blog; the original is here:
http://b.rontosaur.us/2013/10/06/type-i-and-type-ii-errors-the-left-and-the-right/
bemildred
(90,061 posts)
Often, one digit of precision is not to be had. And we have a strong bias towards Type I errors, better many false alarms than once getting eaten. And that is very handy from a marketing point of view. Fear can be sold many times.
But in fact, as you point out, a Type I bias is only optimal in an adversarial setting, where you are eater or eaten; it's no way to run a cooperative modern society, and you won't be able to compete. Washington DC right now is a perfect example of what that leads to.
zipplewrath
(16,646 posts)
A democrat lives in fear that someone, somewhere isn't getting something they need.
A republican lives in fear that someone, somewhere is getting something they don't deserve.
Recursion
(56,582 posts)

D23MIURG23
(2,850 posts)
For instance, with climate change and other environmental issues the left seems to be the group fearing the type 2 error, and the right seems to fear the type 1.
The right doesn't want to risk any inconvenience to industry unless it is absolutely necessary, whereas the left isn't worried about a few extra regulations to put off a crisis that isn't 100% certain. I guess the error preference is determined by who is most likely to be hurt by the error, and what the damage is most likely to be.
Jim Lane
(11,175 posts)
You write, "A detector more capable of avoiding Type I errors is by that very fact less capable of avoiding Type II errors." That's the case frequently (I'll even concede usually) but not always.
For example, if you take the criminal justice system as it existed a few decades ago, and change it by the introduction of DNA-matching technology, the result is an improved detector that's more capable of avoiding both types of errors. (DNA evidence has helped convict some people and helped acquit others, and my guess is that in the overwhelming majority of those cases, the DNA evidence helped lead to the right decision.)
Recursion
(56,582 posts)
Note, however, that I said an "optimal" detector. A suboptimal detector can be made more optimal, which may decrease errors of either or both types, but in a given system, even if it's not optimal, the tradeoff exists in the discriminator. (Best example I can think of: juries are a better system than trials by ordeal; they are closer to optimal. But within the context of whichever system we use, decreasing one type of error will increase the other.)
longship
(40,416 posts)
It really comes down to statistics (and now I'm going to get into some technicalities here, too).
Let's take the example of voter fraud. We already know that there is very little voter fraud. Statistics and checks and balances go back many decades; it's what election boards do after all. There are bipartisan observers throughout the process, every step of the way. I have many times served as a volunteer to do this. Many others from all parties have, too.
So given that the data indicates a very low occurrence of fraud, any imperfect system to detect fraud will necessarily have many more false positives than false negatives.
I will repeat that for emphasis. In any imperfect system to detect something (like fraud) when its occurrence is low, there will be many more false positives (detection of fraud when there is no fraud) than false negatives (actual fraud not detected).
This is basic statistics of the system. And it's the same for any other similar system (drug testing, and yes, cancer screening). In any heterogeneous population the results must be the same, just from the numbers alone. All imperfect tests (and by nature they are all imperfect) will make the same systematic errors in the same proportions.
Bayes laid this shit down in the 18th century. It was expanded by Gauss (for measurements) later. Prior plausibility is important and cannot be factored out. And there will be scatter in your measurements. If you ignore those, you don't know what you've got.
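The base-rate point above can be put in numbers with Bayes' rule (the prevalence and accuracy figures below are invented for illustration, not from the thread):

```python
def false_positive_share(prevalence, sensitivity, specificity):
    """Among all positive results, the fraction that are false positives
    (Bayes' rule applied to an imperfect test)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return false_pos / (true_pos + false_pos)

# Even a 99%-accurate fraud detector, applied where real fraud is one case
# in ten thousand, flags almost exclusively legitimate voters.
share = false_positive_share(prevalence=0.0001, sensitivity=0.99, specificity=0.99)
print(f"{share:.1%} of positives are false positives")
```

Because the thing being hunted is rare, the flood of false positives swamps the handful of true ones; no cutoff tuning fixes that, only a higher base rate or a far better test.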
Great post, my friend.
R&K