2016 Postmortem
In reply to the discussion: Say it ain't so Nate....538 Poll
Loki Liesmith (last edited Sat Sep 17, 2016, 08:46 PM)
A model should not change every time it gets new information. Assuming an initially well-fit model, it should change only when it receives enough relevant information to throw its hypotheses into sufficient doubt that a new set of hypotheses is warranted.
The art of building a statistical model (and I have built many over the course of my career) is to determine what the threshold is for a change in hypotheses.
If a model designed to predict an event many months out from its inception is to be judged good, it must be judged not only on its accuracy but also on its consistency. A better model is one that correctly predicts the outcome and does so over the longest possible span. If I predict the election outcome exactly a day before the vote, that may be good. Predicting it within a small delta many months before is better.
I could create a model that samples from a white-noise distribution for electoral vote outcomes, and there is a decent probability that one of those samples would be very close to the actual outcome. However, the time average of its predictions would be very far from the actual outcome. Because my model got it right once on the interval, is it a good model?
Look at the time average of a model's predictions. If a model is close immediately before the election and its predictions have a reasonably close time average over the whole interval, that is a good model.
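The white-noise point can be sketched numerically. This is a toy illustration, not any real forecast: the `actual` outcome, the window length, and every number here are made up for the example.

```python
import random

random.seed(0)

actual = 300    # hypothetical electoral-vote outcome, made up for illustration
n_days = 200    # length of the hypothetical forecasting window

# "White noise" model: each day's prediction is an independent uniform
# draw over the possible electoral-vote range, with no memory at all.
preds = [random.uniform(0, 538) for _ in range(n_days)]

# With enough draws, at least one will usually land close to the
# actual outcome...
best_single_miss = min(abs(p - actual) for p in preds)

# ...but the time average of the predictions sits near the middle of
# the range (about 269), no matter where the actual outcome falls.
time_avg = sum(preds) / n_days
avg_miss = abs(time_avg - actual)

print(f"closest single-day miss: {best_single_miss:.1f} EV")
print(f"time-averaged prediction: {time_avg:.1f} EV (miss: {avg_miss:.1f})")
```

A single lucky hit says little; judged by its time average, the noise model is far off, which is the sense in which it fails the consistency test.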
I submit there are other election models out there that meet these criteria better than Silver's. He could greatly improve his models, and their stability, with more parsimony and fewer nuisance parameters; the extra parameters make the forecasts jump around too much.
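The parsimony point can be caricatured with a toy comparison (all numbers hypothetical, and the two "models" are deliberate extremes, not anyone's actual methodology): a forecast that effectively refits to every new poll jumps around far more day to day than one that pools its data.

```python
import random

random.seed(1)

true_level = 50.0   # hypothetical "true" support level, in percent
days = range(30)
polls = [true_level + random.gauss(0, 3) for _ in days]  # noisy daily polls

# Parsimonious model: running mean of all polls so far (one parameter).
simple = [sum(polls[:i + 1]) / (i + 1) for i in days]

# Over-reactive model: just the latest poll. Effectively one free
# parameter per day, so it can chase every wiggle in the noise.
jumpy = polls[:]

def volatility(seq):
    """Mean absolute day-to-day change in the forecast."""
    return sum(abs(b - a) for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

print(f"simple model volatility: {volatility(simple):.2f} pts/day")
print(f"jumpy model volatility:  {volatility(jumpy):.2f} pts/day")
```

The pooled model's day-to-day swings shrink as data accumulates, while the over-reactive one stays as noisy as the polls themselves; that stability gap is the sense in which nuisance parameters make a forecast "jump around."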
Also, I am quite aware of what I said in my previous posts. I would hazard I actually know more about what I said than you do.
Cheers,