This is something we’ve studied a lot in constructing the FiveThirtyEight model, and it’s something we’ll take another look at before 2016. It may be that pollster “herding” — the tendency of polls to mirror one another’s results rather than being independent — has become a more pronounced problem. Polling aggregators, including FiveThirtyEight, may be contributing to it. A fly-by-night pollster using a dubious methodology can look up the FiveThirtyEight or Upshot or HuffPost Pollster or Real Clear Politics polling consensus and tweak their assumptions so as to match it — but sometimes the polling consensus is wrong.
I find this an interesting point from Nate Silver over at FiveThirtyEight. I think I've seen something similar in the Oscar contest data I've analyzed. The pattern wasn't unequivocal, but:
The trend lines do seem to be getting closer over time. I suspect... we're seeing that carefully considered predictions are increasingly informed by the general online wisdom. The result is that Consensus in the contest starts to closely parallel the wisdom of the Internet, because that's the source so many people entering the contest use. And the people who do the best in the contest over time? They lean heavily on the same sources of information too. There's increasingly a sort of universal meta-consensus from which no one seriously trying to optimize their score can afford to stray too far.
There are fancy statistical terms for parts of this (herding, information cascades), but fundamentally what's happening is that information availability, aggregation, and (frankly) the demonstrated success of aggregation in many cases tend to drown out genuine individual insight.
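To make that mechanism concrete, here's a minimal simulation sketch. Everything in it is my own toy assumption (the blending rule, the parameter values, the function names), not anything drawn from the FiveThirtyEight model or the contest data. Each simulated pollster draws an independent sample, then shades its published number toward the running average of previously published polls. As the herding weight rises, the published polls cluster more tightly, yet the final polling average tends to land further from the truth, because herded polls carry less independent information.

```python
import math
import random
import statistics

def simulate(num_polls=20, sample_size=800, true_support=0.52,
             herding_weight=0.0, trials=2000, seed=1):
    """Toy herding model: each pollster draws an independent sample
    (normal approximation to the binomial), then blends its raw estimate
    toward the running average of previously published polls.
    Returns (mean absolute error of the final polling average,
    mean poll-to-poll spread of the published numbers)."""
    rng = random.Random(seed)
    # Sampling noise for one poll of sample_size respondents.
    sigma = math.sqrt(true_support * (1 - true_support) / sample_size)
    errors, spreads = [], []
    for _ in range(trials):
        published = []
        for _ in range(num_polls):
            raw = rng.gauss(true_support, sigma)  # independent estimate
            if published:
                consensus = statistics.mean(published)
                est = (1 - herding_weight) * raw + herding_weight * consensus
            else:
                est = raw  # the first poll has no consensus to herd toward
            published.append(est)
        errors.append(abs(statistics.mean(published) - true_support))
        spreads.append(statistics.stdev(published))
    return statistics.mean(errors), statistics.mean(spreads)

for w in (0.0, 0.5, 0.8):
    err, spread = simulate(herding_weight=w)
    print(f"herding weight {w:.1f}: poll-to-poll spread {spread:.4f}, "
          f"error of the polling average {err:.4f}")
```

The output illustrates Silver's point: a consensus built from herded inputs looks reassuringly consistent precisely when it's least trustworthy, because agreement among the polls no longer tells you much about accuracy.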