Taleb actually makes a great point here and published a pretty cool paper about it. The point is that if the probability of an event changes too much, you can arbitrage it. E.g. assume the published probabilities are payoff odds and you can sell your position before the event. He then derived some nice no-arbitrage conditions that "real" probabilities must meet and showed that Nate's predictive timeline allowed arbitrage. Unfortunately Nate never responded to this claim with anything other than "math is hard" and other nonsense.
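To make that concrete, here is a toy sketch of my own (not Taleb's actual no-arbitrage construction; all parameters are invented). A calibrated forecast is a martingale, so a buy-low/sell-high rule earns nothing in expectation, while an overreacting quote on what is really a fair-coin event can be traded for profit:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 100  # time steps until the event

def calibrated_path(sigma=0.02):
    """Forecast = P(final margin > 0 | margin so far); the margin is a random walk.
    Such a forecast is a martingale, so no trading rule profits in expectation."""
    m, probs = 0.0, []
    for t in range(N):
        probs.append(norm.cdf(m / (sigma * np.sqrt(N - t))))
        m += rng.normal(0, sigma)
    return np.array(probs), m > 0

def overreacting_path():
    """Quoted probability whipsaws around 0.5, but the event is just a fair coin."""
    probs = np.clip(0.5 + 0.35 * np.sin(np.linspace(0, 12, N))
                    + rng.normal(0, 0.05, N), 0.01, 0.99)
    return probs, rng.random() < 0.5

def trade(probs, outcome, lo=0.35, hi=0.65):
    """Buy a 'yes' contract when quoted below lo, sell it back when quoted above hi."""
    pnl, holding, cost = 0.0, False, 0.0
    for p in probs:
        if not holding and p < lo:
            holding, cost = True, p
        elif holding and p > hi:
            pnl += p - cost
            holding = False
    if holding:  # still holding on election day: contract settles at 1 or 0
        pnl += float(outcome) - cost
    return pnl

for name, gen in [("calibrated", calibrated_path), ("overreacting", overreacting_path)]:
    pnls = [trade(*gen()) for _ in range(5000)]
    print(f"{name:>12}: mean profit per run = {np.mean(pnls):+.4f}")
```

The calibrated forecast's mean profit hovers around zero; the overreacting one's is reliably positive, which is the sense in which "swinging too much" is exploitable.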
I don’t have the background knowledge to properly understand the math in that paper. But I do know that you can’t simply set bounds a priori on how much a probability can change after additional information has been gathered; no matter how pathological the swings, you can design a system, and a series of observations of that system, that would make all the forecasts (i.e. conditional probabilities) correct. Thus the paper must be making additional assumptions. As far as I can tell, those include at least that the electoral process can be modeled as Brownian motion, which is a martingale but not the only type of martingale; in particular, I’d expect random motion to be a good model of typical polling drift, but a poor model of sudden polling swings caused by news events (which in reality are a large change caused by a single random event, not the sum of a series of small changes caused by independent events that just happen to mostly point in the same direction). I am not sure whether the assumptions also include an estimated value of `s` (the volatility); the paper doesn’t seem to explain how it’s calculated, but maybe it can be derived from the raw polling results plus the assumption of Brownian motion? In any case, the paper says nothing explicit about what data was used to produce the “rigorous updating” graph. I’d love it if someone could explain this to me…
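Here is a small sketch of the drift-vs-jump distinction I'm gesturing at (my own, with made-up parameters; the conditional probability under the jump model is approximated by treating the remaining noise as Gaussian). Both forecasts are conditional probabilities of the same kind of event, but the jump version produces occasional large one-day moves:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 100            # days until the election
SIGMA = 0.01       # daily random drift of the polling margin (assumed)
JUMP_RATE = 0.05   # chance per day of a news shock (assumed)
JUMP_SIZE = 0.05   # size of such a shock (assumed)

def forecast_path(jumps):
    """Approximate P(final margin > 0 | margin today), treating the remaining
    drift-plus-jump noise as Gaussian."""
    m, probs = 0.0, []
    for t in range(N):
        daily_var = SIGMA**2 + (JUMP_RATE * JUMP_SIZE**2 if jumps else 0.0)
        probs.append(norm.cdf(m / np.sqrt((N - t) * daily_var)))
        m += rng.normal(0, SIGMA)
        if jumps and rng.random() < JUMP_RATE:
            m += rng.choice([-1.0, 1.0]) * JUMP_SIZE  # a Comey-letter-sized shock
    return np.array(probs)

for label, jumps in [("pure drift", False), ("drift + news jumps", True)]:
    moves = np.concatenate([np.abs(np.diff(forecast_path(jumps))) for _ in range(200)])
    print(f"{label:>18}: median daily move {np.median(moves):.3f}, "
          f"largest {moves.max():.3f}")
```

The typical daily move is small under both models, but the jump model occasionally lurches, which is what a news-driven swing looks like without the forecast being miscalibrated.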
Subjectively, it’s hard for me to believe that an unbiased forecast would truly be so utterly noncommittal until just before Election Day, or indeed that there’s enough data to answer that question, especially when the evidence seems to come from just one election result. But my subjective impressions could, of course, be utterly wrong. I’m very curious whether or not this is the case.
I believe he's making a slightly more subtle point. It's not just that the forecasts are swinging too much, it's that they're swinging too much too early.
Consider an option on a stock (which is the analogy Taleb is making here). If you buy a 1-year call option on AAPL and tomorrow they announce that they beat earnings by 10%, that's not a huge deal for you. If, on the other hand, you owned a 1-week call, that would be a big deal for you. That is, your prediction should be less sensitive to changes in the environment the further out it is.
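A rough numeric sketch of that intuition (parameters invented, and lognormal dynamics with zero drift assumed rather than a full option-pricing model): on a 10% earnings jump, the 1-week at-the-money call goes from roughly a coin flip to near-certain to finish in the money, while the 1-year call's probability moves far less:

```python
from math import log, sqrt
from scipy.stats import norm

def prob_itm(spot, strike, vol, t_years):
    """Probability the stock finishes above the strike, assuming lognormal
    dynamics with zero drift (a rough sketch)."""
    d2 = (log(spot / strike) - 0.5 * vol**2 * t_years) / (vol * sqrt(t_years))
    return norm.cdf(d2)

spot, strike, vol = 100.0, 100.0, 0.30
for t, label in [(1.0, "1-year call"), (1 / 52, "1-week call")]:
    before = prob_itm(spot, strike, vol, t)
    after = prob_itm(spot * 1.10, strike, vol, t)  # the stock jumps 10% on earnings
    print(f"{label}: P(in the money) {before:.2f} -> {after:.2f}")
```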
I believe he is somehow formalizing this statement, and then showing that Nate Silver's forecasts violate it, but I too don't fully understand his formalism.
I think that Taleb's problem is that he thinks all the uncertainty in Silver's model comes from the fact that people might change their minds about who to vote for.
This is why Taleb can't make a model that fits Silver's forecasts. Silver could only be confident early on if he knew that people weren't going to change their minds much. But if people don't change their minds much then Silver's forecast shouldn't fluctuate much as time passes. Alternatively, if the forecast fluctuates a lot, it must be because lots of people are changing their minds. But then Silver shouldn't have been so confident to begin with!
But in fact the uncertainty in Silver's model isn't (wholly) caused by the possibility that people change their minds. It's mostly caused by the possibility of polling error. As we approach the election people have less time to change their minds, but the possibility of polling error doesn't change. This is why Taleb can't create a model under which Silver's forecasts are rational; he's not taking polling error into account.
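Here's a toy version of what I mean (my numbers, not 538's model): if the forecast's uncertainty is remaining opinion drift plus a polling-error term that never shrinks, then the win probability for a fixed 3-point lead stays well short of certainty even the day before the election:

```python
import numpy as np
from scipy.stats import norm

DRIFT_SD_PER_DAY = 0.3  # points of genuine daily opinion change (assumed)
POLL_ERROR_SD = 2.5     # points of systematic polling error (assumed, never shrinks)

def win_prob(poll_margin, days_left):
    """P(true final margin > 0) given today's polling average: the remaining
    opinion drift shrinks as the election nears, the polling error does not."""
    drift_sd = DRIFT_SD_PER_DAY * np.sqrt(days_left)
    total_sd = np.hypot(drift_sd, POLL_ERROR_SD)
    return norm.cdf(poll_margin / total_sd)

for days in (100, 30, 7, 1):
    print(f"{days:3d} days out, +3 point lead -> win prob {win_prob(3.0, days):.2f}")
```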
No, it's mostly caused by people changing their minds on who to vote for (or whether to vote). That's why the estimates move around a lot when there is news affecting the election, like the given example of Comey re-opening the Clinton investigation.
Those things may be true...but how does that relate to the issue Taleb is highlighting? That the forecast is overly volatile with respect to arbitrage pricing theory.
That would only be true if the thing you are forecasting becomes more sensitive to changes in the environment as time passes. It is true in your option example but not entirely true for elections.
I would say most people make up their minds more and more as information is released over time. Towards the end you know the candidates very well and it's hard to change your mind. How many Trump supporters will change their minds even now?
Sam Wang of the Princeton Election Consortium did commit to a prediction, in August 2016, that the 2016 election would have some of the lowest polling variability on record. (I think he declared something like 2.6% uncertainty.)
He seems to miss the point of what 538 is doing. Their nowcasts are attempts to say what would happen if the election were held today, which throws away the time-dependent uncertainty.
You can read the paper here:
https://arxiv.org/pdf/1703.06351.pdf