Eliezer Yudkowsky claims that Nate Silver erred when calculating that the probability of Trump getting the nomination was 2%.  Silver’s calculation was based on Trump’s needing to pass through six stages, with only a 50% chance of passing each stage.  Yudkowsky believes that Silver should have used the conditional probability of passing each stage given that Trump had passed the previous stages.  For the sixth stage, for example, the probability that he would pass may well be judged much higher than 50%, given that he had succeeded in the five previous stages.
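Silver’s figure is easy to check, and a toy calculation shows how much the conditional treatment matters.  A minimal sketch – the conditional probabilities below are purely illustrative, not Silver’s or Yudkowsky’s numbers:

```python
# Silver's calculation: six stages, each treated as an independent coin flip.
p_independent = 0.5 ** 6
print(f"{p_independent:.3f}")  # 0.016, i.e. roughly 2%

# Yudkowsky's objection, with made-up numbers: if passing earlier stages
# raises the conditional probability of passing later ones, the product
# can come out far higher.
conditional = [0.5, 0.55, 0.6, 0.7, 0.8, 0.9]  # hypothetical P(pass stage k | passed stages 1..k-1)
p_chain = 1.0
for p in conditional:
    p_chain *= p
print(f"{p_chain:.3f}")  # 0.083, several times Silver's figure
```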
Yudkowsky’s analysis seems relevant to explaining why people – allegedly – commit a basic error of probabilistic judgement, which is to fail to multiply the probabilities of a chain of independent events and hence to overestimate the probability that all the events occur.  A standard illustration of this is (as I recall) something like the 10-lock problem.  A safe has 10 locks.  The burglar has a 90% chance of picking each lock.  What is the probability he breaks the safe?  .9^10 ≈ .35, and apparently people tend to estimate a much higher figure.  This might be explained, in a handwavy way, by saying they anchor on .9 and fail to adjust sufficiently.
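For the record, the normative calculation:

```python
# Ten independent locks at 0.9 each: multiply, don't anchor on 0.9.
p_all = 0.9 ** 10
print(f"{p_all:.3f}")  # 0.349
```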
However, a more “ecological” approach might seek to understand people’s judgements in terms of how events in fact unfold in the “real” world, or the world they evolved in.  While it is possible to artificially define a situation in which the probability of cracking each lock is, by stipulation, .9, what would happen in the real world is that if you watched somebody trying to crack a safe, and they’d cracked 9 of 10 locks already, you’d think that the safebreaker is so good that they are almost certain to crack the last one.  In other words, you would – in a somewhat Bayesian manner – update your estimate of the safebreaker’s skill as each lock is cracked, and hence the probability of cracking the remaining locks.
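That updating story can be made concrete.  A minimal Bayesian sketch, with entirely made-up skill levels and prior – two hypotheses about the safebreaker, with the posterior updated after each cracked lock:

```python
# Hypothetical skill levels and prior (illustrative numbers only).
skills = {"expert": 0.95, "amateur": 0.50}   # P(crack a lock | skill)
prior = {"expert": 0.10, "amateur": 0.90}    # assumed prior over skill

def predictive(posterior):
    """P(the next lock is cracked) under the current beliefs about skill."""
    return sum(posterior[h] * skills[h] for h in skills)

posterior = dict(prior)
print(f"before any locks: {predictive(posterior):.3f}")   # 0.545

for lock in range(9):  # watch nine locks get cracked
    # Bayes: P(h | cracked) is proportional to P(cracked | h) * P(h)
    unnorm = {h: posterior[h] * skills[h] for h in skills}
    z = sum(unnorm.values())
    posterior = {h: unnorm[h] / z for h in unnorm}

print(f"after 9 cracked locks: {predictive(posterior):.3f}")  # 0.938
```

The chance assigned to the tenth lock being cracked climbs from well below .9 to well above it, simply because each success is evidence of skill.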
When a mathematically unsophisticated person is asked for their answer to the 10-lock problem, it is plausible (this is a testable conjecture) that they imagine the burglar starting with the first lock, probably cracking it, proceeding to the next lock, and so on.  (It seems unlikely they would mentally simulate the entire sequence.)  We know that decision making by mental simulation is a very common strategy (Klein).  This is not quite decision making, but it is similar; the recognition-primed decision (RPD) perspective suggests the decision maker mentally simulates one approach to see if it is likely to work, and similarly the naive subject may (start to) mentally simulate the sequence of lock cracking.
This suggests two things.  First, people’s higher-than-“normative” estimates might be explained by their (in some vague sense) conditionalising the probabilities: they intuitively judge that the chance of cracking the later locks in the sequence is greater than .9.  Second, depending on the environments they normally inhabit, this might be the right thing to do.  Put another way, the “fallacy” is to resist the idea of purely independent events and to be quasi-Bayesian.  Maybe they are being more rational, in some ecological sense, than the smarty-pants psychologists who try to trip them up.
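One way to make the ecological point concrete, with purely illustrative numbers: suppose the stipulated .9 is really a marginal over uncertain skill.  Then successes are correlated through skill, and the true probability of a full run is well above .9^10:

```python
# Illustrative mixture: half of safebreakers crack each lock with
# probability 0.99, half with 0.81; the per-lock marginal is still 0.9.
skilled, clumsy = 0.99, 0.81
marginal = 0.5 * skilled + 0.5 * clumsy
print(f"{marginal:.2f}")  # 0.90

# Naive answer: treat the locks as independent at the marginal rate.
print(f"{marginal ** 10:.3f}")  # 0.349

# "Ecological" answer: successes are correlated through skill.
joint = 0.5 * skilled ** 10 + 0.5 * clumsy ** 10
print(f"{joint:.3f}")  # 0.513
```

On this story, the classic experiment stipulates away exactly the correlation that makes intuitive updating sensible in the wild.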