Archive for the ‘Evidence’ Category

Missing Pieces – The Skill of Noticing Events that Didn’t Happen
Spotting the Gaps – What Does it Take to Notice the Missing Pieces?


This pair of short pieces was published a week apart by the distinguished decision theorist Gary Klein. Their very similar titles promise insight into how critical thinkers can get better at noticing absent evidence – things which are not present, or didn’t happen, but which might be just as “telling” for or against various hypotheses as their more salient “present” counterparts. The advice he provides boils down to two points. (1) Be experienced. Experience sets up (often unconscious) expectations, whose violations might capture our attention, or at least create an uneasiness which prompts us to wonder what we’re missing. (2) Have an active, curious mindset. This “goes behind what we can see and hear, and starts puzzling when an expected event fails to materialize.”


I have plenty of respect for Klein, but these are disappointing pieces. They mainly rehash anecdotes from his earlier work. He says very little about how experience or an active mindset actually works to help us notice what’s missing, or about how to achieve either. In fact, “an active, curious mindset” seems to be little more than a redescription of the ability to notice things – barely more satisfying than saying “pay attention!” or “look around for what’s missing!”. In studying these pieces, I engaged an active, curious mindset. I noticed what was missing: anything of great insight or use. Which, I know from experience, is unusual in Klein’s case.


Read Full Post »

Regarding climate change, there is “no website that has evidence-based information” that would allow a “common-sense debate”.

So says the outgoing Governor of the state of Victoria, David de Kretser.  Or at least, this is what he was reported as saying in today’s Age.

I can hardly believe he really said it.  There are many (how many? I don’t know – but heaps of) websites presenting evidence-based information.  Here are just a few which come to mind quickly:

  • Real Climate – “Climate science from climate scientists”
  • Climate Progress – the legendary blog relentlessly fighting the good fight;
  • Skeptical Science – a wealth of evidence-based information, including detailed responses to standard “denialist” arguments, at three levels of scientific detail, and available as an iPhone/iPad app;
  • Our very own CSIRO’s website section on climate change;
  • The IPCC.

Supposing de Kretser both said it and believed it, his strange assertion calls out for some kind of explanation.  Here are a couple which seem plausible to me.

1.  de Kretser doesn’t actually surf the web very much.  He doesn’t read online.  He is of the generation that hardly uses computers, let alone dwells in the digisphere.  de Kretser thinks there is nothing out there because he hasn’t ventured out there to look.

2. de Kretser is sort of aware that there is at least some good stuff out there.  But he’s working backwards from the fact, seemingly inexplicable to him, that there is so much ignorance, delusion, and apathy in the population.  He tends to believe that when people are exposed to good information, they change their mind accordingly.  Since vast numbers of Australians don’t know and don’t care about climate change, they can’t have been exposed to information.  So there must be a lack of good information.  Maybe we should have a good website!

But this is naive.  It is naive about individual psychology and how beliefs form and change.  And it is naive about the forces at work in society whose effect (only sometimes deliberate) is to distract, disinform, and confuse.  Possibly, intelligent and ethical scientists such as de Kretser himself are exposed to, interested in, and form their beliefs on the basis of, good information.  But such people are a tiny minority.

Sir, we don’t need more websites.  We have plenty already, and websites on their own are nearly useless in dealing with the kind of challenges we have – not just the primary challenge of dealing with climate change itself, but the tactical challenge of inducing appropriate change in people’s minds and behaviors.  Please devote your considerable capacities and influence to activities with real impact, not the shifting of pixels on the digital decks of a sinking civilisation.

Read Full Post »

A favorite Dilbert cartoon from a few years back has one character at a restaurant smugly insisting to his dining partner that he would never be so stupid as to provide his credit card details online.  Meanwhile he is paying the bill by handing his credit card to a waiter who disappears with it, supposedly only processing the dinner payment.

The cartoon illustrates how difficult it is to be consistently rational – rational whenever we should be, and equally rational in similar situations.  Even highly rational people have blind spots: matters that even they would agree call out for rational assessment, but where they fail to exercise their rational faculties – and indeed don’t even realise that they are failing to do so.

“Highly rational people” includes faculty members on admission committees of prestigious medical schools.

A “Perspective” piece in a recent issue of the top-shelf medical journal The Lancet, by Donald Barr of Stanford, describes how admission committees usually place a heavy emphasis on strong performance in science subjects, supposedly because those who are good at science will become good doctors and vice versa.   Barr decided to examine the evidence for this presumed correlation.   What he found, roughly, is that the evidence pointed the other way: the better you perform in undergraduate sciences, the worse you are as a doctor.  (Of course you should read Barr’s article for a more detailed and nuanced summary of the evidence.)

In other words, faculty members on admission committees of medical schools – the kind of people who would think of themselves as highly rational, who would readily stress the importance of taking proper account of the scientific evidence in medical practice – these faculty members were basing their admission decisions on a belief that was unfounded, erroneous, and harmful to their profession!

Barr’s scathing commentary is worth quoting at length:

If what we seek from our students is professional excellence, we must be careful to base the manner in which we select these students on scientific evidence, not on superstition. Beyond its religious connotation, The Oxford English Dictionary suggests that superstition represents an, “irrational or unfounded belief”.  The belief that one’s knowledge of science represents a continuous metric of the quality of one’s preparation for the study of medicine represents precisely such an “unfounded belief”. There seems to be no scientific evidence to support it. Great physicians base their professional practice on a threshold of scientific knowledge they have acquired throughout their career. Upon this foundation they build an artistic display of communication, compassion, empathy, and judgment. In selecting students for the study of medicine, we must be careful to avoid superstition, and to adhere to the evidence that equates as metrics of quality a preparation in fundamental scientific principles and the non-cognitive characteristics that are conducive to professional greatness.

Note that these medical faculty members’ admissions decisions are in an obvious sense “evidence-based.”  Each applicant would have provided a dossier of information (grade transcripts, letters of recommendation, etc.) and the learned professors would have been taking careful note of that evidence at least.

However, their method of making decisions was not adequately evidence-based.  In adopting their method they had implicitly made judgements not about students themselves but about the criteria for selecting students; and those judgements were not properly based on evidence.

It might be helpful to distinguish first-order from second-order evidence-based decisions.  A first-order evidence-based decision is one which properly takes into account the evidence relevant to the decision at hand.  This may include the particular facts and circumstances of the case, as well as more general scientific information.  For example, in a clinical context, a doctor making a first-order evidence-based judgement about treatment would consider the available information about the patient as well as scientific information about the effectiveness of the treatment being recommended.

Now, taking evidence into account properly implies the existence of some method for finding and evaluating evidence and incorporating it into the final judgement.

A decision is second-order evidence-based when the choice of method is itself properly evidence-based.  Somebody making a second-order evidence-based decision is considering (or has duly considered) not only the evidence pertaining to the decision at hand, but also the evidence pertaining to the choice of method for making that type of decision.

[In theory, of course, there would be third-order evidence-based decisions (evidence-based decisions about how to decide what method to use in making a decision), and so on.]

Donald Barr can be seen as urging that the medical profession be (or be more) second-order evidence-based in its admission decisions.  His recommendation is not that admission committees take scientific data into account in any particular decision.  It is, rather, that they allow scientific evidence to shape their general framework for making admission decisions.

The distinction between first- and second-order evidence-based decisions is somewhat subtle.  In my experience it can be difficult for people to understand the distinction and appreciate the importance of being second-order evidence-based.

One experience of this, as it happens, was also with medical types.

I was contacted by a doctor – call him Dr. Smith – who was on a state committee whose job it was to spend millions of taxpayer dollars on fancy new medical equipment.  The committee received many applications and it had to decide which of those most deserved funding.  As he described it, the committee was using a relatively lightweight, informal version of standard multi-criteria decision making (list your criteria, weight your criteria, rate each option on each criterion, etc.).  Why were they using this method?  Apparently because “it seemed like the right thing to do” and “that’s the way we do it”.
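For readers unfamiliar with it, the multi-criteria method Dr. Smith described – list criteria, weight them, rate each option on each criterion – amounts to a weighted-sum calculation. Here is a minimal sketch; the criteria names, weights, and applications are invented for illustration and are not from the committee’s actual process.

```python
def weighted_score(ratings, weights):
    """Combine per-criterion ratings into a single score for one option.

    ratings: dict mapping criterion name -> rating for this option
    weights: dict mapping criterion name -> importance weight (summing to 1)
    """
    return sum(weights[c] * ratings[c] for c in weights)

# Hypothetical criteria and weights, agreed in advance by the committee.
weights = {"clinical_benefit": 0.5, "cost_effectiveness": 0.3, "equity_of_access": 0.2}

# Hypothetical funding applications, each rated 0-10 on every criterion.
applications = {
    "MRI upgrade":      {"clinical_benefit": 8, "cost_effectiveness": 5, "equity_of_access": 6},
    "Telehealth units": {"clinical_benefit": 6, "cost_effectiveness": 9, "equity_of_access": 9},
}

# Rank applications from highest to lowest weighted score.
ranked = sorted(applications,
                key=lambda a: weighted_score(applications[a], weights),
                reverse=True)
print(ranked)  # ['Telehealth units', 'MRI upgrade']
```

Even this toy version makes the second-order question vivid: the ranking depends entirely on the choice of criteria, weights, and rating scales – and nothing in the method itself tells you whether those choices are any good.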

Dr. Smith was concerned that the way in which the committee was making these decisions was insufficiently rigorous and transparent.  In particular he was worried that the process was not reliable enough, in the sense of treating similar cases the same way.  There was too much room for the slip and slop of unanchored subjective judgments.  This despite the fact that their decisions took into account a large amount of information and even scientific evidence – such as data about the benefits of certain types of treatments and technologies.

However Dr. Smith was a lone voice.  He was trying to persuade the chair of the committee, and other committee members, and the secretariat supporting the committee, that they should spend at least some time thinking not just about who should get the money, but about how they decide who should get the money.

I was invited to speak to the chair and the secretariat.  In a web-conference, I explained that there were many possible ways to make decisions of the kind they had to make.  Indeed, there is an academic niche concerned with this very issue, i.e. how to make allocation decisions.  These methods have been subjected to some degree of scientific study, and it is clear that some are better than others for particular types of decisions.  I explained that if they were not using the best method, they would likely be misallocating resources.  (Which means, in this kind of case, not just wasting money but also not saving as many people from sickness and death as they might have.)

Then came the really tricky part.  I tried to gently explain that evidence-based selection of a decision method requires expertise – and not the expertise they have as doctors, professors of medicine, or even medical administrators.  Rather, it is the expertise of a decision theorist.  Simply put, just being smart and having a fancy medical appointment doesn’t make you qualified to decide how to decide who should get funded.

As you can imagine, the conversation was polite but it went nowhere.  I was trying to explain to senior medical professionals that their decisions could be more rigorously evidence-based.  They prided themselves on making evidence-based decisions, and they were indeed making evidence-based decisions, but these were only first-order evidence-based.  I was trying to convince them to be second-order evidence-based (though I didn’t use those words).  Further, I was suggesting that their expertise wasn’t the sort that was really needed for their decisions to be properly second-order evidence-based.

I had the feeling that they could hear the words but didn’t really “get” what I was saying.  Last I heard, nothing had changed in the way they were making their decisions.

Read Full Post »