A favorite Dilbert cartoon from a few years back has one character at a restaurant smugly insisting to his dining partner that he would never be so stupid as to provide his credit card details online.  Meanwhile he is paying the bill by handing his credit card to a waiter who disappears with it, supposedly only processing the dinner payment.

The cartoon illustrates how difficult it is to be consistently rational, i.e. rational whenever we should be rational, and similarly rational in similar situations. Even highly rational people have blind spots: matters that even they would agree call out for rational assessment, yet on which they fail to exercise their rational faculties, and indeed don’t even realise that they are failing to do so.

“Highly rational people” includes faculty members on admission committees of prestigious medical schools.

A “Perspective” piece in a recent issue of the top-shelf medical journal The Lancet, by Donald Barr of Stanford, describes how admission committees usually place a heavy emphasis on strong performance in science subjects, supposedly because those who are good at science will become good doctors and vice versa. Barr decided to examine the evidence for this presumed correlation. What he found, roughly, was that the evidence pointed the other way: the better you perform in undergraduate sciences, the worse you are as a doctor. (Of course you should read Barr’s article for a more detailed and nuanced summary of the evidence.)

In other words, faculty members on admission committees of medical schools – the kind of people who would think of themselves as highly rational, who would readily stress the importance of taking proper account of the scientific evidence in medical practice – these faculty members were basing their admission decisions on a belief that was unfounded, erroneous, and harmful to their profession!

Barr’s scathing commentary is worth quoting at length:

If what we seek from our students is professional excellence, we must be careful to base the manner in which we select these students on scientific evidence, not on superstition. Beyond its religious connotation, The Oxford English Dictionary suggests that superstition represents an, “irrational or unfounded belief”.  The belief that one’s knowledge of science represents a continuous metric of the quality of one’s preparation for the study of medicine represents precisely such an “unfounded belief”. There seems to be no scientific evidence to support it. Great physicians base their professional practice on a threshold of scientific knowledge they have acquired throughout their career. Upon this foundation they build an artistic display of communication, compassion, empathy, and judgment. In selecting students for the study of medicine, we must be careful to avoid superstition, and to adhere to the evidence that equates as metrics of quality a preparation in fundamental scientific principles and the non-cognitive characteristics that are conducive to professional greatness.

Note that these medical faculty members’ admission decisions are in an obvious sense “evidence-based.” Each applicant would have provided a dossier of information (grade transcripts, letters of recommendation, etc.), and the learned professors would at least have been taking careful note of that evidence.

However, their method of making decisions was not adequately evidence-based. In adopting that method they had implicitly made judgements not about the students themselves but about the criteria for selecting students; and those judgements were not properly based on evidence.

It might be helpful to distinguish first-order from second-order evidence-based decisions. A first-order evidence-based decision is one which properly takes into account the evidence relevant to the decision at hand. This may include the particular facts and circumstances of the case, as well as more general scientific information. So, for example, in a clinical context, a doctor making a first-order evidence-based judgement as to treatment would consider the available information about the patient as well as scientific information about the effectiveness of the treatment being recommended.

Now, taking evidence into account properly implies the existence of some method for finding and evaluating evidence and incorporating it into the final judgement.

A decision is second-order evidence-based when the choice of method is itself properly evidence-based. Somebody making a decision which is second-order evidence-based is considering (or has duly considered) not only the evidence pertaining to the decision at hand, but also the evidence pertaining to the choice of method for making that type of decision.

[In theory, of course, there would be third-order evidence-based decisions (evidence-based decisions about how to decide what method to use in making a decision), and so on.]

Donald Barr can be seen as urging that the medical profession be (or be more) second-order evidence-based in their admission decisions.  His recommendation is not that they take scientific data into account in any particular decision.  It is, rather, that they allow scientific evidence to shape their general framework for making admission decisions.
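
To make the idea concrete, here is a minimal sketch, in Python, of what one small piece of second-order evidence-gathering might look like: checking whether a selection criterion actually predicts the outcome you care about. The numbers and names are invented purely for illustration (Barr’s own analysis was of course far more sophisticated), though the direction of the toy correlation echoes his finding.

```python
# A toy illustration of second-order evidence-gathering: does our
# selection criterion (undergraduate science GPA) actually predict
# the outcome we care about (later clinical performance)?
# All data here are invented for illustration.

from statistics import mean

science_gpa = [3.9, 3.7, 3.8, 3.2, 3.5, 3.0, 3.6, 3.4]      # criterion at admission
clinical_rating = [2.8, 3.1, 2.9, 4.2, 3.6, 4.4, 3.3, 3.9]  # outcome years later

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(science_gpa, clinical_rating)
print(f"correlation between science GPA and clinical rating: {r:+.2f}")

# A first-order process uses the GPA in each admission decision;
# a second-order process asks whether using GPA that way is justified.
```

The point of the sketch is simply that the criterion itself is a testable hypothesis, and a second-order evidence-based committee treats it as one.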

The distinction between first- and second-order evidence-based decisions is somewhat subtle. In my experience it can be difficult for people to understand the distinction and appreciate the importance of being second-order evidence-based.

One experience of this, as it happens, was also with medical types.

I was contacted by a doctor – call him Dr. Smith – who was on a state committee whose job it was to spend millions of taxpayer dollars on fancy new medical equipment. The committee received many applications and it had to decide which of those most deserved funding. As he described it, the committee was using a relatively lightweight, informal version of standard multi-criteria decision making (list your criteria, weight your criteria, rate each option on each criterion, etc.). Why were they using this method? Apparently because “it seemed like the right thing to do” and “that’s the way we do it”.
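
For readers who haven’t met it, here is a minimal sketch of weighted-sum multi-criteria scoring of the sort the committee was using. The criteria, weights, and applications are all hypothetical, invented only to show the mechanics.

```python
# A minimal sketch of weighted-sum multi-criteria scoring: list the
# criteria, weight them, rate each option on each criterion, and rank
# by weighted total. Criteria, weights, and ratings are hypothetical.

weights = {
    "clinical_benefit": 0.4,
    "cost_effectiveness": 0.3,
    "equity_of_access": 0.2,
    "urgency": 0.1,
}

# Each funding application is rated 1-5 on every criterion.
applications = {
    "MRI upgrade, Hospital A": {"clinical_benefit": 4, "cost_effectiveness": 3,
                                "equity_of_access": 2, "urgency": 5},
    "Dialysis unit, Hospital B": {"clinical_benefit": 5, "cost_effectiveness": 4,
                                  "equity_of_access": 4, "urgency": 3},
    "Scanner, Clinic C": {"clinical_benefit": 3, "cost_effectiveness": 5,
                          "equity_of_access": 3, "urgency": 2},
}

def total_score(ratings):
    """Weighted sum of an application's criterion ratings."""
    return sum(weights[c] * ratings[c] for c in weights)

for name, ratings in sorted(applications.items(),
                            key=lambda kv: total_score(kv[1]),
                            reverse=True):
    print(f"{total_score(ratings):.2f}  {name}")
```

Notice that every step embodies a methodological choice (which criteria, what weights, whether a weighted sum is even the right aggregation rule), and it is precisely those choices that a second-order evidence-based committee would test against evidence.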

Dr. Smith was concerned that the way in which the committee was making these decisions was insufficiently rigorous and transparent.  In particular he was worried that the process was not reliable enough, in the sense of treating similar cases the same way.  There was too much room for the slip and slop of unanchored subjective judgments.  This despite the fact that their decisions took into account a large amount of information and even scientific evidence – such as data about the benefits of certain types of treatments and technologies.

However, Dr. Smith was a lone voice. He was trying to persuade the chair of the committee, the other committee members, and the secretariat supporting the committee that they should spend at least some time thinking not just about who should get the money, but about how they decide who should get the money.

I was invited to speak to the chair and the secretariat. In a web-conference, I explained that there were many possible ways to make decisions of the kind they had to make. Indeed, there is an academic niche concerned with this very issue, i.e., how to make allocation decisions. These methods have been subjected to some degree of scientific study, and it is clear that some are better than others for particular types of decisions. I explained that if they were not using the best method, they would likely be mis-allocating resources. (Which means, in this kind of case, not just wasting money but not saving as many people from sickness and death as they might have.)

Then came the really tricky part. I tried to gently explain that evidence-based selection of a decision method requires expertise, and not the expertise that they have as doctors, professors of medicine, or even medical administrators. Rather, it is the expertise of a decision theorist. Simply put, being smart and having a fancy medical appointment doesn’t make you qualified to decide how to decide who should get funded.

As you can imagine, the conversation was polite but it went nowhere. I was trying to explain to senior medical professionals that their decisions could be more rigorously evidence-based. They prided themselves on making evidence-based decisions, and they were indeed making evidence-based decisions, but these were only first-order evidence-based. I was trying to convince them to be second-order evidence-based (though I didn’t use those words). Further, I was suggesting that their expertise wasn’t the sort of expertise that was really needed for their decisions to be properly second-order evidence-based.

I had the feeling that they could hear the words but didn’t really “get” what I was saying.  Last I heard, nothing had changed in the way they were making their decisions.