I forwarded to Paul Monk a link to this video:

He replied, within minutes:

Truly awesome.

It prompts the thought that the biggest revolutions in worldview have been scientific and have entailed:

1. Moving from the Earth centred (Aristotelian/Biblical) cosmology (which had its counterparts in many tribal myths and the cosmogonies of many other civilizations; though the classical Atomists began to guess at the truth and this was picked up again by Giordano Bruno in the late 16th century, only to get him burned alive in Rome by the Inquisition) to first a heliocentric one, then a Milky Way one, then a Hubble 3D one, as it were, and finally to a multiverse one;

2. Discovering that we are evolved creatures and have a direct biological ancestry going back 3.8 billion years, but on a world that, in much less than that time into the future (regardless of what we do) will become uninhabitable, as the Sun swells to become a red giant and destroys the Goldilocks Zone which makes life on Earth possible;

3. Realizing that we live in and are imbricated in a world of microbes that used to dominate the planet, exist in a highly complex symbiosis with larger life forms, including predation upon them, and have played a substantial role in the mass extinctions;

4. Slowly getting to understanding human history from a global and cosmopolitan perspective instead of from narrowly local ones; and

5. Developing the elements of a universal cognitive humanism with the exploration of languages and linguistics, comparative mythology (Levi-Strauss and structuralism) and anthropology (including Durkheim’s Elementary Forms of the Religious Life, about a century ago).

My own worldview, if you like, is that all these things transcend (trump) the epistemological claims of the old religions and mythologies, as well as those even of 19th century political ideologies (to say nothing of crude 20th century ones such as Nazism and Marxism-Leninism). BUT the vast majority of human beings on the planet know almost nothing of all this and certainly have not been able to weave it together into a coherent new, shared, universal worldview for the 21st century.

Just a few thoughts on the run, or rather while viewing Andromeda.

In our consulting work we have periodically been asked to review how judgments or decisions of a particular kind are made within an organisation, and to recommend improvements.  This has taken us to some interesting places, such as the rapid lead assessment center of a national intelligence agency, and recently, meetings of coaches of an elite professional sports team.

On other occasions, we have been asked to assist a group to design and build, more or less from scratch, a process for making a particular decision or set of decisions (e.g., decisions as to what a group should consider itself to collectively believe).

Both types of activity involve thinking hard about what the current/default process is or would be, and what kind of process might work more effectively in a given real-world context, in the light of what academics in fields such as cognitive science and organisational theory have learned over the years.

This sounds a bit like engineering.  My favorite definition of an engineer is: somebody who can’t help thinking that there must be a better way to do this.  A more comprehensive and workmanlike definition is given by Wikipedia:

Engineering is the application of scientific, economic, social, and practical knowledge in order to invent, design, build, maintain, research, and improve structures, machines, devices, systems, materials and processes.

The activities mentioned above seem to fit this very broad concept: we were engaged to help improve or develop systems – in our case, systems for making decisions.

It is therefore tempting to describe some of what we do as decision engineering.  However, this term has been in circulation for some decades now, as shown in this Google n-gram:

[Google Ngram chart: frequency of “decision engineering” over time]

and its current meaning or meanings might not be such a good fit with our activities.  So, I set about exploring what the term means “out there”.

As usual in such cases, there doesn’t appear to be any one official, authoritative definition.  Threads appearing in various characterizations include:

  • Bringing standard engineering principles and techniques to bear on making decisions;
  • Using more structured decision methods, including the application of decision analysis techniques;
  • Basing decisions on “big data” and “data science,” such as predictive analytics.

While each such thread clearly highlights something important, my view is that individually they are only part of the story, and collectively are a bit of a dog’s breakfast.  What we need, I think, is a more succinct, more abstract, and more unifying definition.  Here’s an attempt, based on Wikipedia’s definition of engineering:

Decision engineering is applying relevant knowledge to design, build, maintain, and improve systems for making decisions.

Relevant knowledge can include knowledge of at least three kinds:

  • Theoretical knowledge from any relevant field of inquiry;
  • Practical knowledge (know-how, or tacit knowledge) of the decision engineer;
  • “Local” knowledge of the particular context and challenges of decision making, contributed by people already in or familiar with the context, such as the decision makers themselves.

System is of course a very broad term, and for current purposes a system for making decisions, or decision system, is any complex part of the world causally responsible for decisions of a certain category.  Such systems may or may not include humans.  For example, decisions in a Google driverless car would be made by a complex combination of sensors, on-board computing processors, and perhaps elements outside the car such as remote servers.

However, the decision processes we have worked on, which might loosely be called organisational decision processes, always involve human judgement at crucial points.  The systems responsible for such decisions include:

  • People playing various roles
  • “Norms,” including procedures, guidelines, methods, and standards
  • Supporting technologies, ranging from pen and paper through to sophisticated computers
  • Various aspects of the environment or context of decision making.

For example, a complex organisational decision system produces the monthly interest rate decisions of the Reserve Bank of Australia, as hinted at in this paragraph from their website:

The formulation of monetary policy is the primary responsibility of the Reserve Bank Board. The Board usually meets eleven times each year, on the first Tuesday of the month except in January. Hence, the dates of meetings are well known in advance. For each meeting, the Bank’s staff prepare a detailed account of developments in the Australian and international economies, and in domestic and international financial markets. The papers contain a recommendation for the policy decision. Senior staff attend the meeting and give presentations. Monetary policy decisions by the Reserve Bank Board are communicated publicly shortly after the conclusion of the meeting.

and described in much more detail in this (surprisingly interesting) 2001 speech by the man who is now Governor of the Reserve Bank.

In most cases, decision engineering means taking an existing system and considering how to improve it.  A system can be better in various ways, including:

  • First and foremost, a higher decision “hit rate,” i.e. the proportion of decisions which are correct in the sense of choosing an optimal or at least satisfactory path of action;
  • Greater efficiency, in the sense of using fewer resources or producing decisions more quickly;
  • Greater transparency or defensibility.

Now, in order to improve a particular decision system, a decision engineer might use approaches such as:

  • Bringing standard engineering principles and techniques to bear on making decisions
  • Using more structured decision methods, including the application of decision analysis techniques
  • Basing decisions on “big data” and “data science,” such as predictive analytics

(i.e., the “threads” listed above).  However the usefulness of these approaches will depend very much on the nature of the decision challenges being addressed.  For example, if you want to improve how elite football coaches make decisions in the coaching box on game day, you almost certainly will not introduce highly structured decision methods such as decision trees.

In short, I like this more general definition of decision engineering (in four words or fewer, building better decision systems) because it seems to get at the essence of what decision engineers do, allowing, but not requiring, the use of highly technical, quantitative approaches.  And it accommodates my instinct that much of what we do in our consulting work should indeed count as a kind of engineering.

Whether we would be wise to publicly describe ourselves as decision engineers is however quite another question – one for marketers, not engineers.

About a month ago The Age published an opinion piece I wrote under the title “Do you hold a Bayesian or Boolean worldview?“.  I had submitted it under the title “Madmen in Authority,” and it opened by discussing two men in authority who are/were each mad in their own way – Maurice Newman, influential Australian businessman and climate denier, and Cuban dictator Fidel Castro.  Both men had professed to be totally certain about issues on which any reasonable person ought to have had serious doubts given the very substantial counter-evidence.

Their dogmatic attitudes seemed to exemplify a kind of crude epistemological viewpoint I call “Booleanism,” in contrast with a more sophisticated “Bayesianism”. Here is the philosophical core of the short opinion piece:

On economic matters, Keynes said: “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Similarly, on matters of truth and evidence, we are usually unwittingly beholden to our background epistemology (theory of knowledge), partially shaped by unknown theorists from centuries past.

One such  theory of knowledge we can call Boolean, after the 19th century English logician George Boole.  He was responsible for what is now known as Boolean algebra, the binary logic which underpins the computing revolution.

In the Boolean worldview, the world is organised into basic situations such as Sydney being north of Melbourne. Such situations are facts. Truth is correspondence to facts. That is, if a belief matches a fact, it is objectively true; if not, it is objectively false. If you and I disagree, one of us must be right, the other wrong; and if I know I’m right, then I know you’re wrong. Totally wrong.

This worldview underpins Castro’s extreme confidence.  Either JFK was killed by an anti-Castro/CIA conspiracy or he wasn’t; and if he was, then Castro is 100 per cent right. Who needs doubt?

An alternative  theory of knowledge has roots in the work of another important English figure, the Reverend Thomas Bayes. He is famous for Bayes’ Theorem, a basic law of probability governing how to modify one’s beliefs when new evidence arrives.

In the Bayesian worldview, beliefs are not simply true or false, but more or less probable. That is, we can be more or less confident that they are true, given how they relate to our other beliefs and how confident we are in them. If you and I disagree about the cause of climate change, it is not a matter of me being wholly right and you being wholly wrong, but about the differing levels of confidence we have in a range of hypotheses.

Scientists are generally Bayesians, if not self-consciously, at least in their pronouncements. For example, the IPCC refrains from claiming certainty that climate change is human-caused; it says instead that it has 95 per cent confidence that human activities are a major cause.
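Bayes’ Theorem itself is compact: the probability of a hypothesis H given evidence E is P(H|E) = P(E|H) × P(H) / P(E). As a minimal sketch of what this “modifying one’s beliefs” looks like in practice (the numbers below are purely illustrative, not drawn from the opinion piece):

```python
# Minimal sketch of Bayesian updating. Illustrative numbers only.
def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis H after observing evidence E."""
    p_evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / p_evidence

# Start undecided (50%), then observe evidence that is three times more likely
# if the hypothesis is true than if it is false.
confidence = update(0.5, likelihood_if_true=0.6, likelihood_if_false=0.2)
print(round(confidence, 2))  # 0.75 -- more confident, but well short of certainty
```

The point of the Bayesian worldview is visible in the output: new evidence raises confidence, but it never delivers the Boolean “100 per cent right.”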

On Thursday 9th October I’m doing a presentation at a conference of The Tax Institute, the Australian professional association for tax specialists, introducing decision analysis techniques.  The presentation will illustrate (with live demonstration) the following applications:

  • Using quantitative risk analysis (Monte Carlo simulation) to help a client gain better insight into the probable or possible outcomes of a certain tax strategy;
  • Using decision trees to help a client decide whether to pursue a dispute with the Tax Office through the courts (see the sketches just after this list).
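As a rough indication of what such demonstrations involve, here are two minimal sketches in Python. The strategies, probabilities and dollar figures are hypothetical illustrations of the techniques, not taken from the conference paper or the presentation.

```python
import random

# Sketch 1: Monte Carlo simulation of a hypothetical tax strategy whose benefit
# is uncertain and which carries some chance of an adverse ruling.
def simulate_once():
    benefit = random.triangular(100_000, 400_000, 250_000)   # uncertain saving
    adverse_ruling = random.random() < 0.2                    # assumed 20% chance
    penalty = random.triangular(150_000, 500_000, 300_000) if adverse_ruling else 0.0
    return benefit - penalty

outcomes = sorted(simulate_once() for _ in range(10_000))
print("P(net outcome < 0):", sum(o < 0 for o in outcomes) / len(outcomes))
print("Median net outcome:", round(outcomes[len(outcomes) // 2]))
```

```python
# Sketch 2: rolling back a two-branch decision tree for a hypothetical dispute:
# settle now, or litigate with an assumed 60% chance of winning.
settle = -80_000                                   # negotiated settlement cost
litigate = 0.6 * 0 + 0.4 * (-200_000) - 50_000     # expected ruling plus legal costs
choice = max([("settle", settle), ("litigate", litigate)], key=lambda x: x[1])
print(choice)  # pick the branch with the higher expected value
```

In both cases the value for the client lies not in the particular numbers but in the discipline of making the uncertainties and their consequences explicit.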

The conference paper is available here.

An excerpt:

Decision analysis techniques are well-developed and used, more or less widely, in various other professions such as engineering and finance. However, they are rarely used by tax specialists, or by lawyers and accountants more broadly.

Why? One perspective is that decision analysis is fundamentally ill-suited to the kinds of reasoning and decision making involved in tax matters, which are thought to involve unquantifiable issues and nuances requiring intuitive nous of the kind that only highly trained and experienced lawyers or accountants can provide.

An alternative perspective is that tax matters would almost always benefit from decision analysis, and that tax specialists fail to use it only because they are trapped behind boundaries imposed by their professional traditions, their training, and their intellectual inertia. A strong version of this view is that tax specialists are derelict in failing to provide their clients with an easily-obtainable level of clarity and rigour.

In the spirit of John Stuart Mill, this paper takes a middle position. It suggests that decision analysis is potentially useful for certain types of problems regularly handled by tax specialists, while not being appropriate for many others. Decision analysis may represent an important opportunity for tax specialists to provide greater value to sophisticated clients.

1.2.1 Three Thinking Modes

At a high level, the relation of decision analysis to the kinds of intellectual labour generally undertaken by tax specialists is summarized in this diagram.

[Diagram: the three thinking modes (qualitative, quantitative deterministic, probabilistic)]

To indulge in some useful caricatures, qualitative thinking is the domain of the lawyer. It uses no numbers at all, or at most simple arithmetic. The central concept is the argument. Making the most important decisions is always a matter of “weighing up” arguments expressed in legally-inflected natural language.

Quantitative deterministic thinking is the speciality of the accountant. It is epitomised in the structures and calculations in an ordinary spreadsheet, in which specified inputs are “crunched” into equally specific outputs. The central concept is calculation; uncertainties are replaced by “assumptions”. Decisions generally boil down to comparing the magnitudes of numerical outputs, in the penumbral light cast by the background knowledge, intuitions and biases of the decision maker.

The third mode of thinking, probabilistic, is of course the decision analyst’s territory. The central concept is uncertainty, and the essential gambit is framing and manipulating probabilistic representations of uncertainty.

In this context, the “master” tax specialist has facility, or even advanced expertise, in all three modes of thinking.

There’s a familiar idea from the world of sport – that winning requires an elite team and not just a team of elite players.

Does something similar apply in the world of decision making?

In many situations, critical decisions are made by small groups.  The members of these groups are often “elite” in their own right.  For example, in Australia monthly interest rate decisions are made by the board of the Reserve Bank of Australia.  This is clearly a “team” of elite decision makers.

However it is not clear that they are an elite team of decision makers.   For current purposes, I define an elite decision team as a small decision group conforming to all or at least most of the following principles:

  1. The team operates according to rigorously thought-through decision making practices. Wherever possible these practices should be strongly evidence-based.
  2. The team has been trained to operate as a team using these practices. Members have well-defined and well-understood roles.
  3. Members have been rigorously trained as decision makers (and not just as, say, economists).
  4. The team, and members individually, are rigorously evaluated for their decision making performance.
  5. There is a program of continuous improvement.

Note also that the team should be a decision making team, i.e. one that makes decisions (commitments to courses of action) rather than judgements of some other kind such as predictions.

There are many types of teams which do operate according to analogs of these principles – for example elite sporting teams, as mentioned, and small military teams such as bomb disposal squads.  These teams’ operations involve decision making, but they are not primarily decision making teams.

I doubt the Board of the RBA is an elite decision team in this sense, but would be relieved to find out I was wrong.

More generally, I am currently looking for good examples of elite decision teams.  Any suggestions are most welcome.

Alternatively, if you think this idea of an elite decision team is somehow misconceived, that would be interesting too.

Well-known anti-theist Sam Harris has posted an interesting challenge on his blog.  He writes:

So I would like to issue a public challenge. Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.) The best response will be published on this website, and its author will receive $2,000. If any essay actually persuades me, however, its author will receive $20,000,* and I will publicly recant my view. 

In the previous post on this blog, Seven Habits of Highly Critical Thinkers, habit #3 was Chase Challenges.  If nothing else, Harris’ post is a remarkable illustration of this habit.

The quality of his case is of course quite another matter.

I missed the deadline for submission, and I haven’t read the book, and don’t intend to, though it seems interesting enough. So I will just make a quick observation about the quality of Harris’ argument as formulated.

In a nutshell, a simple application of argument mapping techniques quickly and easily shows that Harris’ argument, as stated by Harris himself on the challenge blog page, is a gross non-sequitur, requiring, at a minimum, multiple additional premises to bridge the gap between his premises and his conclusions.  In that sense, his argument as stated is easily shown to be seriously flawed.

Here is how Harris presents his argument:

1. You have said that these essays must attack the “central argument” of your book. What do you consider that to be?
Here it is: Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

This formulation is short and clear enough that creating a first-pass argument map in Rationale is scarcely more than drag and drop:

[Argument map: a first-pass Rationale map of Harris’ central argument]

Now, as explained in the second of the argument mapping tutorials, there are some basic, semi-formal constraints on the adequacy of an argument as presented in an argument map.

First, the “Rabbit Rule” decrees that any significant word or phrase appearing in the contention of an argument must also appear in at least one of the premises of that argument.  Any significant word or phrase appearing in the contention but not appearing in one of the premises has suddenly appeared out of thin air, like the proverbial magician’s rabbit, and so is informally called a rabbit.  Any argument with rabbits is said to commit rabbit violations.

Second, the Rabbit Rule’s sister, the “Holding Hands Rule,” decrees that any significant word or phrase appearing in one of the premises must appear either in the contention, or in another premise.

These rules are aimed at ensuring that the premises and contention of an argument are tightly connected with each other.  The Rabbit Rule tries to ensure that every aspect of what is claimed in the contention is “covered” in the premises.  If the Rabbit Rule is not satisfied, the contention is saying something which hasn’t even been discussed in the premises as stated.  (Not to go into it here, but this is quite different from the sense in which, in an inductive argument, the contention “goes beyond” the premises.) The Holding Hands Rule tries to ensure that any concept appearing in the premises is doing relevant and useful work.
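The Rabbit Rule, at least in a crude word-matching form, is mechanical enough to sketch in code. Here is a toy checker of my own (it is not part of Rationale or the tutorials, and real argument mapping compares phrases and concepts, not bare words):

```python
# Toy Rabbit Rule check: flag significant words in the contention that appear
# in none of the premises ("rabbits" pulled out of thin air).
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "that", "is",
             "are", "be", "can", "must", "have", "will", "this", "such",
             "which", "their", "by", "with", "within", "some"}

def significant_words(text):
    return {w.strip(".,;:()\"'").lower() for w in text.split()} - STOPWORDS - {""}

def rabbits(contention, premises):
    premise_words = set().union(*(significant_words(p) for p in premises))
    return significant_words(contention) - premise_words

print(rabbits(
    "Questions of morality and values must have right and wrong answers "
    "that fall within the purview of science.",
    ["Morality and values depend on the existence of conscious minds, which can "
     "experience various forms of well-being and suffering in this universe.",
     "Conscious minds and their states are natural phenomena, fully constrained "
     "by the laws of the universe."]))
# e.g. {'questions', 'right', 'wrong', 'answers', 'fall', 'purview', 'science'}
```

Even this crude check surfaces “purview” and “science” as rabbits, which is exactly the gap discussed below.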

Consider then the basic argument consisting of Contention 1 and the premises beneath it.   It is obvious on casual inspection that much – indeed most – of what appears in Contention 1 does not appear in the premises.  Consider for example the word “purview”, or the phrase “falls within the purview of science”.  These do not appear in the premises as stated. What does appear in Premise 2 is “natural phenomena, fully constrained by the laws of the universe”.  But as would be obvious to any philosopher, there’s a big conceptual difference between these.

What Harris’ argument needs, at a very minimum, is another premise.  My guess is that it is something like “Anything fully constrained by the laws of the universe falls within the purview of science.”   But two points.  First, this suggested premise obviously needs (a) explication, and (b) substantiation.  In other words, Harris would need to argue for it, not assume it. Second, it may not be Harris’ preferred way of filling the gaps (one of them, at least) between his premises and his conclusion.  Maybe he’d come up with a different formulation of the bridging premise.  Maybe he addresses this in his book.

It would be tedious to list and discuss the numerous Rabbit and Holding Hands violations present in the two basic arguments making up Harris’ two-step “proof”.   Suffice to say that if both Rabbit Rule and Holding Hands Rule violations are called “rabbits” (we also use the term “danglers”), then his argument looks a lot like the famous photo of a rabbit plague in the Australian outback:

[Photo: a rabbit plague in the Australian outback]

Broadly speaking, fixing these problems would require quite a bit of work:

  • refining the claims he has provided
  • adding suitable additional premises
  • perhaps breaking the overall argument into more steps.

Pointing this out doesn’t prove that his main contentions are false.  (For what little it is worth, I am quite attracted to them.)  Nor does it establish that there is not a solid argument somewhere in the vicinity of what Harris gave us. It doesn’t show that Harris’ case (whatever it is) for a scientific understanding of morality is mistaken.  What it does show is that his own “flagship” succinct presentation of his argument (a) is sloppily formulated, and (b) as stated, clearly doesn’t establish its contentions.   In short, as stated, it fails.  Argument mapping reveals this very quickly.

Perhaps this is why, in part, there is so much argy bargy about Harris’ argument.

Final comment: normally I would not be so picky about how somebody formulated what may be an important argument.  However in this case the author was pleading for criticism.

Some people excel at critical thinking; others, not so much. Scientist Carl Sagan and investor Charlie Munger are oft-mentioned exemplars; my friend and colleague Paul Monk is less famous but also impressively sharp. On the other side we have… well, Homer Simpson can stand in for all those it would be rude to name.

But what makes a thinker more highly critical than others? And how can any person lift their game? This can be explored through the notion of habits. Highly critical thinkers have developed many habits which help them think more effectively. With sufficient commitment and patience, and perhaps a little coaching, such habits can be acquired by the rest of us.

This post describes seven major habits of highly critical thinkers. The list is obviously inspired by the hugely successful book about highly effective people. Whatever one might think of that book, if a similar exercise for critical thinking could have even a tiny fraction of its impact, it would be well worth undertaking.

Everybody is familiar with the term “critical thinking,” and has a reasonable working sense of what it is, but there is much disagreement about its proper definition. There’s no need to enter that quagmire here. Suffice to say that critical thinking, for current purposes, is truth-conducive thinking, i.e., thinking that leads to correct or accurate judgements. It is, in a phrase I like to use, the art of being right – or at least, of being more right more often.

But what kind of thinking conduces to truth? What is this subtle art? Back in the early seventeenth century, the philosopher Francis Bacon characterised it this way:

For myself, I found that I was fitted for nothing so well as for the study of Truth; as having a mind nimble and versatile enough to catch the resemblances of things … and at the same time steady enough to fix and distinguish their subtler differences; as being gifted by nature with desire to seek, patience to doubt, fondness to meditate, slowness to assert, readiness to consider, carefulness to dispose and set in order; and as being a man that neither affects what is new nor admires what is old, and that hates every kind of imposture.

Four hundred years later, political scientist Philip Tetlock conducted extensive and rigorous studies of hundreds of experts in the political arena, focusing on their ability to forecast. He found that the experts fell into two main groups:

One group of experts tended to use one analytical tool in many different domains; they preferred keeping their analysis simple and elegant by minimizing “distractions.” These experts zeroed in on only essential information, and they were unusually confident—they were far more likely to say something is “certain” or “impossible.” In explaining their forecasts, they often built up a lot of intellectual momentum in favor of their preferred conclusions. For instance, they were more likely to say “moreover” than “however.”

The other lot used a wide assortment of analytical tools, sought out information from diverse sources, were comfortable with complexity and uncertainty, and were much less sure of themselves—they tended to talk in terms of possibilities and probabilities and were often happy to say “maybe.” In explaining their forecasts, they frequently shifted intellectual gears, sprinkling their speech with transition markers such as “although,” “but,” and “however.”

The second group, the “foxes,” were better forecasters than the first, the “hedgehogs.” Foxy thinking, it seems, is more truth-conducive than hedgehoggery.

Two points jump out from these quotes. First, the two accounts have much in common, underneath the differences in style. The essence of critical thinking is largely stable across the centuries.

Second, they are both describing what good thinkers tend to do. Theorists of critical thinking have various ways of thinking about these tendencies; some talk of dispositions, others of virtues. Here I take what may be a novel approach and consider them as acquirable habits.

A habit is just a propensity to take actions of a certain kind in a relatively automatic or reflexive manner. And as we all know, and as elaborated in the recent book by Charles Duhigg, The Power of Habit, good habits can be cultivated, and bad habits overcome. So the goal here is to list:

  • propensities to do things of certain kinds more or less automatically under appropriate circumstances; which propensities are
  • possessed by highly critical thinkers much more often than by ordinary folk, and which
  • help them to make more correct or accurate judgements, and
  • could be picked up, or further developed, by any ordinary person with a reasonable amount of effort; with the result that
  • they would themselves become more critical.

The habits described below are the kinds of things highly critical thinkers really do do. They are not merely prescriptions or guidelines which would help anyone to be more critical if anyone were disciplined or virtuous enough to follow them.

To illustrate: Blogger Shane Parrish reports that a hedge fund manager and author, Michael Mauboussin, asked the Nobel-winning psychologist Daniel Kahneman what a person should do to improve their thinking. “Kahneman replied, almost without hesitation, that you should go down to the local drugstore and buy a very cheap notebook and start keeping track of your decisions.”

Now, it is plausible that keeping track of your decisions in a notebook would improve your thinking. However, it is not a habit of highly critical thinkers, at least in my experience. I don’t recall ever observing a highly critical thinker doing it, or hearing one say they do it. I don’t even do it myself, even after hearing the great Laureate’s advice (and apparently Mauboussin doesn’t either).

And so to the habits themselves.

1. Judge judiciously

One of the most salient thinking traps is, in the common phrase, jumping to conclusions. Highly critical thinkers have cultivated four main habits which help them avoid this.

First, they tend to delay forming a judgement until the issue, and the considerations relevant to it, have been adequately explored, and also until any hot emotions have settled (Bacon’s “slowness to assert”).

Second, they tend to abstain altogether from making any judgement, where there are insufficient grounds to decide one way or another. They feel comfortable saying, or thinking, “I don’t know.”

Third, when they do make a judgement, they will treat it as a matter of degree, or assign a level of confidence to it, avoiding treating any non-trivial issue as totally certain.

And fourth, they treat their judgements as provisional, i.e., made on the basis of the evidence and arguments available at the time, and open to revision if and when new considerations arise.

2. Question the questionable

Much more often than ordinary folk, highly critical thinkers question or challenge what is generally accepted or assumed. Sometimes they question the “known knowns” – the claims or positions which constitute widely-appreciated truths. Other times, they target the implicit, the invisible, the unwittingly assumed.

Highly critical thinkers do not of course question everything. They are not “radical skeptics” doubting all propositions (as if this was even possible anywhere other than in philosophical speculation). Rather, they tend to be selective or strategic in their questioning, targeting claims or positions that are worth challenging, whether in some practical or intellectual sense. They are skilled in identifying or “sniffing out” the “questionable,” i.e. claims which are potentially vulnerable, and whose rejection may have important or useful implications.

3. Chase challenges

We all know that feeling of instant irritation or indignation when somebody dares to suggest we might be wrong about something. Highly critical thinkers have cultivated various habits counteracting this reaction – habits which actually lead to them being challenged more often, and benefiting more from those challenges.

For example, while we mostly seek and enjoy the company of those who share our views, highly critical thinkers make an effort to engage those of a contrary opinion, tactfully eliciting their objections. And when fielding such challenges, highly critical thinkers resist the instinct to ignore, reject or rebut. They will be found doing such seemingly perverse things as rephrasing the objections to be sure of understanding them, or even recasting them to make them more powerful. Charlie Munger is quoted as saying “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Another habit of highly critical thinkers is reading widely, and especially reading from sources likely to present good quality contrary views and arguments. Finding themselves drawn to a position (e.g., that William Shakespeare of Stratford was unlikely to have been the author of the works attributed to him) highly critical thinkers seek out the best presentations of the orthodox position. In short, they strive to test or “prove” their views, rather than support or defend them.

4. Ascertain alternatives

Highly critical thinkers are always mindful that what they see before them may not be all there is. They habitually ask questions like: what other options are there? What have we missed? As opposed to/compared with what? They want to see the full range of relevant alternatives before passing judgement.

For example, when considering a difficult decision, they put extra effort into searching for – or creating – courses of action outside the standard, provided or obvious ranges. When trying to explain why something happened, they will allocate more time than most people do to expanding the range of hypotheses under consideration. In a negotiation, they seek to develop new, mutually acceptable solutions rather than “horse-trading” on existing positions.

5. Make use of methods

When considering a course of action, a critical thinker of my acquaintance, who happened to be a successful banker and company director, said she always asked herself two simple questions: (1) what’s the worst thing that could happen here? and (2) what’s the best thing that could happen? The first question prompts us to search for potential drawbacks a bit more thoroughly than we might otherwise have done. The routine amounts to a rudimentary (or “fast and frugal”) risk analysis.

This example illustrates how highly critical thinkers habitually deploy suitable methods to structure their thinking and improve their conclusions. Another example: in psychology department colloquia I used to attend, participants, after hearing a colleague present their work, would reflexively use a method I call scenario testing. This involves diligently and creatively searching for scenarios in which their colleague’s conclusions are false, even though their premises (data) are true. To the extent that plausible scenarios of this kind can be identified, the inferences from the premises to the conclusions are suspect.

There are literally scores of methods one might use. Some, like the rudimentary risk analysis mentioned above, are simple and informal, and can be quickly learned and exploited by almost anyone. Others are elaborate, technical and may require specialist training (e.g., rigorous argument mapping, or full quantitative risk analysis). Generally, the more sophisticated the method, the less widely it is used, even by the most highly critical thinkers. Every such thinker has built up their own repertoire of methods. What’s most important is not so much their particular selection, but the fact that they habitually deploy a wider range of methods, more often, than ordinary folk.

6. Take various viewpoints

Highly critical thinkers well understand that their view of a situation is unique, partial and biased, no matter how clear, compelling and objective it seems. They understand that there will always be other perspectives, which may reveal important aspects of the situation.

Of course, most people appreciate these points to some degree. The difference is that highly critical thinkers are especially keen to profit from a more complete understanding, and so have cultivated various habits of actually occupying, as best they can, those other viewpoints, so as to see for themselves what additional insights can be gained.

One such habit is trying to “stand in the shoes” of a person with whom we may have some conflict, or are inclined to criticise. Another is to adopt the persona of a person, perhaps a hypothetical person, who strongly disagrees with your views, and to argue against yourself as strongly as they would. A third (relatively rare) is to take the perspective of your future self, having found out that your current position turned out to be wholly, and perhaps disastrously, wrong. What do you see, from the future, that you are missing now?

7. Sideline the self

People tend to be emotionally attached to views. Core beliefs, such as provided by religions or ideologies, help provide identity, and the comforts of clarity and certainty. Sometimes pride binds us to positions; having publicly avowed and defended them previously, it would be humiliating to concede we were wrong. Highly critical thinkers have habits which help to sever these emotional bonds between self and beliefs, allowing the thinker to discard or modify beliefs as indifferently as a used car dealer will trade vehicles. Highly critical thinkers have in other words learned how to sideline the self, removing it from the field of epistemic play.

One habit is to avoid verbally identifying oneself with positions by using distancing locutions. Instead of saying things like “It’s obvious to me that…” they will say things like “One plausible position is that…”. A similar technique is to give positions names. Instead of boldly asserting that Shakespeare must have written the works, publicly committing yourself to this view, say “According to the Stratfordian view…”.


To be continued…

This post is already much longer than originally intended, but still leaves much unsaid. A few quick final points:

  • The current list can’t claim to be definitive. Others may well come up with different lists.
  • It is also a work in progress. I hope to elaborate each of the major habits in separate posts.
  • Clearly much more could be said about the notion of a habit, and the somewhat paradoxical character of critical thinking habits, which generally involve automatically (“without thinking about it”) engaging in thinking activities.
  • This list is not based on rigorous empirical research, though in places it is informed by such research. There is much scope for scientific clarification here. Tetlock’s studies provide an impressive model.

Comments are most welcome.

