
Archive for the ‘Datacentrism’ Category

Note:  this post first appeared on another blog back in 2008.  I’m reposting it here now because this blog is its natural home and because its main points appear to need frequent reiterating… 

________________________________

I’ve long had a suspicion that, just as knowledge management generally isn’t concerned with knowledge, so “business intelligence” is not really concerned with intelligence. Rather, in both cases, they’re primarily concerned with the management of information; the hard “knowledge” or “intelligence” part is usually left for the user. This is not to deny that knowledge management and business intelligence are useful; of course they are, in their own ways. It is just that there is a gap between their somewhat grand self-designations and the somewhat more mundane reality of what they do.

At BI Questions Blog, there is a video of Timo Elliott giving an interesting overview of business intelligence (BI) and what contemporary BI suites such as Business Objects can deliver in this area. It is quite an eye-opener in many ways, and well worth watching.

It gives a nice opportunity to elaborate the point about BI skipping over the “I” or intelligence part.

Here is TE describing what BI is fundamentally about:

So what do we mean by business intelligence success? Well at a very high level it means getting four things right:

  1. First, you need to be able to tame information chaos. You’ve got lots of different information in lots of different systems, structured data, unstructured data, documents, emails, there’s the web out there; so bringing all that information into one coherent structure so you can start doing something with it.
  2. Of course having information alone is useless; you need to turn that into insight. You need tools that actually let you look around and drill into the data and report it out to all of the people in the organization who need it.
  3. The third thing you need to do is turn that insight into action. Looking at a report is useless. Something has to change in the business in order to get any value. Information has to be actionable. So to help you turn insight into action we provide a suite of business applications, in particular for the office of finance, to help you with financial planning, budgeting, consolidation, profitability and cost analysis – a whole suite of tools that really let you start doing things with the information.
  4. Now so far we could get through all of those stages and have optimum performance, we could have fantastic figures; but unfortunately we might get fantastic figures the way, say, Enron did. So we need one more block, which is governance, risk, compliance…

The accompanying visual is this:

But there’s something missing from this picture. In non-trivial or non-routine cases, you can’t (or shouldn’t) skip directly from insight to action. Insight, in TE’s description of it, appears to be a richer, more synthesized, more accessible form of information; it is what you’ve got when you’ve used their tools to “look around and drill into the data and report it out.” Between insight, in this sense, and action there have to be processes of assessing, deliberating, integrating, weighing, and choosing – in short, there has to be decision.

Decision making is the crucial bridge between information (even quality information, i.e. insight) and action.

To illustrate with an example drawn from a later part of TE’s presentation: using a nifty Business Objects component, he shows a chart of Disney corporate revenues over a few years. He then puts on the same chart a line showing the performance of the US economy over the same period; and he then shows the difference between the two:

Impressive! If you look closely, it seems that Disney does poorly in Q1 each year. It certainly does seem that we’ve instantly got some greater insight.

But what follows from this insight? What should Disney do? Close down in Q1 each year? Increase advertising in Q1 – or Q4? Lower prices? Hire more staff? Fire more staff? Nothing?

Before any action, you’d have to decide which action was most appropriate in the circumstances. The insight we obtained (and no doubt numerous others we could get from our wonderful BI suite) would surely help. But insights, no matter how penetrating or how numerous, don’t dictate any particular decision. The decision is generally made through a deliberative, usually collaborative process in which insights are translated into arguments and arguments are assessed and weighed.

It is this deliberative decision process, so pervasive in business, that is missing not only from the TE box graphic above but, it seems, from the whole BI mindset.

I’m reminded of the famous cartoon of two scientists:

Just as there are complex formulas on either side of the crucial “step two” on the blackboard, so business intelligence suites, it seems, provide technical power on either side of the “miracle” of human deliberative decision.

What business intelligence suites (and knowledge management systems) seem to lack is any way to make the thinking behind core decision processes – the “step two” in moving from information to action – more explicit.

_____________________________

Timo Elliott responded:

Thanks for watching the presentation! I completely agree that BI needs more support for collaboration and the way people really make decisions. The only thing I’d argue with is where to put it — rather than being “between” insight and action, I think it’s essential all the way along the spectrum… You need to collaborate/decide what information is relevant, what insights are the correct ones, what actions are most appropriate, what controls to put in place, etc… Regards, Timo

________________________________

A common decision making trap is thinking more data = better decision – and so, to make a better decision, you should go out and get more data.  

Let’s call this the datacentric fallacy.  

Of course there are times when you don’t have enough information, when having more information (of the right kind) would improve the decision, and when having some key piece of information would make all the difference.  

Victims of datacentrism, however, reflexively embark on an obsessive search for ever more information.  They amass mountains of material in the hope that they’ll stumble across some critical piece, or critical mass, that will suddenly make clear what the right choice is.  But they are usually chasing a mirage.  

In their addiction to information, what they’re neglecting is the thinking that makes use of all the information they’re gathering.  

As a general rule, quality of thinking is more important than quantity of data.  Which means that you’ll usually be better rewarded by putting any time or energy you have available for decision making into quality control of your thinking rather than searching for more/better/different information.

Richards Heuer made this point in his classic Psychology of Intelligence Analysis.  Indeed he has a chapter on it, called Do You Really Need More Information? (Answer – often, no.  In fact it may hurt you.) 

A similar theme plays out strongly in Phil Rosenzweig’s The Halo Effect… and the Eight Other Business Delusions That Deceive Managers. Rosenzweig provides a scathing critique of business “classics” such as In Search of Excellence, Good to Great and Built to Last, which purport to tell you the magic ingredients for success.  

He points out how, in such books, the authors devote much time and effort to boasting about the enormous amount of research they’ve done, and the vast quantities of data they’ve utilised, as if the sheer weight of this information will somehow put their conclusions beyond question.  

Rosenzweig points out that it doesn’t matter how much data you’ve got if you think about it the wrong way.  And think about it the wrong way they did, all being victims of the “halo effect” (among other problems).  In these cases, they failed to realise that the information they were gathering so diligently had been irretrievably corrupted even before they got to it.  

Another place you can find datacentrism running rampant is in the BI or “business intelligence” industry.  These are the folks who sell software systems for organising, finding, massaging and displaying data in support of business decision making.  BI people tend to think decisions fall automatically out of data, and so presenting more and more data in ever prettier ways is the path to better decision making.

Stephen Few, in his excellent blog Visual Business Intelligence, has made a number of posts taking the industry to task for this obsession with data at the expense of insightful analysis.  

The latest instance of datacentrism to come my way is courtesy of the Harvard Business Review.  I’ve been perusing this august journal in pursuit of the received wisdom about decision making in the business world.  In a recent post, I complained that the 2006 HBR article How Do Well-Run Boards Make Decisions? told us nothing very useful about how well-run boards make decisions.  

I was hoping to be more impressed by the 2006 article The Seasoned Executive’s Decision Making Style.  The basic story here is that decision making styles change as you go up the corporate ladder, and if you want to continue climbing that ladder you’d better make sure your style evolves in the right way.  (Hint: become more “flexible.”) 

In a sidebar, the authors make a datacentric dash to establish the irrefutability of their conclusions:

For this study, we tapped Korn/Ferry International’s database of detailed information on  more than 200,000 predominantly North American executives, managers, and business professionals in a huge array of industries and in companies ranging from the Fortune 100 to startups. We examined educational backgrounds, career histories, and income, as well as standardized behavioral assessment profiles for each individual. We whittled the database down to just over 120,000 individuals currently employed in one of five levels of management from entry level to the top.  We then looked at the profiles of people at those five levels of management. This put us in an excellent position to draw conclusions about the behavioral qualities needed for success at each level and to see how those qualities change from one management level to another.

120,000.  Wow. 

They continue:

These patterns are not flukes. When we computed standard analyses of variance to determine whether these differences occurred by chance, the computer spit out nothing but zeroes, even when the probability numbers were worked out to ten decimal points.  That means that the probability of the patterns occurring by chance is less than one in 10 billion. Our conclusion: The observed patterns come as close to statistical fact (as opposed to inference) as we have ever seen.

This seems too good to be true.   Maybe their thinking is going a bit off track here?  

I ran the passage past a psychologist colleague who happens to be a world leader in statistical reform in the social sciences, Professor Geoff Cumming of La Trobe University.  I asked for his “statistician’s horse sense” concerning these impressive claims.  He replied [quoted here with permission]:

P-value purple prose! I love it!

Several aspects to consider. As you know, a p value is Prob(the observed result, or one even more extreme, will occur|there is no true effect). In other words, the conditional prob of our result (or more extreme), assuming the null hypoth is true.

It’s one of the commonest errors (often made, shamefully, in stats textbooks) to equate that conditional prob with the prob that the effect ‘is due to chance’. The ‘inverse probability fallacy’. The second last sentence is a flamboyant statement of that fallacy. (Because it does not state the essential assumption ‘if the null is true’.)

An extremely low p value, as the purple prose is claiming, often in practice (with the typical small samples used in most research) accompanies a result that is large and, maybe, important. But it no way guarantees it. A tiny, trivial effect can give a tiny p value if our sample is large enough. A ‘sample’ of 120,000 is so large that even the very tiniest real effect will give a tiny p. With such large datasets it’s crazy even to think of calculating a p value. Any difference in the descriptive statistics will be massively statistically significant. (‘statistical fact’)

Whether such differences are large, or important, are two totally different issues, and p values can’t say anything about that. They are matters for informed judgment, not the statistician. Stating, and interpreting, any differences is way more important than p-p-purple prose! 

So their interpretation of their data – at least, its statistical reliability – amounts to a “flamboyant statement” of “one of the commonest errors.” Indeed according to Geoff it was “crazy to even think of” treating their data this way.  
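Geoff’s point about sample size is easy to see with a quick back-of-the-envelope calculation. The numbers below are made up for illustration (they are not the study’s data), but they show how, with a sample in the hundred-thousands, even a practically negligible group difference yields a p-value that would print as “nothing but zeroes”:

```python
# Sketch with assumed numbers: at n = 120,000, a trivially small
# difference between two groups still produces a minuscule p-value.
import math

def two_sample_z_p(effect_sd, n_per_group):
    """Two-sided p-value for a two-sample z-test with unit SDs,
    for an effect expressed in standard-deviation units."""
    z = effect_sd / math.sqrt(2.0 / n_per_group)
    # Two-sided tail probability of the standard normal, via erfc
    return math.erfc(abs(z) / math.sqrt(2.0))

# A 0.05-SD difference -- far too small to matter in practice
p = two_sample_z_p(0.05, 60_000)  # two groups of 60,000 = 120,000 total
print(f"p = {p:.1e}")             # tiny: zeroes well past ten decimal points
```

The same 0.05-SD difference tested on two groups of 100 gives a p-value above 0.7: the “statistical fact” is a property of the sample size, not of the effect’s importance.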

The bulk of their article talks about the kinds of patterns they found, and maybe their main conclusions hold up despite the mauling of the statistics.  Maybe.   Actually I suspect their inferences have even more serious problems than committing the inverse probability fallacy – but that’s a topic for another time.  
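The inverse probability fallacy itself can be made concrete with a small Bayes calculation. All the numbers below are assumptions chosen for illustration; the point is only that P(result | null true) and P(null true | result) are different quantities:

```python
# Illustrative numbers only: the p-value threshold is
# P(significant | null true), NOT P(null true | significant).
alpha = 0.05       # P(significant result | null is true)
power = 0.80       # P(significant result | a real effect exists)
prior_real = 0.10  # assumed prior probability that a real effect exists

# Bayes' rule: probability the null is true GIVEN a significant result
p_sig = alpha * (1 - prior_real) + power * prior_real
posterior_null = alpha * (1 - prior_real) / p_sig
print(f"P(null | significant) = {posterior_null:.2f}")  # prints 0.36
```

Under these assumptions, a “significant” result at the 0.05 level still leaves a 36% chance that there is no real effect; reading the p-value as “the probability the pattern occurred by chance” conflates the two conditional probabilities.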

In sum, beyond a certain point, the sheer volume of your data or information matters much less than thinking about it soundly and insightfully.  Datacentrism, illustrated here, is a kind of intellectual illness which privileges information gathering – which is generally relatively easy to do – over thinking, which is often much harder.
