Sunday, August 14, 2011

Are Your Business Intelligence Systems Complete?

I've been concerned lately about the state of business intelligence (BI) systems I've seen implemented in the wilds of business.

Currently, BI systems are essentially accounting and reporting systems, often accompanied by very sophisticated data visualization. That is, they show current and past states of collected information. From this information, trends, correlations, and anomalies can be observed, but the systems don't say what those patterns mean to stakeholders. Do the trends and anomalies imply that something important is happening and deserves attention, or are they simply passing stochastic fluctuations? If they do need attention, what should the action be:
  • Correction/mitigation - to bring the system back into control or avoid undesirable outcomes 
  • Exploitation - to take advantage of new opportunities to gain competitive advantage?
How do you decide?

Furthermore, I suspect another risk lurks within the use of BI systems, one related to cognitive biases. Quite a bit of research shows that there is an optimum amount of information executives need to make well-formed decisions. With too little, executives are exposed to unfortunate surprises for lack of critical information. Acting on recent or vivid available information is called, unsurprisingly, recency and availability bias. It's easy to help people see why these biases are so pernicious: just remind them of events they didn't anticipate, or show them how overweighting extreme events or emotionally vivid information is likely costing them too much money.

The other extreme is having too much information, and it's much more difficult to convince people that this is a problem. Here the executive has so much information that they place higher and higher confidence in the validity of their reasoning, while the research shows that their performance doesn't improve in a commensurate way. This is often difficult to explain because people typically don't tie the success or failure of their initiatives to the level of information they used when they made their decisions, and the time lag between when decisions are made and when results are measured often extends beyond the memory, and even the tenure, of the participants. This latter cognitive failure is related to overconfidence bias. (For a longer list of cognitive biases, go here.)

The effects of the interaction of BI systems with the various cognitive biases are:
  • A failure to create a shared understanding of how goals and objectives work together to create value, leading to frustrating ambiguity about the real reasons for taking corrective or exploitative action
  • Only the "tangible" costs and benefits are estimated, leaving the fuller range of "intangible" costs and benefits unquantified, treated only in qualitative manner, or disregarded altogether
  • The full range of business uncertainty and risk is often overlooked or not understood, leading first to endless discussions about assumptions and forecasts, and finally to unanticipated outcomes and continual rework or unrealized value
  • Decision prioritization is based on politics rather than a quantified value to the business 
  • Trade-offs between decision timing, optionality, and value are ignored
So, when you do decide to take some corrective or exploitative action, how do you know that the actions you take are the most valuable ones and not merely satisficing decisions or, worse, inconsistent and incoherent ones?

What is missing is an intelligent decision management system that guides decision makers consistently through the thorny issues of what to do about the trends and anomalies reported by their BI systems. An intelligent decision management system will do at least four things that reporting systems coupled with unguided thinking cannot do:
  1. Synthesize seemingly unrelated information 
  2. Abstract information into requisite models that include characterizations of the appropriate uncertainties, controlled for bias
  3. Compare/contrast possible strategies to address trends and anomalies against the stakeholders’ subjective preferences 
  4. Interpret results of the analysis into competitive responses 
It does not throw out the BI systems. It complements them. It provides an executive monitoring and feedback loop into the current state and trajectory of an enterprise.
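To make item 3 concrete, here is a minimal, purely illustrative sketch of how such a system might compare a corrective strategy against an exploitative one under uncertainty, scored by a stakeholder's subjective preferences. The strategy names, distributions, and utility function are all assumptions of mine, not a prescribed method.

```python
import math
import random

def expected_utility(strategy, utility, n_trials=10_000, seed=0):
    """Monte Carlo estimate of a strategy's expected utility."""
    rng = random.Random(seed)
    return sum(utility(strategy(rng)) for _ in range(n_trials)) / n_trials

def mitigate(rng):
    # Corrective action: modest but stable payoff.
    return rng.gauss(1.0, 0.2)

def exploit(rng):
    # Exploitative action: higher mean payoff, far more uncertainty.
    return rng.gauss(1.5, 2.0)

def risk_averse_utility(x):
    # A concave utility encodes a stakeholder who dislikes downside risk.
    return 1.0 - math.exp(-x)

# Rank the candidate strategies by expected utility, best first.
ranking = sorted(
    {"mitigate": mitigate, "exploit": exploit}.items(),
    key=lambda kv: expected_utility(kv[1], risk_averse_utility),
    reverse=True,
)
```

For this risk-averse stakeholder the corrective action ranks first even though the exploitative one has the higher mean payoff; swap in a risk-neutral utility and the ranking reverses. That sensitivity to stated preferences is exactly what unguided reading of a dashboard cannot provide.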

Saturday, August 13, 2011

Thinking about Thinking

I'm just thinking "out loud" here...about thinking.

Intelligence is a kind of measurement of the quality of problem solving.

Agents are systems that have at least one preferred state. A simple agent possesses one or many preferred states, but they aren't connected. Complicated agents have at least two preferred states, and the achievement of one increases the likelihood of achieving the other; indeed, one state cannot be achieved unless a predecessor state is achieved. The preferred states are hierarchical in nature. A complex agent is one whose preferred states are interdependent, operating as a network of self-supporting preferences. These are complex systems in which one of the preferred states feeds back into another.

When an agent is perturbed from its preferred state, or its preferred state changes relative to the one currently occupied, a problem occurs that must be solved. A problem is a deviation from a preferred state. Problem complexity is determined by the number and timing of the coordinated activities required to solve the problem; i.e., to restore the agent to its preferred state. Therefore, an agent is a system with one or more preferred states that solves problems.
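The definition above is simple enough to state in code. This is just a toy sketch of my own, with a one-dimensional state and an arbitrary convergence rule, but it captures the three pieces: a preferred state, a problem as a deviation from it, and problem solving as the coordinated activity that removes the deviation.

```python
class Agent:
    """A system with a preferred state that solves problems
    (deviations from that state)."""

    def __init__(self, preferred, tolerance=1e-6):
        self.preferred = preferred
        self.state = preferred
        self.tolerance = tolerance

    def has_problem(self):
        # A problem is a deviation from the preferred state.
        return abs(self.state - self.preferred) > self.tolerance

    def perturb(self, amount):
        self.state += amount

    def solve(self, step=0.5):
        # Move back toward the preferred state; each iteration is one
        # "coordinated activity", so the count returned is a crude proxy
        # for the complexity of the problem just solved.
        steps = 0
        while self.has_problem():
            self.state += step * (self.preferred - self.state)
            steps += 1
        return steps
```

A larger perturbation takes more steps to resolve, which matches the claim that complexity is measured by the activities required for restoration.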

Intelligence is the measure of the ability of an agent to solve problems relative to that of other agents. It has five(?) dimensions to it: speed, cost, soluble limit, elegance, and abstraction.

  1. Speed: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem faster than the other agent.  In this case, intelligence is a function of "clock speed."  More intelligent agents find ways to operate faster.
  2. Cost: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem with fewer tokens of cost.
  3. Soluble limits: For two agents solving novel problems in the same amount of time, the agent with the greater intelligence solves the more complex problem.
  4. Elegance: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem with fewer computational or execution steps.  This is a measure of insight.  The agent sees through the clutter and noise of the problem to the simplest of solutions.  The net result might be faster computation, but not because step-wise operations are performed faster (e.g. due to higher clock speed), but because fewer operations are performed at a given clock speed.
  5. Abstraction: For two agents facing a similar problem again, the agent with the greater intelligence solves the problem faster than the other agent. This may sound like a repetition of the first measure, but it is really a measure of the memory system that permits recall and comparison. Less intelligent agents face more problems as novel (from their own perspective) over an equivalent life span than more intelligent agents do. The more intelligent agent observes similarities across problems and reuses prior solutions. Given this, the intelligence of an agent at time t can be compared to its own intelligence at t-k. A learning agent, then, is one that improves its intelligence over time because it can recall and abstract problem characteristics to other problems.
  6. (Are there others?)
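Dimension 5 lends itself to a small sketch: an agent that caches solutions keyed by problem characteristics solves a repeated problem in far fewer steps than a novel one. The "problem" and its step counts here are stand-ins I invented for illustration, nothing more.

```python
def solve_from_scratch(n):
    """Stand-in for expensive problem solving: count the halving
    steps needed to reduce n to zero."""
    steps = 0
    while n > 0:
        n //= 2
        steps += 1
    return steps

class LearningAgent:
    """An agent whose memory lets it reuse prior solutions."""

    def __init__(self):
        self.memory = {}  # abstraction: problem signature -> prior solution

    def solve(self, problem):
        """Return (solution, cost in steps)."""
        if problem in self.memory:
            return self.memory[problem], 1  # recall costs a single step
        solution = solve_from_scratch(problem)
        self.memory[problem] = solution
        return solution, solution  # novel: cost equals the full computation
```

On the second encounter with the same problem the cost collapses to one step, which is the sense in which the agent's intelligence at time t exceeds its own intelligence at t-k.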

We shouldn't think of intelligence as something that necessarily occurs in nervous systems. Intelligence is the quality of any goal-seeking system to achieve its preferred goals, usually in comparison to similar agents. Thus, a gazelle that is capable of escaping a stalking lion more quickly than an aardvark is more intelligent regardless of the cognitive effort employed. The intelligence may result in the ability to run faster than aardvarks. The intelligence is not a measure of a specific gazelle's capabilities, but that of the gazelle system that produces gazelles versus the aardvark system that produces aardvarks. Of course, aardvark systems have produced solutions to the problem of stalking lions that gazelle systems have not found.

Here are a few questions I have about human intelligence.

  1. Is there a limit to the kinds of problems humans can solve?
  2. Is there a limit to the kinds of problems any agent can solve?
  3. People commonly referred to as “idiot savants” are those who seem to be able to solve fantastically difficult problems with little effort, but the type of problem solving is of a particularly isolated kind. Would it be possible to isolate the characteristics of cognitive development that allow for this concentrated effort? Then would it be possible to extend that development to a wider range of problem kinds?
  4. The history of the world's intelligences seems to be characterized by systems of evolutionary genetic organic chemical systems. Gene systems solve environmental problems of survivability for gene populations. Humans seem to represent a peak of genetic problem solving capabilities in the form of complex nervous systems that have the ability now to ask questions about their own capabilities to solve problems. Sophisticated nervous systems solve problems much more efficiently than genetic systems alone. Has the problem solving ability of nervous systems, itself the product of genetic systems, now found the layer of problem solving for a gene population that doesn't require genetic rules to continue to find solutions to its environmental problems of survivability? In other words, is it possible that human intelligence is now circumventing its own genetic evolution, even to the point that genetic evolution will be unnecessary?
  5. What if one problem brought on by self awareness (a genetic solution to another survivability problem) is the awareness of death? The preference for self aware systems might be to avoid death. Would it be possible for self aware systems to solve the problem of terminating self awareness (death) by engineering a mechanism by which awareness exists beyond the current genetically determined neural solution?

Thursday, July 07, 2011

A Hard Pill to Swallow

Consider this:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness. That to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed.

Three days ago I reaffirmed my belief in the historically radical idea that people have an innate, unalienable right to life, liberty, and the pursuit of happiness. I ate some barbecue and ignited some small scale incendiary devices with good friends to commemorate this ideal. These banal means of celebration are actually more profound than I think we usually give thought to. They represent the culmination of society's progress. I mean that. Stay with me here.

If we are going to put an end to a person's life, we had better have airtight knowledge that the person committed an act of such unconscionable magnitude that society itself cannot tolerate that person's continued existence; i.e., that society's effective operation and possible existence is jeopardized by that person's continued existence. That's a tall criterion to satisfy.

Particularly, what I'm driving at is this: the death of Caylee Anthony is a tragedy. There is no doubt in my mind that the person responsible for her death or responsible for her wellbeing at the time of her death has evaded justice. That is a hard pill to swallow for a society that loves justice. But if we as a society, via the jury in this case, had convicted her mother, Casey Anthony, and sentenced her to death while a plausible doubt existed about her actual guilt, much less her actual involvement in the death, we would double the harm done to our society in this affair. That would be irrational, which means that it would be arbitrary. And if arbitrariness rules the day, then society actually poses a threat to itself. This is not just high-minded idealism. It literally means no more barbecue in the comfort of sedate neighborhoods and shady cul-de-sacs. Really. The barbarians would gather their hordes again. I don't know how long it would take, but it would eventually happen. The arc of cultural progress over the last 2,500 years has converged on keeping at bay the irrational judgements of the hordes and the superstitious through judicial progress, a situation I suspect we have mostly forgotten. Would we now jeopardize that progress to satisfy a need for emotional closure, even a justifiable one? Of course, it's more than barbecue and fireworks. Those are just symbols for something more important. They are a type of communion that commemorates a greater ideal. Not only would we now find ourselves threatening the life of another person, we would be threatening the lives of all of us, dishonoring the unalienable rights we hold dear.

Life is full of ambiguous situations, and we often make decisions facing a great deal of uncertainty and ambiguity. We usually try to do the best we can given the information we possess at the moment of decision, and then we make corrections along the way. We even anticipate that corrections will be needed because we know that life is subject to ever evolving influences. In some cases, though, decisions produce totally irreversible consequences* - such as executing a person convicted of a crime. Yet we want and need justice to be served. How do we proceed?

Suppose you face making a decision that impacts someone else directly. Ask yourself: would you be willing to submit to the same information conditions and outcomes the recipient of your decision faces? Now, suppose you had a gun that could deliver that outcome to you. The gun is omniscient. It knows the absolute truth of the case in question. In the moment of decision, you pull the trigger. If your judgment is coherent with the knowledge of the gun, you walk away unharmed. But if you are wrong, the gun delivers the same consequence that you would dish out. If you believe your judgment is certainly correct, then pulling the trigger is no problem. But if there is any doubt in your mind that you are correct in your judgment, and, more importantly, that you cannot tolerate the outcome, you might delay pulling the trigger or just defer altogether. When doubt exists, when the stakes are irreversible, deferring our sense of satisfying justice may deliver a higher degree of actual justice.
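The thought experiment has a standard decision-theoretic reading, which I'll sketch here with numbers that are entirely illustrative: compare the expected loss of each action, weighting a wrongful, irreversible outcome far more heavily than a grave but reversible one.

```python
def expected_loss(p_guilty, loss_if_wrong_convict, loss_if_wrong_acquit):
    """Expected loss of each action given a belief in guilt.

    Convicting is only wrong when the person is innocent;
    acquitting is only wrong when the person is guilty.
    """
    return {
        "convict": (1 - p_guilty) * loss_if_wrong_convict,
        "acquit": p_guilty * loss_if_wrong_acquit,
    }

# Illustrative weights: a wrongful execution is irreversible, so it is
# assigned a much larger loss than letting a guilty person go free.
losses = expected_loss(
    p_guilty=0.80,
    loss_if_wrong_convict=100.0,
    loss_if_wrong_acquit=10.0,
)
```

With these (made-up) weights, even an 80% belief in guilt leaves conviction with the larger expected loss, which is the arithmetic behind deferring when doubt exists and the stakes are irreversible.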

I don't know whether Casey Anthony killed her daughter or not. While I may harbor suspicions, that's all I have. My suspicions have been wrong in the past, and sometimes with embarrassing consequences. Fortunately for me, the stakes associated with my judgements based on poorly formed suspicions have usually been relegated to some loss of face, property, or relationships. In this case, a jury held the LIFE of another person in their consideration. Given the emotional content of the case and the ambiguous evidence brought to trial, as best as I can tell, the jury behaved in a circumspect manner that is commensurate with a society that values reason and maximizing justice. The jury behaved in a manner that recognized that pulling the trigger on the truth gun possibly led to an intolerable outcome - another (potentially) innocent person losing her life.

Unserved justice is a hard pill to swallow. But mis-served justice is a bigger travesty. A rational justice system operates on this idea.

*Actually, all decisions involve irreversibility because all actual decisions occur in time. While a perturbed bearing or position might be eventually restored, the time required to regain it cannot be.

Tuesday, March 08, 2011

"Tactics Are the New Strategy"? Only if you want to be whipsawed

Inc. Online published an article on Feb 24, 2011 entitled "Tactics Are the New Strategy" and opened the article with the following: "Strategic management can be a huge time drain for managers. Why not just ditch the conventional wisdom and go with your intuition in order to innovate once in a while?"

It sounds fun, and certainly less constraining than the pathology the article describes, but we take issue with some key points in the article.

Unfortunately, many managers who "devote an excessive amount of time developing, researching, and validating their strategy" may have fallen victim to a pathology commonly referred to as "analysis paralysis". This pattern of behavior seems to be based on a belief that every detail of execution can be planned beforehand, forecast precisely, and executed without hindrance. This micromanagement approach may be called strategic planning, but it is no substitute for real strategic thinking. Companies that commit themselves to this approach frequently destroy value by missing opportunity windows or simply wearing out people's patience.

However, the other end of the spectrum from "analysis paralysis" is really just a series of ad hoc tactics that make sense in the moment. Without strategic alignment, these actions can whipsaw an organization into a kind of swirling motion that results in no real progress. Wins, when they occur, may be more a product of luck than strategic design and are rarely sustainable. Companies that follow this approach to management often destroy value because they don't anticipate important events that require mitigation or off-ramps from their current course of action. And probably worse, they don't have the framework to construct the type of creative hybrid strategy that develops from alignment of the combined wisdom and various internal viewpoints. The players with these viewpoints may remain adversarial to each other or even in open conflict if they are independently carrying out their own ad hoc tactical approaches.

A third approach that provides managers amazing flexibility is outlined in our presentation titled "Quantifying - Not Assuming - Making Sense of Intangibles, Uncertainties and Risks." Our approach leads to effective strategy alignment and decisions in days and weeks compared to the months or years of the "analysis paralysis" type of strategic planning. It also avoids the ad hoc tactical approach of "flying by the seat of your pants" with its related risks. The net effect is that our approach actually accelerates value creation because managers quickly find a strategic theme to align informed and efficient tactical execution while avoiding the rework that inevitably arises from a "ready, fire, aim" approach that lacks clear strategic alignment.

Monday, January 24, 2011

Why Almost Everything You Hear About Medicine Is Wrong

This is the second article I've read recently about Dr. John P.A. Ioannidis. The implications are staggering.

The Atlantic published an earlier article about Ioannidis: "Lies, Damned Lies, and Medical Science".

Now, if medical research can go so wrong, how likely is it that business research suffers from many of the same failures?

Tuesday, January 18, 2011

"The Heroes of Freedom" by Lawrence Reed

The Foundation for Economic Education (FEE) has been linked on this blog for several years now. It provides a great set of resources for students of economic thought and liberty.

I just found out today that FEE has opened an office in Atlanta, GA, and that President Lawrence Reed will be hosting a talk in my hometown of Newnan, GA on February 15. You can obtain more details here.

I am really looking forward to this.

Wednesday, January 12, 2011

The Tau Manifesto

Or Tauism - seeking the way of tau
by Michael Hartl

Just for fun on a cold winter's afternoon.

Tuesday, January 11, 2011

Have you tested your strategy lately?

McKinsey offers some useful tests to determine the quality of a corporate strategy. But do you know HOW to create a strategy that passes the tests?

Here are four of the ten tests. They aren't necessarily the most important, but the issues they address crop up with EVERY client I serve. My suspicion, then, is that those same issues crop up in every organization.

Test 5: Does your strategy rest on privileged insights?
Test 6: Does your strategy embrace uncertainty?
Test 7: Does your strategy balance commitment and flexibility?
Test 8: Is your strategy contaminated by bias?

So, I ask...

  • How do you gain insights where others don't?
  • How do you handle uncertainty in the decision making process of strategic planning?
  • Do you quantify the effects of uncertainty? Do you know how to?
  • Do you know how much uncertainty leads to potential risk?
  • How do you make trade-offs between the value of gathering better information and the cost of doing so?
  • How do you make trade-offs between the value of gaining more control over uncertain events and the cost of doing so?
  • Do you attempt to optimize strategies (exposing them to fragility) or make them robust to a wide range of events?
  • How do you control for the effects of bias and avoid committing to favorite strategies versus better ones?
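One standard tool behind the information-versus-cost question above is the expected value of perfect information (EVPI): never pay more for information than the improvement it could possibly buy. Here is a small sketch; the two strategies, payoffs, and probabilities are illustrative numbers of my own, not anything from the McKinsey tests.

```python
def evpi(payoffs, probs):
    """Expected value of perfect information.

    payoffs[strategy] is a list of payoffs, one per scenario;
    probs[i] is the probability of scenario i.
    """
    # Best expected payoff when committing to one strategy
    # before the uncertainty resolves:
    best_without = max(
        sum(p * payoffs[s][i] for i, p in enumerate(probs))
        for s in payoffs
    )
    # Best expected payoff if the scenario were known in advance,
    # letting you pick the best strategy for each scenario:
    best_with = sum(
        p * max(payoffs[s][i] for s in payoffs)
        for i, p in enumerate(probs)
    )
    return best_with - best_without

payoffs = {
    "optimize": [120.0, -40.0],  # fragile: great in scenario A, bad in B
    "robust":   [70.0,  60.0],   # solid in both scenarios
}
probs = [0.6, 0.4]
```

If a market study or pilot costs more than `evpi(payoffs, probs)`, it cannot pay for itself no matter how good it is; the same framing, with imperfect information, gives the trade-off between control over uncertain events and its cost.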