Sunday, August 14, 2011

Are Your Business Intelligence Systems Complete?

I've been concerned lately about the state of business intelligence (BI) systems I've seen implemented in the wilds of business.

Currently, BI systems are essentially accounting and reporting systems, often accompanied by very sophisticated data visualization. That is, they show current and past states of collected information. From this information, trends, correlations, and anomalies can be determined and observed, but the systems don't say what those patterns mean to stakeholders. Do the trends and anomalies imply that something important is happening and deserves attention, or are they simply passing stochastic fluctuations? If they do need attention, what should the action be:
  • Correction/mitigation - to bring the system back into control or avoid undesirable outcomes 
  • Exploitation - to take advantage of new opportunities to gain competitive advantage?
How do you decide?
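
To make the "noise or signal?" question concrete, here is a minimal sketch in Python of the kind of naive control-limit check one might run over a reported metric. The metric, the numbers, and the three-sigma threshold are all made up for illustration; a real system would need something far more considered.

    import statistics

    def flag_signals(history, recent, sigma_limit=3.0):
        # Naive Shewhart-style check: flag any recent observation that falls
        # outside mean +/- sigma_limit standard deviations of the baseline.
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        lower, upper = mean - sigma_limit * stdev, mean + sigma_limit * stdev
        return [(x, not (lower <= x <= upper)) for x in recent]

    # Hypothetical weekly order counts: the first recent values are ordinary
    # noise, but the last one may deserve corrective or exploitative attention.
    baseline = [102, 98, 105, 97, 101, 99, 103, 100]
    recent = [104, 96, 99, 131]
    for value, is_signal in flag_signals(baseline, recent):
        print(value, "investigate" if is_signal else "likely noise")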

Furthermore, there is, I suspect, another risk that lurks within the use of BI systems, one related to cognitive biases. There is quite a bit of research showing that there is an optimum amount of information executives need to make well-formed decisions. If there is too little, executives are exposed to unfortunate surprises due to the lack of critical information. Acting on the most recent or most colorful available information is called, unsurprisingly, recency and availability bias. It's easy to help people see why this bias is so pernicious. Just remind them of events they didn't anticipate, or show them how being too cautious about extreme events or too swayed by emotionally vivid information is likely costing them too much money.

The other extreme is having too much information, and it's difficult to convince people why this is a problem. Here the executive has so much information that they place higher and higher confidence in the validity of their reasoning, while the research shows that their performance doesn't improve in a commensurate way. It's often difficult to explain why this is the case because people typically don't tie the success or failure of their initiatives to the level of information they were using when they made their decisions, and the time lag between when decisions are made and when results are measured often extends beyond the memory of the participants, and sometimes beyond their presence altogether. This latter cognitive failure is related to overconfidence bias. (For a longer list of cognitive biases, go here.)

The effects of the interaction of BI systems with these cognitive biases include the following:
  • A failure to create a shared understanding of how goals and objectives work together to create value, leading to frustrating ambiguity about the real reasons for taking corrective or exploitative action
  • Only the "tangible" costs and benefits are estimated, leaving the fuller range of "intangible" costs and benefits unquantified, treated only in a qualitative manner, or disregarded altogether
  • The full range of business uncertainty and risk is often overlooked or not understood, leading first to endless discussions about assumptions and forecasts, and finally to unanticipated outcomes and continual rework or unrealized value
  • Decision prioritization is based on politics rather than a quantified value to the business 
  • Trade-offs between decision timing, optionality, and value are ignored
So, when you do decide to take some corrective or exploitative action, how do you know that the actions you take are the most valuable ones and not simply satisficing decisions or, worse, inconsistent and incoherent ones?

What is missing is an intelligent decision management system that guides decision makers consistently through the thorny issues of what to do in the presence of trends and anomalies reported by their BI systems. An intelligent decision management system will do at least four things that reporting systems coupled with unguided thinking cannot do:
  1. Synthesize seemingly unrelated information 
  2. Abstract information into requisite models that include characterizations of the appropriate uncertainties, controlled for bias
  3. Compare/contrast possible strategies to address trends and anomalies against the stakeholders’ subjective preferences 
  4. Interpret results of the analysis into competitive responses 
It does not throw out the BI systems. It complements them. It provides an executive monitoring and feedback loop into the current state and trajectory of an enterprise.
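
As a toy illustration of point 3 above, here is a minimal Python sketch of one way to compare candidate strategies under uncertainty against a stakeholder's risk preferences: simulate uncertain outcomes for each strategy and score them with an exponential utility function. The strategies, the triangular estimates, and the risk tolerance are all invented for illustration, not a prescription.

    import math
    import random
    import statistics

    def expected_utility(outcomes, risk_tolerance):
        # Score simulated monetary outcomes with an exponential utility
        # function, one common way to encode risk-averse preferences.
        return statistics.mean(1 - math.exp(-x / risk_tolerance) for x in outcomes)

    def simulate(estimate, trials=10_000):
        # Draw outcomes for a strategy described by a hypothetical
        # (low, most likely, high) estimate of its net value.
        low, mode, high = estimate
        return [random.triangular(low, high, mode) for _ in range(trials)]

    # Hypothetical strategies responding to an observed trend (values in $000s).
    strategies = {
        "correct the process": (-50, 200, 400),
        "exploit the opportunity": (-300, 250, 1200),
        "do nothing": (-100, 0, 50),
    }

    risk_tolerance = 500  # assumed stakeholder risk tolerance, in the same units
    for name, estimate in strategies.items():
        outcomes = simulate(estimate)
        print(name, "expected utility:", round(expected_utility(outcomes, risk_tolerance), 3))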

Saturday, August 13, 2011

Thinking about Thinking

I'm just thinking "out loud" here...about thinking.

Intelligence is a kind of measurement of the quality of problem solving.

Agents are systems that have at least one preferred state. A simple agent possesses one or many preferred states, but they aren't connected. Complicated agents have at least two preferred states, and the achievement of one increases the likelihood of achieving another; in fact, one state cannot be achieved unless a predecessor state is achieved, so the preferred states are hierarchical in nature. A complex agent is one whose preferred states are interdependent, operating as a network of self-supporting preferences. These are complex systems in which one preferred state feeds back into another.

When an agent is perturbed from its preferred state, or its preferred state changes relative to the one currently occupied, a problem occurs that must be solved. A problem is a deviation from a preferred state. Problem complexity is determined by the number and timing of the coordinated activities required to solve the problem; i.e., to restore the agent to its desired state. Therefore, an agent is a system with one or more preferred states that solves problems.
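
Purely to make these definitions concrete, here is a minimal Python sketch of an agent as a system with preferred states, where a problem is any preferred state the agent does not currently occupy and a dependencies map captures the hierarchical, complicated case. The state names and the set-based notion of "deviation" are my own illustrative assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        # An agent is a system with one or more preferred states. The
        # dependencies map captures the complicated/complex idea that some
        # preferred states can only be achieved after others are achieved.
        preferred_states: set[str]
        current_states: set[str] = field(default_factory=set)
        dependencies: dict[str, set[str]] = field(default_factory=dict)

        def problems(self) -> set[str]:
            # A problem is a deviation from a preferred state: any preferred
            # state the agent does not currently occupy.
            return self.preferred_states - self.current_states

        def solvable_now(self) -> set[str]:
            # Problems whose predecessor states have already been achieved.
            return {s for s in self.problems()
                    if self.dependencies.get(s, set()) <= self.current_states}

    # Hypothetical example: "profitable" can only be achieved once "funded" is.
    firm = Agent(preferred_states={"funded", "profitable"},
                 current_states={"funded"},
                 dependencies={"profitable": {"funded"}})
    print(firm.problems())       # {'profitable'}
    print(firm.solvable_now())   # {'profitable'}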

Intelligence is the measure of the ability of an agent to solve problems relative to that of other agents. It has five(?) dimensions: speed, cost, soluble limits, elegance, and abstraction. (A toy comparison of two agents along a couple of these dimensions follows the list.)

  1. Speed: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem faster than the other agent.  In this case, intelligence is a function of "clock speed."  More intelligent agents find ways to operate faster.
  2. Cost: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem with fewer tokens of cost.
  3. Soluble limits: For two agents solving novel problems in the same amount of time, the agent with the greater intelligence solves the more complex problem.
  4. Elegance: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem with fewer computational or execution steps.  This is a measure of insight.  The agent sees through the clutter and noise of the problem to the simplest of solutions.  The net result might be faster computation, but not because step-wise operations are performed faster (e.g. due to higher clock speed), but because fewer operations are performed at a given clock speed.
  5. Abstraction: For two agents facing a similar problem again, the agent with the greater intelligence solves the problem faster than the other agent. This may sound like a repetition of the first measure, but it is really a measure of the memory system that permits recall and comparison. Less intelligent agents face more novel problems (from their own perspective) over an equivalent life span than more intelligent agents do. The more intelligent agent observes similarities across problems and reuses prior solutions. Given this, the intelligence of an agent at time t can be compared to its own intelligence at t-k. A learning agent, then, is one that improves its intelligence over time because it can recall and abstract problem characteristics to other problems.
  6. (Are there others?)
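
Here is the small, purely illustrative Python sketch I mentioned above: two "agents" solve the same problem (finding a value in a sorted list), and we count the execution steps each one needs. Mapping step count to elegance and speed is my own loose analogy, not a formal measure.

    def linear_agent(values, target):
        # A less "elegant" agent: it solves the problem, but needs more steps.
        steps = 0
        for i, v in enumerate(values):
            steps += 1
            if v == target:
                return i, steps
        return None, steps

    def binary_agent(values, target):
        # A more "elegant" agent: it sees through the clutter by exploiting
        # the sortedness of the list, so it solves the same problem in far
        # fewer steps at the same "clock speed."
        steps, lo, hi = 0, 0, len(values) - 1
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2
            if values[mid] == target:
                return mid, steps
            if values[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None, steps

    data = list(range(0, 100_000, 3))
    for agent in (linear_agent, binary_agent):
        index, steps = agent(data, 89_997)
        print(agent.__name__, "found index", index, "in", steps, "steps")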

We shouldn’t think of intelligence as something that necessarily occurs in nervous systems. Intelligence is the capacity of any goal-seeking system to achieve its preferred goals, usually in comparison to similar agents. Thus, a gazelle that is capable of escaping a stalking lion more quickly than an aardvark is more intelligent, regardless of the cognitive effort employed. The intelligence may be expressed as the ability to run faster than aardvarks. The intelligence is not a measure of a specific gazelle’s capabilities, but of the gazelle system that produces gazelles versus the aardvark system that produces aardvarks. Of course, aardvark systems have produced solutions to the problem of stalking lions that gazelle systems have not found.

Here are a few questions I have about human intelligence.

  1. Is there a limit to the kinds of problems humans can solve?
  2. Is there a limit to the kinds of problems any agent can solve?
  3. People commonly referred to as “idiot savants” are those who seem to be able to solve fantastically difficult problems with little effort, but the type of problem solving is of a particularly isolated kind. Would it be possible to isolate the characteristics of cognitive development that allow for this concentrated effort? Then would it be possible to extend that development to a wider range of problem kinds?
  4. The history of the world’s intelligences seems to be characterized by evolutionary, genetic, organic chemical systems. Gene systems solve environmental problems of survivability for gene populations. Humans seem to represent a peak of genetic problem-solving capability in the form of complex nervous systems that now have the ability to ask questions about their own capacity to solve problems. Sophisticated nervous systems solve problems much more efficiently than genetic systems alone. Has the problem-solving ability of nervous systems, itself the product of genetic systems, now become the layer of problem solving for a gene population that doesn’t require genetic rules to continue finding solutions to its environmental problems of survivability? In other words, is it possible that human intelligence is now circumventing its own genetic evolution, even to the point that genetic evolution will become unnecessary?
  5. What if one problem brought on by self-awareness (itself a genetic solution to another survivability problem) is the awareness of death? The preference of self-aware systems might be to avoid death. Would it be possible for self-aware systems to solve the problem of terminating self-awareness (death) by engineering a mechanism by which awareness exists beyond the current genetically determined neural solution?