Current Projects

The research for my second Ph.D. in Computer Science at Aalborg University is in progress and will cover the following topics:

Improving Business Intelligence Speed and Quality through the OODA Concept
This article introduces the Observation-Orientation-Decision-Action (OODA) concept as a means of identifying three new desired technologies in business intelligence applications that improve the speed and quality of decision-making processes. Specifically, the article identifies: artificial intelligence to reduce human interaction in the OODA loop, “sentinels” that can give early warnings about a later impact on a business-critical measure, and finally the ability to analyze the speed and quality of an OODA loop in order to quantify and analyze organizational talent and core competencies. In this project the technologies for sentinels and improvement of OODA effectiveness are pursued further. This article was presented at DOLAP ’07.

[Full-length article from DOLAP 2007]

Discovering Sentinel Rules for Business Intelligence
In this paper, we introduce the concept of sentinel rules. Sentinel rules are schema-level rules that provide the user with an early warning, typically when data concerning the external environment changes. For instance, if there is a surge in negative blogging about a company’s products, a sentinel rule can warn that revenue will go down within two months if no action is taken. By doing this, we expand the window of opportunity for the organization, and therefore render the organization capable of navigating successfully even when the world behaves chaotically. Since sentinel rules are at the schema level, as opposed to the data level (such as association rules and sequential patterns), we are able to provide the user with fewer, more general rules, which means that less time is needed to interpret the output. In addition, our solution handles the fuzziness of real-world data by applying a weighted elimination process that removes the contradictions occurring in real-world data. The paper presents a method for sentinel rule discovery and an SQL implementation of it based on Microsoft SQL Server 2005. The implementation is assessed in terms of computational complexity and subjected to experimental evaluation, which verifies that it scales linearly on large volumes of data. Furthermore, the implementation is tested on real-world data, where it identified useful and relevant rules for decision making.
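To make the idea concrete, here is a minimal sketch of sentinel discovery over two aligned measure series. This is an illustration only, not the paper's actual method or its SQL implementation: the symbolic direction encoding, the fixed offset parameter, and the simple cancellation standing in for weighted elimination are all assumptions made for the example.

```python
def direction(changes, threshold=0.0):
    """Map numeric changes to symbolic directions: up (1), down (-1), neutral (0)."""
    return [1 if c > threshold else -1 if c < -threshold else 0 for c in changes]

def discover_sentinel(source, target, offset):
    """Count co-occurrences of source-measure changes and target-measure
    changes `offset` periods later. A toy weighted elimination then lets
    contradictory evidence cancel; a rule is reported only if both the
    up->down and down->up pairings survive with positive net support."""
    src = direction([source[i + 1] - source[i] for i in range(len(source) - 1)])
    tgt = direction([target[i + 1] - target[i] for i in range(len(target) - 1)])
    support = {(1, -1): 0, (1, 1): 0, (-1, 1): 0, (-1, -1): 0}
    for i in range(len(src) - offset):
        pair = (src[i], tgt[i + offset])
        if pair in support:
            support[pair] += 1
    # elimination: opposite pairings cancel each other out
    up_down = support[(1, -1)] - support[(1, 1)]
    down_up = support[(-1, 1)] - support[(-1, -1)]
    if up_down > 0 and down_up > 0:
        return "IF source up THEN target down AND IF source down THEN target up"
    return None
```

For example, a source series whose rises precede the target's falls by one period would yield the inverse rule, while two series moving in lockstep would yield none.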

[Full-length DB Technical Report as published on]

Efficient Discovery of Generalized Sentinel Rules
This article introduces an algorithm for sentinel discovery, using the simple solution from the previous article as a baseline. In contrast to the previously proposed SQL implementation, this algorithm is expected to be superior in performance and to allow multiple source measures to be combined into one sentinel rule. In addition, the algorithm targets streaming data sources, which means that it also takes into consideration a sliding window for observation and retirement of relevant data. The goal is to produce an algorithm that scales linearly while outperforming the previous SQL implementation. Superior performance is expected from eliminating non-applicable rules during the discovery process rather than at the end. Moreover, the algorithm will make greater use of main memory and is thus expected to perform much faster when all data fits into memory. This is particularly interesting in cases with streaming data, where we intend to fit the sliding window to the size of main memory in order to optimize performance.
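The sliding-window bookkeeping described above can be sketched as follows, assuming direction pairs arrive as a stream and the window size is chosen to fit main memory. The class name, retirement policy, and scoring are hypothetical stand-ins, not the actual algorithm under development.

```python
from collections import deque

class SlidingSentinelWindow:
    """Toy sliding window over streaming (source_dir, target_dir) pairs.
    Counts are maintained incrementally, so observing a new pair and
    retiring the oldest one each cost O(1)."""

    def __init__(self, size):
        self.size = size
        self.window = deque()   # recent (source_dir, target_dir) pairs
        self.counts = {}        # pair -> occurrences inside the window

    def observe(self, src_dir, tgt_dir):
        pair = (src_dir, tgt_dir)
        self.window.append(pair)
        self.counts[pair] = self.counts.get(pair, 0) + 1
        if len(self.window) > self.size:   # retire the oldest observation
            old = self.window.popleft()
            self.counts[old] -= 1

    def score(self):
        """Net evidence for (source up -> target down) and its inverse."""
        return (self.counts.get((1, -1), 0) - self.counts.get((1, 1), 0),
                self.counts.get((-1, 1), 0) - self.counts.get((-1, -1), 0))
```

Because only the counts and the window contents are kept, memory use is bounded by the window size regardless of how long the stream runs.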

Sentinel Discovery on Dimension Hierarchies
This article introduces a parameterized implementation of the sentinel algorithm that exploits the dimensions to discover sentinels that appear only at some levels of the dimensional hierarchies. The approach expands the algorithm from the previous article to exploit hierarchical data in order to identify clusters where certain sentinel rules apply. Most likely the algorithm will rely on a bottom-up approach in which sentinel rules are grown from the lowest granularity of the data; as we travel higher up the hierarchy, rules that no longer apply are retired. Effectively, we expect to end up with a set of sentinel rules, each applying to a cluster bounded by the dimension levels above which the rule no longer holds. For example, consider a scenario where the rule “IF negative blogs go up THEN revenue goes down within two months AND IF negative blogs go down THEN revenue goes up within two months” applies to the state of California. The rule is then tested and found to apply to all states in the United States, but not to any other country in the world. In other words, we grew the rule from the lowest level, in this case California, and from there we went to the next hierarchical level, the United States, where the rule was found to hold as well. At the global level the rule was found not to hold, and thus the largest cluster for which the rule is true is the United States. This bottom-up approach is expected to be a fast and useful addition to the sentinel discovery algorithm.
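The bottom-up walk in the example above can be sketched in a few lines. Here `holds` is a hypothetical stand-in for the actual sentinel test on a level's aggregated data; the function simply climbs the hierarchy and reports the largest cluster for which the rule still applies.

```python
def largest_applicable_level(hierarchy, holds):
    """Walk a dimension hierarchy bottom-up and return the highest level
    at which a sentinel rule still holds, or None if it never holds.

    hierarchy: levels ordered lowest to highest,
               e.g. ['California', 'United States', 'World'].
    holds:     predicate telling whether the rule applies at a level
               (a stand-in for re-testing the rule on aggregated data).
    """
    best = None
    for level in hierarchy:
        if holds(level):
            best = level   # rule still applies; keep climbing
        else:
            break          # rule retired at this level; stop the climb
    return best
```

With the scenario from the text, a rule holding for California and the United States but not the World would report the United States as its largest cluster.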

Sentinel Discovery in the Real-World
This article will be a field study on real-world data to assess the feasibility of sentinels. We will test the ability to discover meaningful and useful sentinel rules on a number of real-world datasets, e.g., a dataset with three years of weather observations and a dataset with at least three years of financial, project, and support data. If possible, we would also like to use data such as Google Trends to discover sentinel rules relating the number of searches for a company or product to the respective financial figures. We expect the algorithm to find meaningful rules and, in doing so, to give us interesting insights by identifying certain clusters where specific rules apply. Moreover, we expect to find that in many cases the entire discovery process can run in main memory, which means that good performance on real-world data is within reach for many organizations.

Is Few Clicks a Valid Measure for Effectiveness in Business Intelligence?
This research will investigate whether few clicks is a valid measure of how effectively a user travels through the phases of the OODA loop. In practice, we will set up a set of tasks for a group of test users to perform in several applications. During these tasks we will record the number of clicks and the success of the tasks, and in addition we will ask for the users’ own opinions of perceived usability. Using these data, we will investigate whether any correlations exist between the number of clicks, the number of successes/errors, and perceived usability. We will select tasks and applications that are as closely related to the field of Business Intelligence as possible. We will also conduct research in the real world by observing the number of clicks of users on a real business intelligence application to assess few clicks as a meaningful measure. In this context we will look into the number of clicks, the time between clicks, and the complexity of the information presented to the user, where complexity is a measure based on the number of cells, dimensions, and measures returned to the user. Through these experiments we hope to gain important insight into how we can better design business intelligence applications that move the user through the OODA loop faster, and we expect to gain the most insight from the discreet observation of users, since this is unbiased, unlike the opinions users report.
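A minimal sketch of the planned correlation analysis, assuming one observation per (user, task) pair: the click count and a perceived-usability rating. The data values and the 1-10 rating scale are invented for illustration; a real analysis would of course use the collected measurements and likely a statistics library.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative observations: fewer clicks coinciding with higher ratings.
clicks = [3, 5, 8, 12, 15]
rating = [9, 8, 6, 4, 3]          # perceived usability on a 1-10 scale

print(pearson(clicks, rating))    # strongly negative if the hypothesis holds
```

A correlation near -1 between clicks and perceived usability would support few clicks as a proxy for usability; a value near zero would argue against it.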


In summary, the scientific method applied in this project is a blend of analytical, constructive, and experimental approaches. For the four articles about discovering sentinels (section 3.2), a series of prototypes will be developed. These prototypes will be validated on synthetic data to assess functional correctness and on real-world data to assess their usefulness. In addition, both types of validation will assess the usefulness of the algorithms from a performance point of view. For the final article about few clicks as a measure of usability (section 3.3), data from real-world usage of business intelligence will be collected and analyzed in order to test the hypothesis that few clicks is indeed a meaningful measure of the usefulness of a business intelligence application.

We expect that the research and the six articles compiled during this project will contribute concrete solutions to problems that organizations face in global competition. In addition, we hope to provide a solid research foundation that others can build on going forward.

[My Computer Science Ph.D. Thesis]

[Short Article About this Ph.D. Project]

[See Complete Project Plan]

Other Topics of interest...

Currently, I am playing with three projects that have all been inspired by the CALM book:

Autonomous OODA Cycles in Stock Trading
This project is an AI classic! Nevertheless, it has been my passion since the age of 15 to develop a system that is able to trade stocks for an optimal profit within a span of calculated risk. This project is not driven by a desire to make a lot of money; it is driven by the fact that this market is the most perfect in terms of being almost as visible to computers as it is to humans. Therefore, this is one of the first fields where the application of autonomous OODA cycles seems reasonable.

To some this project might be considered alchemy, but my findings so far seem promising. What seems to be working is essentially a decision model based on technical analysis across larger periods and period segments; in other words, simple models based on large amounts of data.
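As a generic illustration of "simple models based on large amounts of data," here is a plain moving-average crossover signal over closing prices. This is a textbook technical-analysis example only and does not reflect the actual decision model described above; the window lengths are arbitrary.

```python
def moving_average(prices, n):
    """Simple n-period moving average; one value per fully covered period."""
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

def crossover_signal(prices, short=3, long=5):
    """Return 'buy' when the short average crosses above the long average,
    'sell' on the opposite cross, and 'hold' otherwise, for the latest period."""
    if len(prices) < long + 1:
        return "hold"
    s = moving_average(prices, short)
    l = moving_average(prices, long)
    s = s[long - short:]   # align both series to the same final periods
    if s[-2] <= l[-2] and s[-1] > l[-1]:
        return "buy"
    if s[-2] >= l[-2] and s[-1] < l[-1]:
        return "sell"
    return "hold"
```

A price series that falls and then rebounds sharply produces a "buy" as the short average overtakes the long one, while a flat series stays at "hold."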

Time will show the effectiveness of this project: Genmab is a benchmark and my first investment that was partially based on this theory; I bought at 133 on 28th December 2005.

Body Computing
In this project we are a team working on in-flight computing during skydiving. The vision is to extend human abilities in skydiving by integrating a computer with the body of the skydiver. The computer will improve free-flying skills by allowing the skydiver to monitor performance on video and to track vertical and horizontal movement. At a later stage we should be able to work with the entire body position and posture in the air.

Another objective of this type of body computing is to improve skydiving safety by giving the skydiver warnings in a variety of situations that contemporary equipment cannot cover.

Overall, this project has a broader perspective, since the idea of extending human potential in a given situation by the use of computers can of course be applied in many situations other than skydiving. The idea is to create something that generates real-time, intelligent, and relevant information in an extreme (or any) situation. The knowledge generated in this project can of course also be fed back into the traditional discipline of Business Intelligence.

The Battle Droid
My dream with this project is to create a supercomputer that is able to battle a human. I believe that a lot can still be learned from freestyle battle rap as a metaphor for modern strategizing. Such a system would need to be able to compose sentences that rhyme, address the weaknesses of the opponent, and defend itself from the opponent’s attacks. It would need to understand attacks expressed in words and counter them intelligently, so building this computer would be a more sophisticated version of chess computing. One could say that the outcomes would suddenly be limited only by an entire language rather than by the permissible moves on a chessboard. Once the system is ready for testing, I will have it battle great rappers, just as Deep Blue played world chess champion Garry Kasparov.

True to the nature of computing, the system I design will perhaps not beat its opponent at first, but with every loss it will gain in strength. Given the steady advancement of computing power, it is only a matter of time before it can beat a skilled opponent, and I am confident that the lessons in artificial intelligence and speech synthesis learned from this project could then be used in other areas to assist in computerizing autonomous OODA cycles.


As my work and experiments proceed, more information will be posted on this website, so please stay tuned. Also, please feel free to contact me to share ideas and suggestions regarding the articles and projects.

Updated 2009-03-10



