Decision Quality – Lessons and Guidelines for CCS Projects
In my previous post I said that projects are entirely about making decisions. It should follow, then, that good projects come about from good decisions and bad projects are the result of bad decisions. But what is a good decision, and what constitutes a bad one? I also said that most decisions are made by the application of judgement rather than process. It is easy to determine the quality of a decision retrospectively, but how can we evaluate the quality of a decision at the time it is made, especially when the decision does not follow an auditable process? In this blog I will attempt to explore the concept of decision quality.
Suppose that, after diligently researching a list of stocks, we decided to buy one with the objective of making money, but it went down. Would we conclude that we made a bad decision? Meanwhile, a professional investor looked at the same list of stocks and picked a winner. We would surely be tempted to conclude that the professional was a better decision maker than we were, even though we believed we had followed a good process. If, on the other hand, our professional had failed, we would probably conclude either that the professional was not very good or that the process he used was flawed. Since his process was essentially his knowledge and experience, either it or the way he applied it must have been flawed.
Decisions are made by people. Decision making is a human process. People act and react in accordance with their upbringing, education, experiences, perceptions, values, motivations, aspirations, and biases. Decision quality is therefore subjective and prone to the characteristics, attributes, vagaries and, above all, biases of the decision maker. From over 30 years of involvement in delivering projects, I have found that biases have a profound effect on decision quality. Understanding biases may therefore help us make better decisions and better evaluate the decisions of others.
Psychologists group biases into three types according to the source of the bias.
- Belief biases stem from one's values and beliefs. For example, we tend to avoid options which we believe cannot be evaluated for lack of information; we tend to place more importance or attention on recent events or information, and to ignore or forget more distant information; and we tend to follow the lead of the last person who influenced us.
- Attribution biases stem from how we view ourselves and others; these are often called social biases. For example, we tend to attribute our successes to our abilities and talents, and our failures to bad luck and other factors we believe are beyond our control. Similarly, we are more likely to make an internal attribution about an entire group than about the individuals within it. We also believe ourselves to be worse than others at difficult tasks, tend to believe that others are more knowledgeable, and tend to place more credence in so-called out-of-town experts.
- Cognitive biases stem from our memories, or from the way we use them. For example, memory distortions arise as details are lost over time, and the details that remain tend to be exaggerated relative to the original experience: the story gets shorter and the fish gets bigger each time the story is told. Also, items near the end of a list are the easiest to recall, followed by items at the beginning; items in the middle are the least likely to be remembered.
Many attempts have been made to remove this subjective nature of decision making, i.e. to remove the bias and normalise the results. One such attempt is reference class forecasting, a form of benchmarking (or use of lessons learnt from similar projects) developed by Daniel Kahneman and Amos Tversky. Reference class forecasting predicts the outcome of a planned action based on the actual outcomes of a reference class of similar actions to that being forecast. In my earlier example of selecting a winning stock, it would use the outcomes of previous similar decisions to improve the quality of the current decision.
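The mechanics of reference class forecasting can be sketched in a few lines. The idea is to ignore the inside view entirely and read the required budget uplift straight off the distribution of actual outcomes in the reference class. The Python below is a minimal illustration with hypothetical numbers; the function name, the overrun ratios, and the acceptable-risk threshold are all my assumptions, not data from any real project.

```python
# A minimal sketch of reference class forecasting, assuming we hold
# actual-vs-estimated cost ratios for completed, comparable projects.
# All figures below are hypothetical illustrations, not real project data.

def required_uplift(overrun_ratios, acceptable_overrun_risk):
    """Return the multiplier to apply to a base estimate so that the
    chance of exceeding the uplifted budget, judged by the reference
    class, is at most `acceptable_overrun_risk`.

    overrun_ratios: actual cost / estimated cost for each past project.
    acceptable_overrun_risk: e.g. 0.2 accepts a 20% chance of overrun.
    """
    ranked = sorted(overrun_ratios)
    # Read the ratio at the (1 - risk) point of the reference class.
    index = min(len(ranked) - 1,
                int((1 - acceptable_overrun_risk) * len(ranked)))
    return ranked[index]

# Hypothetical reference class: nine completed projects' overrun ratios.
reference_class = [0.95, 1.00, 1.05, 1.10, 1.15, 1.25, 1.40, 1.60, 1.90]

base_estimate = 100.0  # inside-view estimate, in $M
uplift = required_uplift(reference_class, acceptable_overrun_risk=0.2)
budget = base_estimate * uplift  # outside-view budget: 100.0 * 1.60
```

Note that the forecast is only as good as the reference class: with nine data points the percentile is coarse, and if the "actuals" are themselves study estimates rather than project outcomes, the calculation simply launders the original bias.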
Kahneman and Tversky found that human judgement is generally optimistic due to overconfidence and insufficient consideration of variance in information and likelihood of outcomes. Therefore, people tend to underestimate the costs, completion times, and risks of planned actions, whereas they tend to overestimate the benefits of those same actions. Kahneman and Tversky claimed that such errors are caused by taking an "inside or subjective view", rather than taking an "outside or objective view".
The approach relies on the availability of detailed information on both the reference class and the specific project to enable a valid comparison, and hence forecast, to be made. This is often not the case, especially in the early stages of project development. In addition, there is often optimism (a bias) that the extremes will not happen to our project because we will put preventative actions and/or mitigations in place. Unfortunately, the assumed effectiveness and/or costs of these actions are often themselves overly optimistic, or the actions are not followed through, for a variety of reasons, some of which are exemplified in the biases described above.
Clearly, where there is no reference class, or where the reference class is made up of the outcomes of studies rather than of actual projects, as is the current situation for CCS projects, reference class forecasting is unhelpful. Studies are themselves mere forecasts, so predictions based on them are prone to the same biases that may have gone into the so-called reference class in the first place. In fact, a recent experience of mine not only exemplified this view but led to the findings of a study being dismissed as incorrect because they did not align with the alleged reference class, which was itself made up largely of similar studies.
Similar problems can occur even where more established technologies are involved and where labour market, cost and cultural conditions are not directly comparable. For example, US Gulf Coast costs and timelines are often quoted as the benchmark for oil and gas, and sometimes power plant, projects. Again, I have had personal experience where, after making what were believed to be valid adjustments, comparison with similar US and local projects (reference class forecasting) drove a number of decisions. The outcomes were less than favourable. So were these good decisions or bad ones? In hindsight, the decision-making process was optimistically biased: the probabilities and extent of adverse outcomes were grossly underestimated, and the effectiveness of mitigations and preventative actions was overestimated. Ironically, these were the very problems that reference class forecasting was meant to overcome.
The bottom line is that whenever an undesirable outcome occurs, hindsight will always show that the decision-making process was flawed. So we either abandon process and leave the outcomes to chance, rely on the knowledge and experience of the decision maker, or seek to improve the process. Faced with having to accept the outcomes of a decision, I for one would like to know more about the decision-making process used; and where that process rests on the expertise of the decision maker, I would also want to understand their biases. I would ask questions such as:
- was the problem, opportunity and objectives appropriately defined in measurable terms;
- were all appropriate stakeholders consulted or otherwise involved;
- were creative and doable alternatives considered and analysed;
- were these based on clear values and trade-offs;
- was the information used meaningful, reliable and appropriate;
- was the information complete;
- was logical reasoning used in deriving the conclusions;
- was the conclusion objective i.e. unbiased; and
- did the decision include a clear commitment to action?
References
- Baron, Jonathan (1994), Thinking and Deciding (2nd ed.), Cambridge University Press
- Hardman, David (2009), Judgment and Decision Making: Psychological Perspectives, Wiley-Blackwell
- Kahneman, Daniel; Slovic, Paul; Tversky, Amos (1982), Judgment under Uncertainty: Heuristics and Biases, Cambridge, UK: Cambridge University Press
- Plous, Scott (1993), The Psychology of Judgment and Decision Making, New York: McGraw-Hill
This post expresses the views of the author and not necessarily those of their organisation or the Global CCS Institute.