I am more and more strolling around the psychological side of security and risk. Magne Jorgensen (Simula Research Laboratory, Norway) published a paper whose result is counter-intuitive. Its title is Identification of more risks can lead to increased over-optimism of and over-confidence in software development effort estimates.
Through four experiments, he highlights that when people go more in depth in risk analysis, they most often end up with a lower effort estimate and a higher success estimate than when they make a quick risk analysis!
He proposes some potential explanations. Once more, we end up with judgment-based thinking (the Guts) versus reasoning-based thinking (the Brain) (see Gardner’s book). Among the explanations:
- Illusion of control: people are more confident when they believe they are in control. Seeing more risks may give an illusion of control. Identifying a risk already feels like partly controlling it.
- Availability heuristic: the more vivid a memory, the higher its importance for the Guts. When analyzing risks, the most important ones will probably be found quickly, whereas the ones discovered last will be the least probable. Unfortunately, the Guts will be biased by the last risks analyzed when judging the overall risk. In other words, it will lower the perceived global risk.
Jorgensen proposes a method to limit this bias: analyze each risk together with its impact, then sum the expected impacts.
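As I understand it, the idea is to keep the aggregation mechanical rather than gut-driven. A minimal sketch of that summing step, with made-up example risks (the figures are purely illustrative, not from the paper):

```python
# Sketch of the debiasing idea: estimate each risk's probability and impact
# separately, then sum the expected impacts, instead of letting the Guts
# form a single overall impression at the end of the analysis.
# The risks and numbers below are hypothetical examples.

risks = [
    # (risk description, probability of occurring, impact in person-days)
    ("key developer leaves", 0.10, 40),
    ("requirements change late", 0.30, 20),
    ("third-party API unstable", 0.20, 15),
]

def total_expected_impact(risks):
    """Sum of probability x impact over all identified risks."""
    return sum(p * impact for _, p, impact in risks)

print(total_expected_impact(risks))  # 4 + 6 + 3 = 13 person-days
```

Note that the total does not depend on the order in which risks were discovered, which is exactly the point: the last risk analyzed no longer dominates the overall judgment.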
Could that study have some impact on the way we do threat analysis? I am not sure. Threat analysis is a long process in which the availability heuristic will probably be diluted over time.
Nevertheless, it may impact the way we wrap up a threat analysis. Personally, I describe the threats in decreasing order of importance. In other words, when leaving the room, the audience’s Guts will remember the least important threats 🙁 I should present them in increasing order. This would have two advantages: some thrill / suspense, and the most dangerous threats staying in the Guts’ memory.