Why isn't there a tighter link between the type of red flag and the type of safeguard?
It would be neat if there were. We searched for a simple mapping, but it does not really exist. Sometimes there is a straightforward relationship: misleading experience, for example, can sometimes be countered with more data and analysis, whereas the same data and analysis is unlikely to counterbalance a strong inappropriate self-interest. Often, though, there is no such relationship. A decision group, for instance, can be used to address all four red flag conditions. The nature of the red flags may affect important details, such as who should be in the group or the precise group process used. But we found that any attempt to turn this into an algorithm or structured process created too much complexity to be worth recommending. In Isaac Asimov's Foundation trilogy, the world has advanced to the point where it is possible to predict the decisions people will make and how to shape them – but that is a fictional account set in the far future!