How do you decide when measurement results are “bad enough” (variation between plans and actuals) to merit further investigation or corrective action?
There is no single “right” answer to this question: variations between plans and actuals must be interpreted within the context of the program and depend on the program’s risk tolerance. In many cases, a “bad enough” problem is obvious. Analysts look not just at a single current variation, but use a “preponderance of evidence” approach (e.g., integrated analysis) and consider what the trends suggest. Many organizations also set rules of thumb for specific issues or indicators. A common rule of thumb is to pay special attention to any indicator with a 20% variance overall or a 10% variance in any single period.
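The 20%/10% rule of thumb can be sketched in code. This is an illustrative assumption of how such a check might be implemented (the function names, thresholds as parameters, and sample data are hypothetical, not from any standard):

```python
# Illustrative sketch of the rule of thumb described above:
# flag an indicator whose variance exceeds 20% overall,
# or 10% in any single reporting period. Names and data are hypothetical.

def variance_pct(plan, actual):
    """Percent variance of actual against plan."""
    return abs(actual - plan) / plan * 100.0

def flag_indicator(plan_by_period, actual_by_period,
                   overall_limit=20.0, period_limit=10.0):
    """Return True if the indicator merits further investigation."""
    overall = variance_pct(sum(plan_by_period), sum(actual_by_period))
    if overall > overall_limit:
        return True
    # Check each period individually against the per-period threshold.
    return any(variance_pct(p, a) > period_limit
               for p, a in zip(plan_by_period, actual_by_period))

# Example: monthly planned vs. actual effort (staff-hours)
plan   = [100, 100, 100, 100]
actual = [ 98, 105, 112,  99]   # third period is 12% over plan
print(flag_indicator(plan, actual))  # True: 12% exceeds the 10% period limit
```

Note that a flag here is only a trigger for investigation, consistent with the guidance above that no single variation should be judged in isolation.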