There are various measures of risk: variance, VaR, conditional VaR, downside risk measures, and EVT (extreme value theory). Is one better than the others?
When we measure the size of errors in our models (i.e., risk) with a single number, whatever that number is, we trade off different characteristics of our uncertainty. By and large this happens in two ways. The first trade-off is between measuring the risk inherent in business-as-usual situations and the risk inherent in extreme events: for example, VaR concentrates on business-as-usual risk, while EVT concentrates on tail risk (i.e., extreme events). The second trade-off is between (1) risk measures that are good for some distributions but perform poorly for others (e.g., variance performs well for Gaussian distributions and poorly for distributions with fat tails) and (2) risk measures that perform reasonably well for a broader set of distributions (e.g., robust risk measures such as the median absolute deviation, or MAD).

There are other considerations as well. For example, a risk measure should ideally be coherent. Coherence imposes a number of axioms on risk measures; subadditivity (the requirement that the risk of a combined portfolio not exceed the sum of the risks of its parts) is one such axiom. Notably, VaR is not subadditive in general, whereas conditional VaR is.
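To make these trade-offs concrete, here is a minimal sketch comparing some of these measures on simulated returns. The function names, parameters, and the simulated data are all illustrative assumptions, not a reference implementation; VaR and conditional VaR are computed by the simple historical (empirical-quantile) method.

```python
import random
import statistics

def historical_var(returns, alpha=0.95):
    """Historical VaR: loss threshold exceeded with probability 1 - alpha."""
    losses = sorted(-r for r in returns)
    idx = int(alpha * len(losses))
    return losses[idx]

def conditional_var(returns, alpha=0.95):
    """Conditional VaR (expected shortfall): mean loss beyond the VaR level."""
    losses = sorted(-r for r in returns)
    tail = losses[int(alpha * len(losses)):]
    return sum(tail) / len(tail)

def mad(returns):
    """Median absolute deviation: a robust measure of dispersion."""
    med = statistics.median(returns)
    return statistics.median(abs(r - med) for r in returns)

random.seed(0)
# Simulated daily returns: mostly Gaussian, plus a few extreme negative shocks
returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]
returns += [-0.08, -0.12, -0.10]

print(f"std dev : {statistics.pstdev(returns):.4f}")
print(f"MAD     : {mad(returns):.4f}")
print(f"95% VaR : {historical_var(returns):.4f}")
print(f"95% CVaR: {conditional_var(returns):.4f}")
```

On data like this, the standard deviation is inflated by the three shocks while the MAD barely moves (robustness), and the conditional VaR sits well above the VaR because it averages over the tail rather than reading off a single quantile.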