3 March 2014

At the end of February 2014, the US Federal Reserve allowed some banks with sophisticated risk management systems to use their internal models to calculate capital requirements.

Most of these internal models, we believe, are statistical analyses based on significance levels and confidence intervals computed from observed historical data, or on simulations drawn from features of historical data.
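To make the point concrete, a typical building block of such models is a historical-simulation Value-at-Risk calculation. The sketch below uses hypothetical return data (the random draws are ours, purely for illustration, not any bank's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily portfolio returns standing in for observed historical data
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

# 99% one-day historical-simulation VaR: the loss exceeded on only 1% of
# historical days, read straight off the empirical 1st percentile
var_99 = -np.quantile(returns, 0.01)
print(f"99% 1-day VaR: {var_99:.4f}")
```

The capital requirement is then a multiple of this number. Note that the whole construction inherits the frequentist tail-area logic that the quoted paper criticises below.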

We believe one of the most relevant overview papers on statistics is “The Philosophy of Statistics” by Dennis V. Lindley, published in the Journal of the Royal Statistical Society in 2000.^{1}

We quote the following paragraphs from this paper:

“…..Statisticians use measures of uncertainty that do not combine according to the rules of the probability calculus.

Consider a hypothesis *H*, that a medical treatment is ineffectual, or that a specific social factor does not influence crime levels. The physician, or sociologist, is uncertain about *H*, and data are collected in the hope of removing, or at least reducing, the uncertainty. A statistician called in to advise on the uncertainty aspect may recommend that the client uses, as a measure of uncertainty, a tail area, significance level, with *H* as the null hypothesis. That is, assuming that *H* is true, the probability of the observed, or more extreme, data is calculated. This is a measure of the credence that can be put on *H*; the smaller the probability, the smaller is the credence.

The usage fails in the face of the arguments above, which assert that uncertainty about *H* needs to be measured by a probability for *H*. A significance level is not such a probability. The distinction can be expressed starkly:

significance level: the probability of some aspect of the data, given *H* is true;

probability: your probability of *H*, given the data.

The prosecutor’s fallacy is well known in legal circles. It consists in confusing *p(A/B)* with *p(B/A)*, two values which are only rarely the same. The distinction between significance levels and probability is almost the prosecutor’s fallacy: ‘almost’ because although *B*, in the prosecutor form, may be equated with *H*, the data are treated differently. Probability uses *A* as data. Adherents of significance levels soon recognized that they could not use just the data but had to include ‘more extreme’ data in the form of the tail of a distribution….the level includes data which might have happened but did not.”
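The prosecutor's fallacy that Lindley invokes can be made concrete with Bayes' theorem. In the sketch below, all the numbers are hypothetical: the probability of the evidence given innocence, *p(A/B)*, is tiny, yet the probability of guilt given the evidence depends on the prior and comes out near one half, not near certainty:

```python
# Hypothetical numbers: contrast p(evidence | innocent) with p(guilty | evidence)
p_match_given_innocent = 1e-4   # chance a random innocent person matches the evidence
p_match_given_guilty = 1.0      # a guilty person matches for certain
prior_guilty = 1 / 10_000       # one culprit among 10,000 plausible candidates

# Total probability of observing a match, then Bayes' theorem
p_match = (p_match_given_guilty * prior_guilty
           + p_match_given_innocent * (1 - prior_guilty))
p_guilty_given_match = p_match_given_guilty * prior_guilty / p_match

print(f"p(evidence | innocent) = {p_match_given_innocent}")
print(f"p(guilty | evidence)   = {p_guilty_given_match:.4f}")
```

A one-in-ten-thousand match probability sounds damning, but with ten thousand candidates the posterior probability of guilt is only about 0.5. Confusing the two conditional probabilities is exactly the error Lindley sees in treating a significance level as credence in *H*.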

Regulators would do well to revisit the banks’ models in the light of the fundamental criticism enunciated above, before risking the economy and once again facing the consequences of a faulty use of scientific methodology.

_________________________

^{1}A version of this paper is available at http://www.jstor.org/stable/2681060
