This paper, "Statistical Inference: The Big Picture" by Robert E. Kass, depicts the big picture of present-day statistics. Statistics, as a scientific discipline, has gradually come into its heyday. Kass suggested a philosophy compatible with statistical practice, labeled here **statistical pragmatism**, which serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with data.

At the beginning, Kass indicated that the protracted battle over the foundations of statistics, joined vociferously by Fisher, Jeffreys, Neyman, Savage and many disciples, has been deeply illuminating, but it has left statistics without a philosophy that matches contemporary attitudes.

First we briefly take a look at several major camps in today's statistics. **Frequentists have ignored the possibility of inference about unique events**, despite their ubiquitous occurrence throughout science. Meanwhile, Bayesians **have denied the utility of confidence and statistical significance**, attempting to sweep aside the obvious success of these concepts in applied work. Furthermore, interpretations of **posterior probability in terms of subjective belief, or confidence in terms of long-run frequency, give students a limited and sometimes confusing view** of the nature of statistical inference. Kass suggested that it makes more sense to place at the center of our logical framework **the match or mismatch of theoretical assumptions with the real world of data**.

**Statistical Pragmatism**

1. Confidence, statistical significance, and posterior probability are all valuable inferential tools.

2. Simple chance situations, where counting arguments may be based on symmetries that generate equally likely outcomes (six faces on a fair die; 52 cards in a shuffled deck), supply basic intuitions about probability. Probability may be built up to important but less immediately intuitive situations using abstract mathematics, much the way real numbers are defined abstractly based on intuition coming from fractions.

3. Long-run frequencies are important mathematically, interpretively, and pedagogically. However, **it is possible to assign probabilities to unique events, including rolling a 3 with a fair die or having a confidence interval cover the true mean**, without considering long-run frequency. Long-run frequencies may be regarded as **consequences of the law of large numbers rather than as part of the definition of probability or confidence**.

4. Similarly, **the subjective interpretation of posterior probability is important as a way of understanding Bayesian inference, but it is not fundamental to its use**: in reporting a 95% posterior interval one need not make a statement such as, "My personal probability of this interval covering the mean is 0.95."

5. Statistical inferences of all kinds use statistical models, which embody **theoretical assumptions**. As illustrated in Figure 1, like scientific models, statistical models exist in an abstract framework; to distinguish this framework from the real world inhabited by data we may call it a "theoretical world." **Random variables, confidence intervals, and posterior probabilities all live in this theoretical world**. When we **use a statistical model to make a statistical inference we implicitly assert that the variation exhibited by data is captured reasonably well by the statistical model, so that the theoretical world corresponds reasonably well to the real world**. Conclusions are drawn by applying a statistical inference technique, which is a theoretical construct, to some real data. Figure 1 depicts the conclusions as straddling the theoretical and real worlds. Statistical inferences may have implications for the real world of new observable phenomena, but in scientific contexts, conclusions most often concern scientific models (or theories), so that their "real world" implications (involving new data) are somewhat indirect (the new data will involve new and different experiments).

**FREQUENTIST ASSUMPTIONS.**

Suppose X1, X2, . . . , Xn are i.i.d. random variables from a normal distribution with mean μ and standard deviation σ = 1. In other words, suppose X1, X2, . . . , Xn form a random sample from a N(μ, 1) distribution.

Noting that x̄ = 10.2 and √49 = 7 we define the inferential interval

I = (10.2 − 2/7, 10.2 + 2/7).
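As a quick arithmetic check (a sketch, not part of Kass's paper), the interval I can be computed directly from the stated values x̄ = 10.2, n = 49 and σ = 1:

```python
import math

# Values from Kass's example: sample mean, sample size, known sigma
xbar, n, sigma = 10.2, 49, 1.0

# Half-width of the interval: 2 standard errors, 2 * sigma / sqrt(49) = 2/7
half_width = 2 * sigma / math.sqrt(n)

# The inferential interval I = (10.2 - 2/7, 10.2 + 2/7)
I = (xbar - half_width, xbar + half_width)
print(I)  # roughly (9.914, 10.486)
```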

The interval I may be regarded as a **95% confidence interval**. Kass contrasted the standard frequentist interpretation with the pragmatic interpretation.

**FREQUENTIST INTERPRETATION OF CONFIDENCE INTERVAL.**

Under the assumptions above, if we were to **draw infinitely many random samples from a N(μ, 1) distribution, 95% of the corresponding confidence intervals (X̄ − 2/7, X̄ + 2/7) would cover μ**.
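The long-run reading can be illustrated by simulation. The sketch below is not from the paper; the true μ and the seed are arbitrary illustrative choices. It draws many samples of size 49 from N(μ, 1) and counts how often the interval covers μ; since ±2 standard errors corresponds to roughly 95.4% coverage, the simulated fraction should hover near 0.95:

```python
import math
import random

random.seed(0)
mu, n, sigma = 10.0, 49, 1.0        # true mean chosen arbitrarily for illustration
half_width = 2 * sigma / math.sqrt(n)  # 2/7, as in Kass's example

trials = 10000
covered = 0
for _ in range(trials):
    # One random sample of size n, and its sample mean
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    # Does (xbar - 2/7, xbar + 2/7) cover the true mean?
    if xbar - half_width < mu < xbar + half_width:
        covered += 1

print(covered / trials)  # close to 0.95
```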

**PRAGMATIC INTERPRETATION OF CONFIDENCE INTERVAL.**

If we were to draw a random sample according to the assumptions above, the resulting confidence interval (X̄ − 2/7, X̄ + 2/7) would have probability 0.95 of covering μ. Because the random sample lives in the theoretical world,

**this is a theoretical statement**. Nonetheless, substituting

(1) X̄ = x̄

together with

(2) x̄ = 10.2

we obtain the interval I, and are able to draw useful conclusions as long as our theoretical world is aligned well with the real world that produced the data.

The main point here is that we do not need a long-run interpretation of probability, but we do have to be reminded that the unique-event probability of 0.95 remains a theoretical statement because it applies to

random variables rather than data. Let us turn to the Bayesian case.

**BAYESIAN ASSUMPTIONS.**

Suppose X1, X2, . . . , Xn form a random sample from a N(μ, 1) distribution and the prior distribution of μ is N(μ0, τ²), with τ² ≫ 1/49 and 49τ² ≫ |μ0|.

**BAYESIAN INTERPRETATION OF POSTERIOR INTERVAL.**

Under the assumptions above, the probability that μ is in the interval I is 0.95.

**PRAGMATIC INTERPRETATION OF POSTERIOR INTERVAL.**

If the data were a random sample for which (2) holds, that is, x̄ = 10.2, and if the assumptions above were to hold, then the probability that μ is in the interval I would be 0.95. This refers to a hypothetical value x̄ of the random variable X̄, and because X̄ lives in the theoretical world the statement remains theoretical. Nonetheless, we are able to draw useful conclusions from the data as long as our theoretical world is aligned well with the real world that produced the data.
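The Bayesian statement can also be checked numerically. The sketch below (not part of the paper) applies the standard conjugate normal update; the concrete prior values μ0 = 0 and τ = 10 are illustrative choices satisfying τ² ≫ 1/49, under which the posterior of μ given x̄ = 10.2 is approximately N(10.2, 1/49) and the posterior probability of I comes out near 0.95:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Data summary from Kass's example
xbar, n, sigma = 10.2, 49, 1.0

# Illustrative prior N(mu0, tau^2); mu0 and tau are assumed values, not from the paper
mu0, tau = 0.0, 10.0

# Conjugate normal update: posterior of mu is N(m, s^2)
prec = 1.0 / tau**2 + n / sigma**2            # posterior precision
m = (mu0 / tau**2 + n * xbar / sigma**2) / prec  # posterior mean
s = math.sqrt(1.0 / prec)                     # posterior standard deviation

# Posterior probability that mu lies in I = (10.2 - 2/7, 10.2 + 2/7)
half_width = 2.0 / math.sqrt(n)
lo, hi = xbar - half_width, xbar + half_width
p = normal_cdf((hi - m) / s) - normal_cdf((lo - m) / s)
print(p)  # close to 0.95 because the prior is nearly flat
```

With a nearly flat prior the posterior mean is pulled only negligibly away from x̄, which is why the frequentist and Bayesian intervals agree here.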