Saturday, February 14, 2015

19. Basics of Testing of Hypotheses P- 07. Informetrics & Scientometrics

Your suggestions for developing this blog are cordially invited; please send your suggestions and entries. Its scope is the worldwide knowledge community, and it will make an important contribution to career-building for all aspirants. You may send your suggestions to this mail address -


By :I K Ravichandra Rao,Paper Coordinator



Objectives

  • To study an overview of testing of hypotheses
  • To study procedure/steps in testing of hypotheses and related concepts.

1 Introduction

Statistical analysis aims at drawing inferences about a population based on the information/data contained in a sample. Such inferences are usually based on statistical tests of hypotheses. Below we discuss only those aspects concerning the testing of hypotheses.

A hypothesis is a well-defined statement. In science, however, the word hypothesis generally refers to a definite interpretation of a given set of facts, put forth as a tentative assumption that remains partially or wholly unverified. A simple definition of hypothesis, as given by Lundberg, is that it "is a tentative generalisation, the validity of which remains to be tested". In this context, testing the hypothesis against relevant statistical data becomes important for either accepting or rejecting the tentative assumption.

1.1 Why test Hypothesis?


Science does not accept anything as valid knowledge until satisfactory tests confirm its validity. Hypotheses therefore need to be tested through the research process before they are accepted or rejected. A hypothesis is normally tested by applying a pre-defined assertion or rule to sample data, which directs the researcher in deciding whether to accept or reject the hypothesis. The process of testing hypotheses forms a major part of the research process; a hypothesis is tested on the basis of facts.

1.2 Types of Hypotheses and Notations

The two hypotheses in a statistical test are normally referred to as:
a)      Null Hypothesis, and
b)     Alternative Hypothesis.

a)   The Null Hypothesis is a very useful tool in testing the significance of a difference. In its simplest form, it asserts that there is no true difference between the sample and the population in the particular matter under consideration, and that any difference found is accidental and unimportant, arising out of fluctuations of sampling. A simple definition: the null hypothesis is the hypothesis that is being tested.

b)  The Alternative Hypothesis specifies those values that the researcher considers to be true, in the hope that the sample data lead to acceptance of this hypothesis. In other words, when the null hypothesis is rejected, the alternative hypothesis is likely to be accepted. For example,
                  H0: µ = µ0
                  H1: µ ≠ µ0
where µ is the population mean and µ0 is the hypothesised value of the population mean.

1.3 Errors in Hypothesis Testing


When accepting or rejecting a null hypothesis, we may commit an error: we may reject H0 when it is true, or accept H0 when it is false. These two errors are called Type I error and Type II error respectively. The probability of making a Type I error is denoted by α; the probability of making a Type II error is denoted by β. This is shown in tabular form below:

Conclusion of the Test     H0 True                   H0 False
Accept H0                  Correct decision          Wrong (Type II Error)
Reject H0                  Wrong (Type I Error)      Correct decision
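To see how α behaves in practice, here is a small Monte Carlo sketch in pure Python, with assumed illustrative parameters: when H0 is actually true, a two-sided z-test at α = 0.05 should reject in roughly 5% of trials, and each such rejection is a Type I error.

```python
import math
import random

random.seed(42)                      # reproducible illustration
mu0, sigma, n = 2.0, 0.1, 30         # assumed population and sample size
z_crit = 1.96                        # two-sided critical value for alpha = 0.05

trials, rejections = 2000, 0
for _ in range(trials):
    # sample from a population in which H0 (mu = mu0) is actually true
    sample = [random.gauss(mu0, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    if abs(z) > z_crit:
        rejections += 1              # rejecting a true H0: a Type I error

print(round(rejections / trials, 3))  # empirically close to alpha = 0.05
```

The empirical rejection rate converges to α as the number of trials grows, which is exactly what "controlling the Type I error probability" means.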

1.4 Empirical Test of Hypothesis


For the purpose of understanding the testing of hypotheses, let us discuss an experimental situation: verifying a manufacturer's statement about its product. For example, take the case of investigating the container weights specified on the labels of the manufacturer's wheat products. In order to demonstrate the hypothesis-testing procedure, let us show how a test of label accuracy could be made for the company's 2 kg packet of wheat flour.
The first assumption is that the labels are correct. This assumption, or hypothesis, is subjected to a test by providing evidence regarding the truth of the claim. There are three possibilities in the case of the 2 kg wheat-flour packets. The mean weight for the population of 2 kg packets could be:
i)    greater than 2 kg, or
ii)   less than 2 kg, or
iii)  equal to 2 kg.
In this situation, we have to determine whether or not the population mean (of the wheat-flour packets) µ = µ0 (say, 2). How do we determine this? This is discussed below:
A statement like µ = µ0, µ ≥ µ0 or µ ≤ µ0 is called a hypothesis. As said in section 1.2, the hypothesis that is being tested is called the Null Hypothesis; it is denoted by H0. The hypothesis that we are willing to accept if we do not accept the null hypothesis is called the Alternative Hypothesis; it is denoted by H1. The two hypotheses, the Null Hypothesis (H0) and the Alternative Hypothesis (H1), are so constructed that if one is correct the other is wrong. Generally, the null and alternative hypotheses for testing the mean take the following forms:
                     Null Hypothesis            Alternative Hypothesis
  • Case 1:     H0: µ ≥ µ0                 H1: µ < µ0
  • Case 2:     H0: µ ≤ µ0                 H1: µ > µ0
  • Case 3:     H0: µ = µ0                 H1: µ ≠ µ0
In establishing the critical value for a particular hypothesis-testing situation, we always assume that H0 holds as an equality. This allows us to control the maximum probability of a Type I error. Thus, in cases 1-3 above, the null hypothesis may be treated as H0: µ = µ0. Now, with the assumption that the null hypothesis is true, let us select a sample from the population. If the sample results do not differ significantly from the assumed null hypothesis, we accept H0 as being true. If the sample results differ significantly from the hypothesis, we reject H0 and conclude that the alternative hypothesis H1 is true.


In the z-test, the distribution of the test statistic under the null hypothesis is approximated by a normal distribution. From the central limit theorem, we know that the sample mean x̄ follows a normal distribution with mean µ and standard deviation σ/√n. That is, the variable
z = (x̄ − µ) / (σ/√n)
follows, asymptotically, a normal distribution with mean zero and standard deviation one; x̄ is the sample mean, µ is the population mean, σ is the population standard deviation and n is the sample size.


Most often, as mentioned earlier, we test the null hypothesis H0: µ = µ0 against one of the following alternative hypotheses:
1)  H1: µ < µ0
2)  H1: µ > µ0
3)  H1: µ ≠ µ0
In such cases, the critical values are given by:
For (1): −zα,
For (2): +zα, and
For (3): ±zα/2.
Further, if H1: µ < µ0 or H1: µ > µ0, the test is called a one-sided test; otherwise, it is called a two-sided test. The critical regions for the above three alternative hypotheses lie in the left tail, the right tail, and both tails of the normal curve respectively.
[Figure: critical regions for the three alternative hypotheses; not reproduced]
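The critical values zα and zα/2 are normally read from a normal distribution table. As a sketch, they can also be computed numerically from the normal CDF; the helper below uses only the standard library's error function and a simple bisection, and is a hypothetical illustration rather than part of the module.

```python
import math

def phi(z):
    # standard normal CDF, expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_quantile(p, lo=-10.0, hi=10.0):
    # bisection inverse of phi; adequate for table-style lookups
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

alpha = 0.05
print(round(z_quantile(1 - alpha), 2))      # one-sided z_alpha  -> 1.64
print(round(z_quantile(1 - alpha / 2), 2))  # two-sided z_alpha/2 -> 1.96
```

These reproduce the familiar table values 1.64 and 1.96 quoted later in the module.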


Case 1: H0: µ ≥ µ0, H1: µ < µ0 -- One-Tailed Hypothesis Test About a Population Mean
Here, we treat the null hypothesis as H0: µ = µ0 instead of H0: µ ≥ µ0, as explained earlier. In this case, the decision rule is: accept H0 if x̄ ≥ c and reject H0 if x̄ < c, where c is the critical value for the test, given by c = µ0 − zα·σ/√n. The value of zα can be obtained from the normal distribution table for the given α.
Case 2: H0: µ ≤ µ0, H1: µ > µ0 -- One-Tailed Hypothesis Test About a Population Mean
Here, we treat the null hypothesis as H0: µ = µ0 instead of H0: µ ≤ µ0, as explained earlier. The decision rule is: accept H0 if x̄ ≤ c and reject H0 if x̄ > c, where c is the critical value for the test, given by c = µ0 + zα·σ/√n. The value of zα can be obtained from the normal distribution table for the given α.
Case 3: H0: µ = µ0, H1: µ ≠ µ0 -- Two-Tailed Hypothesis Test About a Population Mean
In this case, the decision rule is: accept H0 if c1 ≤ x̄ ≤ c2, and reject H0 if x̄ < c1 or x̄ > c2, where c1 = µ0 − zα/2·σ/√n and c2 = µ0 + zα/2·σ/√n. The value of zα/2 can be obtained from the normal distribution table for the given α.
In all the above three cases, if σ is unknown, then use the sample standard deviation s; in that case, n must be sufficiently large (at least n > 30).

2.1 A Confidence Interval Approach to Test a Hypothesis of the Form H0: μ = μ0 H1: μ ≠ μ0

Select a simple random sample from the population and use the sample mean x̄ to develop the confidence interval. Let H0: µ = µ0 and H1: µ ≠ µ0. A sample of n observations gives a sample mean x̄, and the standard error of the mean is σ/√n. The (1 − α) confidence interval is then x̄ ± zα/2·σ/√n; accept H0 if µ0 lies within this interval, and reject H0 otherwise.
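A minimal sketch of this approach with assumed sample figures (x̄ = 2.04, σ = 0.12, n = 36, α = 0.05, µ0 = 2): build the interval x̄ ± zα/2·σ/√n and check whether µ0 falls inside it.

```python
import math

# hypothetical sample summary: n observations, mean xbar, known sigma
xbar, sigma, n = 2.04, 0.12, 36
mu0, z_half_alpha = 2.0, 1.96        # alpha = 0.05

se = sigma / math.sqrt(n)            # standard error = 0.02
lower = xbar - z_half_alpha * se
upper = xbar + z_half_alpha * se
print(round(lower, 4), round(upper, 4))  # -> 2.0008 2.0792
print("accept H0" if lower <= mu0 <= upper else "reject H0")
```

Here µ0 = 2 lies just below the lower limit 2.0008, so the confidence-interval approach rejects H0, exactly as a two-tailed z-test at the same α would.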


The following steps are involved in a test of significance:
Step 1: Formulate the null and alternative hypotheses. For example:
a)  H0: µ ≥ µ0, H1: µ < µ0, or
b)  H0: µ ≤ µ0, H1: µ > µ0, or
c)  H0: µ = µ0, H1: µ ≠ µ0.
Step 2: Fix the value of α; that is, decide the level of significance. Usually, we fix α = 0.05 or α = 0.01.
Step 3: Select a sample of n units and compute the sample mean x̄. Then compute the following test statistic under the assumption that the null hypothesis is true (so replace µ by µ0 while computing):
z = (x̄ − µ0) / (σ/√n)
(Here we assume that σ is known. If it is unknown, s can be used for a sufficiently large n.)
Step 4: Determine the critical values. For (a) and (b), the critical value c is given by µ0 − zα·σ/√n and µ0 + zα·σ/√n respectively. The value of zα can be obtained from the normal distribution table for a given α. For (c), the critical values c1 and c2 are given by µ0 − zα/2·σ/√n and µ0 + zα/2·σ/√n respectively. The value of zα/2 can be obtained from the normal distribution table for a given α.
Step 5: 1) For (a), accept the null hypothesis H0 if x̄ ≥ c; for (b), accept H0 if x̄ ≤ c; otherwise reject H0. For (c), accept H0 if c1 ≤ x̄ ≤ c2, otherwise reject H0.
               2) Equivalently, reject H0 if the computed |z| ≥ zα (for (a) and (b)) or |z| ≥ zα/2 (for (c)); otherwise accept H0.
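The five steps above can be sketched as a small function. The packet-weight figures are hypothetical, and the critical values are the table values quoted in the module (1.64/2.33 one-sided, 1.96/2.58 two-sided).

```python
import math

# critical values quoted in the text for alpha = 0.05 and 0.01
Z_ONE_SIDED = {0.05: 1.64, 0.01: 2.33}
Z_TWO_SIDED = {0.05: 1.96, 0.01: 2.58}

def z_test(xbar, mu0, sigma, n, alpha=0.05, tail="two"):
    """One-sample z-test; tail is 'left' (H1: mu < mu0), 'right', or 'two'."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    if tail == "left":
        reject = z < -Z_ONE_SIDED[alpha]
    elif tail == "right":
        reject = z > Z_ONE_SIDED[alpha]
    else:
        reject = abs(z) > Z_TWO_SIDED[alpha]
    return round(z, 2), ("reject H0" if reject else "accept H0")

# hypothetical figures: 49 packets, mean 1.97 kg, sigma = 0.07 kg, H0: mu >= 2
print(z_test(1.97, 2.0, 0.07, 49, tail="left"))  # -> (-3.0, 'reject H0')
```

With these assumed figures z = (1.97 − 2)/(0.07/7) = −3.0, well inside the left-tail critical region, so the label claim would be rejected.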


[Example 1: a worked one-sample z-test; the computations were shown as images in the original and are not reproduced]


In statistical significance testing, the p-value is the probability of obtaining a test statistic at least as extreme as the one actually observed, assuming that the null hypothesis is true; the p-value is the smallest value of α for which the given sample outcome would lead to rejecting H0. The decision rule is to accept H0 if the p-value ≥ α and reject H0 if the p-value < α. In Example 1 above, the p-value is 0.0038 (i.e., P(x̄ > 2.92), which is equivalent to P(z ≥ 2.66)); the p-value is less than α and thus leads to rejecting H0.


If σ² is unknown, it is usually estimated from the sample variance. However, the sample variance s² = Σ(xᵢ − x̄)²/n is not an unbiased estimate of σ². To get a good approximation for σ², we can use the formula ŝ² = n·s²/(n − 1); in other words, ŝ² = Σ(xᵢ − x̄)²/(n − 1). We can now use ŝ² instead of s² in testing the null hypothesis. To test the null hypothesis H0: µ = µ0, compute the statistic
t = (x̄ − µ0) / (ŝ/√n)
by replacing µ by µ0.

6.1 Procedure for t-test

Step 1: Formulate the null and alternative hypotheses. For example,
a)  H0 : μ  ≥ μ0    H1 : μ  < μ0
b)  H0 : μ  ≤ μ0    H1 : μ  > μ0
c)  H0 : μ  = μ0    H1 : μ  ≠ μ0. 
Step 2: Decide the level of significance (α) and determine the critical values for the given degrees of freedom from the t-table. For (a) and (b), the critical value is |tα| such that P(|t| ≤ |tα|) = 1 − α. For (c), the critical values are ±|tα/2| such that P(|t| ≤ |tα/2|) = 1 − α.
Step 3: Compute t = (x̄ − µ0)/(ŝ/√n) under the assumption that µ = µ0.
Step 4: Accept the null hypothesis if the computed t does not fall in the critical region; otherwise reject the null hypothesis.
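Step 3 can be sketched with a hypothetical sample of eight packet weights, computing t from the unbiased variance ŝ² exactly as defined above:

```python
import math

def t_statistic(sample, mu0):
    """Return (t, degrees of freedom) for H0: mu = mu0."""
    n = len(sample)
    xbar = sum(sample) / n
    # unbiased sample variance (divides by n - 1), as in the text
    s2_hat = sum((x - xbar) ** 2 for x in sample) / (n - 1)
    return (xbar - mu0) / math.sqrt(s2_hat / n), n - 1

weights = [2.1, 1.9, 2.0, 2.2, 1.8, 2.1, 2.0, 2.1]  # hypothetical sample
t, df = t_statistic(weights, 2.0)
print(round(t, 2), df)  # t is small here, so H0: mu = 2 would be accepted
```

The computed t (about 0.55 with 7 degrees of freedom) is far below any usual critical value from the t-table, so this sample would not lead to rejecting H0.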

6.2 Test of Difference between Two Population Means

Let us now deal with two populations whose standard deviations (σ1 and σ2) are known but whose means (µ1 and µ2) are not. Under the circumstances, can we test whether or not µ1 = µ2, i.e., µ1 − µ2 = 0? In such cases, we usually test
H0: µ1 − µ2 = D0 against a specified alternative hypothesis. It may be any one of the following:
H1: µ1 − µ2 ≠ D0
H1: µ1 − µ2 > D0
H1: µ1 − µ2 < D0
If D0 = 0, we are actually testing whether or not µ1 = µ2. To test whether or not H0 is true against a specified H1, we consider the difference between the sample means x̄1 and x̄2 and its distribution in repeated samples. It has been shown in probability theory that, for repeated independent randomly drawn samples, x̄1 − x̄2 follows a normal distribution with mean µ1 − µ2 and standard deviation √(σ1²/n1 + σ2²/n2).
Hence, z = ((x̄1 − x̄2) − (µ1 − µ2)) / √(σ1²/n1 + σ2²/n2) is a standardized normal variate.
If H0: µ1 − µ2 = D0 is true, z becomes
z = ((x̄1 − x̄2) − D0) / √(σ1²/n1 + σ2²/n2)
Thus, we can use the z-test to test the null hypothesis; the procedure is similar to the z-test explained earlier. If X1 and X2 follow normal distributions and σ1 and σ2 are unknown, for small samples we can use the t-test. That is,
t = ((x̄1 − x̄2) − D0) / (ŝp·√(1/n1 + 1/n2))
The t-statistic in this case has n1 + n2 − 2 degrees of freedom; ŝp² is the pooled variance, ŝp² = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2). However, for large samples, even if σ1 and σ2 are unknown, we can use the z-test, with s1 and s2 in place of σ1 and σ2. That is,
z = ((x̄1 − x̄2) − D0) / √(s1²/n1 + s2²/n2)
where s1² = Σ(x1i − x̄1)²/(n1 − 1) and s2² = Σ(x2i − x̄2)²/(n2 − 1).
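The large-sample z for a difference of means can be sketched as follows; all summary figures here are hypothetical.

```python
import math

def two_sample_z(x1bar, x2bar, s1, s2, n1, n2, d0=0.0):
    # large-sample z for H0: mu1 - mu2 = d0, with s1, s2 replacing sigma1, sigma2
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (x1bar - x2bar - d0) / se

# hypothetical summary figures for two independent samples
z = two_sample_z(12.4, 10.9, 4.0, 3.5, 64, 49)
print(round(z, 2))  # -> 2.12
```

Since |z| = 2.12 exceeds 1.96, at α = 0.05 a two-sided test would reject H0: µ1 = µ2 for these assumed figures.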


A hypothesis that is tested with respect to the theoretical proportion of successes is that P = P0 (i.e., H0: P = P0). An alternative hypothesis is H1: P ≠ P0. From probability theory, we know that for large n the binomial distribution (with mean nP and standard deviation √(nPQ)) tends to a normal distribution. So, when we perform a binomial experiment n times, if the null hypothesis H0: P = P0 is true, then the following statistic:
z = (p − P0) / √(P0Q0/n)
is a standardized normal variate, where p is the proportion of successes in the sample and Q0 = 1 − P0. Hence, to test the null hypothesis, we can use the z-test as discussed above. The procedure involved in testing H0 is given below.
Step 1: Formulate the null and alternative hypotheses; for instance:
a)  H0: P = P0, H1: P ≠ P0
b)  H0: P ≥ P0, H1: P < P0
c)  H0: P ≤ P0, H1: P > P0
Step 2: Fix the α value and then determine the critical value from the normal distribution table: |zα/2| = 1.96 and 2.58 for α = 0.05 and 0.01 respectively; |zα| = 1.64 and 2.33 for α = 0.05 and 0.01 respectively.
Step 3: Compute p and q (= 1 − p) for the sample data.
Step 4: Compute the z-statistic, that is, z = (p − P0)/√(P0Q0/n), where Q0 = 1 − P0.
Step 5: For the two-sided test, case (a): reject H0 if |z| > zα/2.
             For the one-sided test, case (b): reject H0 if z < −zα, and
             For the one-sided test, case (c): reject H0 if z > zα.
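Steps 3 and 4 can be sketched with assumed figures: 60 library users in a sample of 150, testing H0: P = 0.5.

```python
import math

def proportion_z(p, p0, n):
    # z-statistic for H0: P = P0 (Step 4 of the procedure)
    q0 = 1.0 - p0
    return (p - p0) / math.sqrt(p0 * q0 / n)

# hypothetical data: 60 users out of 150 sampled, H0: P = 0.5
z = proportion_z(60 / 150, 0.5, 150)
print(round(z, 2))  # -> -2.45; |z| > 1.96, so reject H0 at alpha = 0.05
```

With these assumed numbers, |z| = 2.45 exceeds the two-sided critical value 1.96, so H0: P = 0.5 would be rejected.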

7.1 Difference between Two Proportions

A general hypothesis that is tested regarding the theoretical proportion of successes (in the two-sample case) is H0: P1 − P2 = P0 against a specified alternative hypothesis. The alternative hypothesis may be any one of the following:
H1: P1 − P2 ≠ P0
H1: P1 − P2 > P0
H1: P1 − P2 < P0
To test whether or not H0 is true against a specified H1, we consider the difference between the sample proportions p1 and p2 and its distribution in repeated samples. It has been shown in probability theory that, for repeated independent randomly drawn samples, p1 − p2 follows a normal distribution with mean P1 − P2 and variance P1Q1/n1 + P2Q2/n2. Hence
z = ((p1 − p2) − (P1 − P2)) / √(P1Q1/n1 + P2Q2/n2)
is a standardized normal variate. If H0: P1 − P2 = P0 is true, z becomes
z = ((p1 − p2) − P0) / √(p1q1/n1 + p2q2/n2)
However, if P0 = 0, we are actually testing H0: P1 = P2, in which case z becomes
z = (p1 − p2) / √(p̂q̂(1/n1 + 1/n2)), where p̂ = (n1p1 + n2p2)/(n1 + n2) and q̂ = 1 − p̂.
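The pooled form for the P0 = 0 case can be sketched as follows, with hypothetical counts:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled z-statistic for H0: P1 = P2 (the P0 = 0 case)."""
    p1, p2 = x1 / n1, x2 / n2
    p_hat = (x1 + x2) / (n1 + n2)      # pooled proportion
    q_hat = 1.0 - p_hat
    se = math.sqrt(p_hat * q_hat * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical counts: 30 successes out of 100 vs 20 out of 100
z = two_proportion_z(30, 100, 20, 100)
print(round(z, 2))  # -> 1.63
```

For these assumed counts |z| = 1.63 < 1.96, so at α = 0.05 a two-sided test would not reject H0: P1 = P2.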


The correlation coefficient is defined as:
r = Σ(xᵢ − x̄)(yᵢ − ȳ) / (n·sx·sy)
where n is the size of the sample, x̄ and ȳ are the sample means of X and Y respectively, and sx and sy are the sample standard deviations of X and Y respectively. We may wish to test the null hypothesis that ρ = 0 (where ρ is the population correlation coefficient) against a specified alternative hypothesis. We are actually using r to test the hypothesis about ρ, since r is an estimate of ρ. The test is usually done by determining whether the calculated value of r is significantly different from zero, which can be done using a t-test. For the purpose of testing the null hypothesis, the following t-statistic is computed:
t = r·√(n − 2) / √(1 − r²)
where the t-statistic has (n − 2) degrees of freedom. If the calculated value lies in the critical region, we have to reject the null hypothesis. If we accept the null hypothesis, it means that there is no correlation between the two variables other than that due to chance.
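The statistic can be computed directly. With an assumed r = 0.6 from n = 27 pairs, t = 0.6·√25 / √0.64 = 3.75:

```python
import math

def correlation_t(r, n):
    # t-statistic with n - 2 degrees of freedom for H0: rho = 0
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

t = correlation_t(0.6, 27)   # assumed r = 0.6 from n = 27 pairs
print(round(t, 2))           # -> 3.75
```

A t of 3.75 on 25 degrees of freedom lies well inside the usual critical regions, so for these assumed values the null hypothesis ρ = 0 would be rejected.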


The use of the t-test or z-test requires an assumption that the sample data come from a normal or binomial population (or at least that the sampling distribution tends to a normal distribution for a large sample). Both the z-test and the t-test are used to test null hypotheses concerning population means, variances and proportions. Hypotheses related to the independence of two criteria of classification, goodness of fit, the median of the population, etc. can be tested using statistical tests called non-parametric tests, which do not require as many assumptions as the z-test and t-test. One non-parametric test is discussed below.

9.1 Chi-square Test

The chi-square test is normally applicable in situations where the determination of population parameters such as the mean and standard deviation is not at issue. The data in question fall into discrete categories and are presented in a contingency table; the entries in a contingency table are known as cells. Let us consider the result of a survey of 100 adults (say, 50 females and 50 males). Among the 100 adults, 34 are library users and the rest are non-users. So, in this hypothetical example, we have two nominal variables: gender and library use. On categorizing the 100 adults using these two nominal variables, we get a contingency table like the one shown below:
[Table: 2×2 contingency table of gender (male/female) by library use (user/non-user); not reproduced]
Let us now try to find out whether or not the two categories, gender (male or female) and library use, are independent. Assume that there is no relationship between gender and library use. Under this assumption, compute the frequencies in each of the cells. Such frequencies are called theoretical frequencies; they are usually referred to as the expected numbers or expected frequencies. The logic for computing the theoretical frequencies for the data in the table above is as follows:
We have 50 each of males and females; the ratio is 1:1. So we would expect half of the library users to be males, and also half of the non-users to be males. That is, out of 100 adults (N) we have 50 males (row total). Out of 34 users (column total), how many of them are males?
Using the cross-multiplication technique, we have:
The number of male users = (50 × 34) / 100 = 17.
Similarly, we can compute the number of male non-users, female users and female non-users; the results appear in the corresponding cells of the contingency table. On generalizing the above logic, we can easily show that the following formula gives the theoretical frequencies (Eij) in each of the cells:
Eij = (ri × cj) / N
where
ri is the total number of observations in the ith row,
cj is the total number of observations in the jth column, and
Eij is the expected/theoretical frequency in the ijth cell (ith row and jth column).

9.1 Chi-square Test (continued)

Degrees of Freedom
Degrees of freedom are commonly discussed in relation to chi-square and other hypothesis-testing statistics. It is important to calculate the degrees of freedom when determining the significance of a chi-square statistic and the validity of the null hypothesis. Obviously, the theoretical frequencies need not be equal to the observed frequencies. If they are equal, one could perhaps conclude that there is no relationship between the two variables. If they are not equal, the question is: "Is the difference between the observed and expected frequencies statistically significant?" To answer this question, we use a statistic called χ² (chi-square), which has a parameter called degrees of freedom; values of χ² for a given number of degrees of freedom can be obtained from the chi-square table. It has been shown in statistics and probability theory that the random variable
χ² = Σi Σj (Oij − Eij)² / Eij
has a χ² distribution, where Oij and Eij are the observed and theoretical frequencies in the ijth cell and Σ Oij = N. In the case of an r × c contingency table, the degrees of freedom are given by (r − 1) × (c − 1), where r and c are the numbers of rows and columns respectively. Thus χ² is the sum, over all cells, of the squared differences between the observed and expected frequencies divided by the expected frequencies. We will reject the null hypothesis (in an analysis of a 2×2 contingency table) at the 0.05 level if the computed value of χ² is greater than the critical value of chi-square (that is, 3.841); the critical values can be obtained from the chi-square table. For the data given in the table above, χ² is given by:
χ² = 2.8824 + 1.4848 + 2.8824 + 1.4848 = 8.7344
Since the computed χ² (8.7344) is greater than the critical value of chi-square for one degree of freedom (3.841), we reject the null hypothesis that the variables are independent. This implies that there may be reason to believe that men and women differ in library use.
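A sketch of the whole computation follows. The observed counts below are assumptions reconstructed to be consistent with the χ² components quoted in the text (the original table is not reproduced), so treat them as illustrative rather than the module's data.

```python
def chi_square(observed):
    """Chi-square statistic for a contingency table given as a list of rows."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    total = sum(rows)
    chi2 = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            e = r * c / total                       # E_ij = r_i * c_j / N
            chi2 += (observed[i][j] - e) ** 2 / e   # (O_ij - E_ij)^2 / E_ij
    return chi2

# assumed counts: 24 male users, 26 male non-users, 10 female users, 40 female non-users
table = [[24, 26], [10, 40]]
print(round(chi_square(table), 4))  # -> 8.7344, matching the text
```

With these counts the expected frequencies are 17, 33, 17 and 33, reproducing the four components 2.8824, 1.4848, 2.8824 and 1.4848.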

9.2 Measures of Association

The statistical significance of the null hypothesis depends both on the strength of the observed relationship and on the size of the sample. Tests of statistical significance indicate only the likelihood that an observed relationship actually exists in the universe; they do not reveal how strong the relationship is. Further, a relationship may be statistically significant without being substantively important. There are a few measures that describe the strength of the association between two nominal variables. They are:
1. Contingency coefficient -- C = √(χ² / (χ² + N))
2. Phi-square measure -- φ² = χ² / N
3. Cramér's V measure -- V = √(χ² / (N × min(r − 1, c − 1))), where r and c are the numbers of rows and columns respectively.
All these measures are functions of the chi-square. The values of these measures are zero when no relationship between the two variables exists, which implies that the variables are independent; the value is one when the variables are perfectly related, which means that they are dependent. The maximum value of the contingency coefficient depends on the size of the contingency table; e.g. for a 2×2 table the maximum value is 0.707, and for a 3×3 table it is 0.816. In general, if the number of columns and the number of rows are equal (say k), the maximum value of C is given by
Cmax = √((k − 1) / k)
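These three measures can be computed directly from the χ² value of the 2×2 library-use example, using the formulas above:

```python
import math

def association_measures(chi2, n, r, c):
    C = math.sqrt(chi2 / (chi2 + n))               # contingency coefficient
    phi2 = chi2 / n                                # phi-square
    V = math.sqrt(chi2 / (n * (min(r, c) - 1)))    # Cramer's V
    return round(C, 3), round(phi2, 3), round(V, 3)

# chi-square and sample size from the 2x2 library-use example
print(association_measures(8.7344, 100, 2, 2))  # -> (0.283, 0.087, 0.296)
```

The association here is statistically significant but fairly weak, which illustrates the point that significance alone does not measure strength.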

9.3 Goodness-of fit test

The chi-square statistic is also used to test whether or not the probability distribution of the population is similar to that of the sample distribution. This type of test is often referred to as a goodness-of-fit test, illustrated with the following example: examine whether or not the distribution of transactions follows a negative binomial distribution for the data shown below.
[Table: observed distribution of transactions; not reproduced]
We will use the goodness-of-fit test for this purpose. The procedure is:
Step 1: Formulate the null and alternative hypothesis.
Ho: The sample data belong to a population which follows a negative binomial distribution.
H1: The sample data belong to a population which does not follow a negative binomial distribution.
Step 2: Compute the sample statistics (such as the mean, variance, etc.) and estimate the parameters of the theoretical probability distribution assumed in the null hypothesis. Use, as far as possible, the maximum likelihood estimators.
Step 3: Compute the probabilities under the assumption that Ho is true.
Step 4: Compute the theoretical or expected frequencies (expected frequency = n × P(x), where n is the sample size and P(x) is the theoretical probability distribution function; in this case, P(x) is the mass function of the negative binomial distribution).
Step 5: Decide α and determine the critical region (for α = 0.05). Find χ²α for (k − 1) degrees of freedom; k is the number of frequency classes.
Step 6: Compute χ² = Σ (Oi − Ei)² / Ei, where Oi is the observed frequency in the ith class, Ei is the expected frequency in the ith class, and k is the number of frequency classes.
Step 7: Reject Ho if χ² > χ²α.
The result of the goodness-of-fit test is shown in the Table below.
Thus, the computed χ² = 1.1472, while χ²α (α = 0.05, 5 degrees of freedom) = 11.070. Since χ² < χ²α, we accept the null hypothesis that the distribution of transactions in the population follows a negative binomial distribution.
Table 1: A goodness-of-fit test: Chi-square test

[Table 1 image not reproduced]
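Steps 6 and 7 above can be sketched generically. The observed and expected frequencies below are purely illustrative assumptions (the module's own table is not reproduced), compared against the critical value 11.070 (α = 0.05, 5 degrees of freedom).

```python
def gof_chi_square(observed, expected):
    # chi-square goodness-of-fit statistic summed over k frequency classes
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# purely illustrative frequencies over six classes (not the module's data)
obs = [40, 26, 14, 10, 6, 4]
exp = [38.0, 27.5, 15.0, 9.5, 6.0, 4.0]
chi2 = gof_chi_square(obs, exp)
print(round(chi2, 2), "accept H0" if chi2 <= 11.070 else "reject H0")
```

Because the observed frequencies sit close to the expected ones, χ² stays far below the critical value and the fitted distribution would be accepted, mirroring the module's conclusion for its own data.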


In this Unit, we have discussed the basics of the z-test, t-test and chi-square test.


