13. National Mapping of Science
Objectives
The objective of this Unit is to discuss the methodology of national mapping of science.
Summary
Several dimensions of national mapping of science are discussed. National mapping of science is based on many indicators, such as publication counts, citation counts, R&D budgets, skilled manpower statistics, etc. The methodology for undertaking national mapping of science is also discussed.
Introduction
Scientific performance is essentially a multidimensional concept, which cannot be measured by a single universal indicator. There may be a number of imperfect or 'partial' indicators, each representing a different aspect of research performance with a varying degree of success. Nonetheless, publications in refereed scientific journals constitute the most important indicator of research performance. Careful analysis of scientific output in the form of publications can provide deep insights for making inter-institution, inter-field and international comparisons of research performance.
There are several dimensions of national science mapping, and it can be used to study different aspects of the research output, such as:
- Channels of communication used for communicating research results by different nations or institutions;
- Cross-national assessment: How are research efforts distributed among different nations? Are they distributed evenly, or concentrated among only a few nations? It can also be used to study the regional distribution of scientific output within a country, or to assess different performing sectors such as academic institutions or publicly funded research institutions;
- Inter-institution comparison: How is the research effort distributed among different institutions? Which are the leading national and international institutions in the field, and what are their relative strengths and weaknesses?
- Inter-field comparison, i.e. assessing the relative emphasis of different nations on different disciplines like physics, chemistry, mathematics, engineering, etc., or on sub-fields within a broad discipline;
- Developing activity and attractivity profiles of the identified nations and institutions in different fields of science and technology, based on the output in scientific journals, and comparing the two profiles;
- Examining the connectivity of a nation's research output to mainstream science and its impact, by examining the impact factors of the journals where the research results are published and their pattern of citations;
- Examining the co-authorship and collaboration patterns of different nations in different fields of science and technology;
- Identifying the most prolific and highly cited authors in science and technology as a whole or in a particular discipline;
- Modeling the growth trends of world research output vis-à-vis that of the different nations under study.
Indicators on which mapping is based
The mapping exercise is based primarily on publication counts and citation counts. Publications are used to measure the quantity of output, while citations are used to measure the impact or influence of the scientific output.
Publication Counts
The count of publications in peer reviewed scientific journals is the most frequent measure of scientific performance and can serve as a basic S&T activity indicator. It constitutes a key element in every evaluation, and its use is widespread. The research produced by the institutions of a country, to a great extent, reflects the governmental science policy as well as national interests and priorities. Counting of scientific publication output is the most basic technique of scientometrics, in which the number of publications by an individual, an institution, a country or a group of countries (like ASEAN, SAARC, OECD and EEC) is aggregated. By making use of publication counts it has been possible to point out the scientific centers, sub-centers and peripheries of world science. It can also be helpful in identifying the outstanding scientists of a country in a given field.
Scientific output in the form of publications has been used to study the pattern of co-authorship and collaborations. Productivity data in case of a country can be used to build up science city map of a country. Such information would help science planners to strengthen those cities that need specific augmentation. It can also be used to map out and monitor the mobility of scientists as well. The mobility can be between various cities within the country as well as between two countries. Data on mobility between countries would be particularly useful for developing countries, where there is a need to establish strong links with scientifically developed nations.
Publication counts have been attacked mainly on the ground that they do not indicate the quality of work. A mere count of publications may lead to an incorrect inference about the contribution an individual makes to the extension of knowledge. In view of this, scientists have started using counts of citations, or surrogate measures of quality based on the impact factor of journals developed by the Institute for Scientific Information (now Thomson Reuters, USA) and available in the Journal Citation Reports, published every year by Thomson Reuters as a supplementary volume to the Science Citation Index. The Journal Citation Reports is available on the Web.
Citation Counts
While publication counts measure output, citation counts are considered to go one step further and address questions of impact, influence, and transfer of knowledge. Garfield suggested the technique of counting citations to individual papers in 1963. Citation counts are the basic data for national mapping exercises and the most active area of scientometrics. Citation counts provide quantitative information on the visibility of papers. The technique rests upon the fact that scientists cite earlier publications because the work contained in them is in some way relevant to their own. The basic assumption is that a citation reflects the influence of an article relative to others and thus the impact of scientific research. The number of citations to a publication is generally recognized as an indicator of the influence of a piece of published work on subsequent scientific output. However, citations have their own critics. The basic criticism is that not all citations are made for scholarly reasons; there are reasons for citing besides the scholarship of the work. Other criticisms include the inadequate coverage of journals by the Science Citation Index Expanded, especially from Third World countries, the time lag between the date of publication and the date of citation, the cost involved, field-to-field variations, and the time period required for citations to reach their highest frequency. In spite of these inadequacies, empirical evidence suggests a high correlation between citation counts and other measures of impact, such as location in a prestigious university, being listed in important biographies of scientists, receiving scientific awards and recognition by colleagues [1].
Surrogate Measures of Citations
Another alternative for measuring the impact of scientific performance is to use surrogate measures of citations based on the citation frequency of the journal in which the article appears. In this procedure, instead of counting the actual citations received by an article in a certain time period, the article is weighted by a journal quality indicator. This drastically reduces the time lag inherent in the citation process and the cost of acquiring citation data. The use of a journal quality indicator is based on the assumption that all papers appearing in a journal receive approximately the same number of citations. In practice, however, this assumption does not hold.
The most commonly used journal quality indicator is the Impact Factor (IF) suggested by Garfield [2], which is available annually in the Journal Citation Reports. Besides the Impact Factor, the other measures are the Journal Citation Score developed by Moed [3] and the Influence Weight developed by Narin [4]. All these measures are independent of the size or periodicity of the journal, as they are constructed on a per article basis. However, these other measures are not in vogue. A detailed description of the Impact Factor is given in the succeeding paragraphs under the indicators used for computing national performance.
Methodology to be adopted for undertaking mapping
Before undertaking a mapping exercise, the researcher should decide on a proper database to be used and the time period for which the study is to be undertaken. The time period should not be too small, like one or two years; it should be large enough to reveal trends. The quantum of data to be used will vary according to the choice of countries to be compared and the period of study. The conventional method of undertaking a mapping exercise was to prepare index cards for each identified record, containing the bibliographic information of the publication: author(s) and their affiliation, subject studied, type of document used for publishing the research results, type of collaboration, and other details such as the country of publication of the journal, the impact factor of the journal as reflected in the Journal Citation Reports, and the citations received by the article.

However, with the evolution of web based databases like the Science Citation Index, now Web of Science (Science Citation Index-Expanded) of Thomson Reuters, and Scopus of Elsevier, the method has changed considerably. The data for a group of nations, an individual country or a subject can be downloaded directly from these databases and then converted into a database for analysis. The variables that need downloading may include the name, affiliation and country of the author(s); the type of publication (journal article, monograph, conference proceedings, review, letter); the type of institution (academic, research, industrial); and the name of the journal with its country of publication. The downloaded data are then enriched with other information, such as the impact factor or the normalized impact factor of the journals in which the papers were published, and the type of collaboration, viz. local, domestic and international. After enrichment, the data are analyzed to meet the various objectives mentioned in the above paragraphs. Subject databases like BIOSIS and PubMed can also be used for undertaking a mapping exercise.
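To make the enrichment step concrete, here is a minimal Python sketch, assuming the records have already been downloaded and saved as a CSV file. The file names, column names (countries, institutions, journal) and the journal-to-impact-factor lookup are illustrative assumptions, not the actual export format of Web of Science or Scopus.

```python
import csv
from collections import Counter

def load_impact_factors(path):
    """Build a journal -> impact factor lookup from a file prepared
    beforehand (e.g. from Journal Citation Reports data)."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["journal"]: float(row["impact_factor"])
                for row in csv.DictReader(f)}

def collaboration_type(countries, institutions):
    """Classify a paper as local (one institution), domestic (several
    institutions, one country) or international (several countries)."""
    if len(set(countries)) > 1:
        return "international"
    return "domestic" if len(set(institutions)) > 1 else "local"

def enrich(records_path, if_path):
    """Attach impact factor and collaboration type to each downloaded record."""
    impact_factors = load_impact_factors(if_path)
    enriched = []
    with open(records_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Hypothetical layout: semicolon-separated author countries
            # and institutions, plus the journal name.
            countries = row["countries"].split(";")
            institutions = row["institutions"].split(";")
            row["impact_factor"] = impact_factors.get(row["journal"], 0.0)
            row["collaboration"] = collaboration_type(countries, institutions)
            enriched.append(row)
    return enriched

if __name__ == "__main__":
    records = enrich("records.csv", "journal_if.csv")
    print(Counter(r["collaboration"] for r in records))
```

Once enriched in this way, the records can be aggregated by country, institution, field or journal to compute the indicators described in the next section.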
Indicators used for computing national performance
Several scientometric indicators have been suggested in the literature to measure national performance. Some of these are described below, as describing them all is beyond the scope of this chapter.
Activity Index
Activity Index was first proposed by Frame [5] and later elaborated by Schubert and Braun [6]. It characterizes the relative research effort a nation or an institution devotes to a given subject field or sub-field, and takes into consideration the effect of the size of the country as well as the size of the sub-specialty. The Activity Index (AI) is defined as follows:
AI = {(the country's share in the world's publication output in the given field) / (the country's share in the world's publication output in all fields)} × 100

Mathematically, AI = {(N_ij / N_io) / (N_oj / N_oo)} × 100, where

N_ij: number of publications of country i in field j;
N_io: number of publications of country i in all fields;
N_oj: number of publications of all countries in field j;
N_oo: number of publications of all countries in all fields.

Here 'all countries' implies the countries included in the study.
The value of AI=100 indicates that the research effort of a country/institution in a given field corresponds precisely to the world's average; AI >100 reflects higher than average activity and AI <100 lower than average effort dedicated to the field. The major advantage of using activity index over raw (absolute) count of publications is that it takes into account both the size of the nation/institution as well as the size of the discipline.
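The formula translates directly into code. A minimal Python sketch (function and variable names are ours), checked against the USA figures for sub-field B in Table 1 below:

```python
def activity_index(n_ij, n_io, n_oj, n_oo):
    """AI = (country's share of world output in field j) /
    (country's share of world output in all fields), scaled by 100."""
    return (n_ij / n_io) / (n_oj / n_oo) * 100

# USA in sub-field B of laser S&T (data from Table 1 below):
print(round(activity_index(347, 1404, 785, 3174)))  # -> 100
```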
Attractivity Index
Like the absolute publication output, the absolute impact is also confounded by the size of the country and the size of the field. Hence, Schubert and Braun [7] suggested the Attractivity Index to measure impact. The Attractivity Index characterizes the relative impact that the publications of a country/institution make in a given field or sub-field, as reflected by the citations they attract. The Attractivity Index (AAI) is defined as follows:
AAI = {(the country's share in citations attracted by publications in the given field) / (the country's share in citations attracted by publications in all science fields)} × 100

Mathematically, AAI = {(C_ij / C_io) / (C_oj / C_oo)} × 100, where

C_ij: citations of country i in field j;
C_io: citations of country i in all fields;
C_oj: citations of all countries in field j;
C_oo: citations of all countries in all fields.
AAI = 100 indicates that the country's citation impact in the given field corresponds precisely to the world average; AAI > 100 reflects a higher than average impact, and AAI < 100 a lower than average impact.
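Because AAI has the same functional form as AI, with citations in place of publications, the helper from the previous sketch can be reused; the figures below are the USA values for sub-field B from Table 2:

```python
# Reusing activity_index() from the previous sketch with citation data:
# C_ij = 1178, C_io = 4604, C_oj = 2166, C_oo = 8418 (USA, sub-field B, Table 2)
aai = activity_index(1178, 4604, 2166, 8418)
print(round(aai, 1))  # -> 99.4, i.e. close to the world average of 100
```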
Impact Factor
At present, there is no better indicator applicable in practice for characterizing the scientific impact of journals than the impact factor suggested by Garfield [8]. Although there are other indices in bibliometrics, like the influence factor suggested by Narin [9], their use has not become widespread. Garfield's impact factor, on the other hand, has become institutionalized knowledge. Garfield's impact factor "is basically a ratio of the number of citations a journal receives to the number of papers published over a period of time". The Journal Citation Reports gives yearly impact factors for the journals covered by the Science Citation Index. The impact factor of a journal X for a particular year is calculated by dividing the number of citations journal X received during that year for the articles it published in the previous two years by the number of articles it published in those two years.

Mathematically, the Impact Factor (IF) of a journal X for the year 2011 is calculated as follows:

IF of journal X for 2011 = (Citations received in 2011 by the articles journal X published in 2009 and 2010) / (Number of articles published by journal X in 2009 and 2010)
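A small sketch of this two-year window computation; the citation and paper counts are invented for illustration:

```python
def impact_factor(citations_received, papers_published):
    """Two-year impact factor: citations received in year Y to the articles
    a journal published in years Y-1 and Y-2, divided by the number of
    those articles."""
    return citations_received / papers_published

# Illustrative figures: journal X published 120 articles in 2009-2010,
# which were cited 300 times during 2011.
print(impact_factor(300, 120))  # -> 2.5
```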
Normalized Impact Factor
The impact factor of journals varies from one field of knowledge to another; hence, it is necessary to normalize the impact factor when comparing performance across disciplines. Several authors [10] have suggested different methods to normalize the impact factor, but the method suggested by Sen [11] is simple and can be applied easily. In this method, review journals are excluded while calculating the normalized impact factor, as they have high impact factors compared to other journals.
Mathematically, NIF_ij = {GIF_ij / Max(GIF)_j} × 10, where

NIF_ij is the Normalized Impact Factor of journal i in sub-field j;
GIF_ij is the Garfield Impact Factor of journal i in sub-field j; and
Max(GIF)_j is the highest impact factor in the set of journals of sub-field j.
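A one-line helper reproduces the worked example given in the Calculations section at the end of this Unit (a journal with IF 1.3 in a field whose highest IF is 5.2):

```python
def normalized_impact_factor(gif, max_gif_in_subfield):
    """Sen's NIF: a journal's impact factor scaled against the highest
    impact factor in its sub-field, multiplied by 10."""
    return gif / max_gif_in_subfield * 10

print(normalized_impact_factor(1.3, 5.2))  # -> 2.5
```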
Citation per Paper (CPP)
This is the most widely used indicator in bibliometric studies. It is a relative indicator computed as the average number of citations per publication, i.e. the ratio of the total number of citations to the total number of publications. It normalizes the wide disparity in the volume of literature published by prolific publishing nations and smaller nations, allowing a meaningful comparison of research influence. In cases where citation data are not available, one can use the normalized impact per paper, described below.
Normalized Impact per Paper (NIMP)
Based on the publication pattern and the normalized impact factor of the journals where the research results are published, the normalized impact per paper suggested by Nagpaul [12] can be calculated. The normalized impact per paper is basically an average, i.e. (total normalized impact / total number of papers).
Relative Citation Impact (RCI)
This indicator [13] was developed by the Institute for Scientific Information (now Thomson Reuters, USA) for calculating science and engineering indicators. RCI measures both the influence and the visibility of a nation's research in a global perspective. RCI is the ratio of a country's share of world citations (per cent citations) to the country's share of world publications (per cent publications). RCI = 1 indicates that the country's citation rate is equal to the world citation rate; RCI > 1 indicates that it is higher, and RCI < 1 that it is lower than the world citation rate.
Relative Citation Rate (RCR)
This measure was suggested by Schubert and Braun [14]. It is defined as the ratio of the actual number of citations received by a set of papers to the expected number of citations. The expected number of citations is calculated by summing the impact factors of the periodicals in which the publications appeared. RCR = 1 indicates that the paper(s) received as many citations as expected; RCR > 1 indicates more citations than expected, and RCR < 1 fewer citations than expected. This indicator eliminates differences in the publication and citation practices of different subfields.
Number of High Quality Papers
This measure was suggested by Nagpaul [15]. To calculate the number of high quality papers, one first calculates the average citations per paper or the average normalized impact per paper. Papers whose citations per paper or normalized impact per paper exceed a threshold (twice or more) of these average values are considered high quality papers.
Publication Effective Index (PEI)

Nagpaul [16] has also suggested this measure. It indicates whether the impact of a country's research is commensurate with its publication effort. The indicator is the ratio of the proportion of the total normalized impact (TNIMP %) to the proportion of the publications (TNP %).
Relative Quality Index (RQI)
This indicator is the ratio of the proportion of high quality papers (NHQ %) to the proportion of total publications (TNP %), where NHQ % = (number of high quality papers of a country or an institution / total number of high quality papers) × 100. The measure reflects the incidence of high quality papers in a field for a country or an institution. A value of RQI > 1 indicates higher than average quality, whereas a value of RQI < 1 indicates lower than average quality.
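The ratio indicators described above (CPP, RCI, RCR, number of high quality papers, PEI and RQI) all reduce to one- or two-line computations. A minimal sketch collecting them; the printed figures use the forest fungal research numbers from the worked examples later in this Unit:

```python
def cpp(total_citations, total_papers):
    """Citations per paper."""
    return total_citations / total_papers

def rci(pct_world_citations, pct_world_publications):
    """Relative Citation Impact: share of world citations over
    share of world publications."""
    return pct_world_citations / pct_world_publications

def rcr(actual_citations, expected_citations):
    """Relative Citation Rate; expected citations = sum of the impact
    factors of the journals in which the papers appeared."""
    return actual_citations / expected_citations

def n_high_quality(values, factor=2.0):
    """Count papers whose citations per paper (or normalized impact per
    paper) are at least `factor` times the average value."""
    avg = sum(values) / len(values)
    return sum(1 for v in values if v >= factor * avg)

def pei(tnimp_pct, tnp_pct):
    """Publication Effective Index."""
    return tnimp_pct / tnp_pct

def rqi(nhq_pct, tnp_pct):
    """Relative Quality Index."""
    return nhq_pct / tnp_pct

# Forest fungal research example (see Calculations section below):
print(round(cpp(13679, 854)))      # -> 16
print(round(rci(26.0, 22.5), 1))   # -> 1.2
```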
h-index

The measure was proposed by Hirsch [17]. A scientist has an h-index of h if h of his/her N articles have at least h citations each, and the other (N - h) articles have fewer than h citations each. An h-index of 10 thus means that 10 of the articles published by the scientist have received at least 10 citations each.
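A minimal sketch of the computation; the citation list is illustrative:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 18, 12, 10, 10, 7, 3, 1]))  # -> 6
```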
Besides the above-mentioned indicators, several others have been suggested in the literature. However, their description is beyond the scope of this chapter.
Illustrations
The application of the above indicators has been demonstrated below by using suitable examples from various fields.
Channels used for communicating research results by different countries
In several studies published in the literature it has been observed that journal articles, including reviews, account for the maximum number of publications. The rest of the research results may be published as conference proceedings, patents, technical reports, letters to the editor, book chapters or books, depending upon the field of study. For instance, in a study undertaken by Garg and Padhi [18] on the international output in laser science and technology for the period May 1990 - April 1991, it was found that journal articles accounted for the highest share, 74%, of the world publication output in laser science and technology. The remaining 26% were patents, technical reports, conference proceedings, etc.
Cross national assessment of research output
This is demonstrated here using the global output in the field of laser science and technology for the period May 1990 - April 1991. Table 1 presents the data on the publication output and activity index of different countries in different sub-specialties of laser science and technology. The total output came from 50 countries, but was mainly concentrated in the 14 countries listed in Table 1. From the data presented in Table 1, it is observed that, as in other fields of science and technology, the USA tops the list, followed by Japan and the erstwhile USSR. These three countries together produced about 70% of the total output. Further analysis of the AI data indicates that the AI for the USA is almost equal across all the sub-specialties, indicating that it has given almost equal priority to theoretical, experimental and applied laser research. The values of AI for Japan, Germany and France show that the research effort in these countries is concentrated on applications of laser research. All other countries except the UK and Switzerland have given priority to theoretical laser research, while the UK and Switzerland have given priority to experimental laser research. From this it can be inferred that different countries emphasize different specialties within a field of science and technology.
Table 1: Publication output and Activity Index of different countries in sub-specialties of Laser S&T
Country | B Articles (AI) | C Articles (AI) | D Articles (AI) | Total
USA | 347 (100) | 699 (102) | 358 (96) | 1,404
JPN | 51 (47) | 242 (112) | 150 (128) | 443
USSR | 125 (136) | 174 (96) | 72 (73) | 371
UKD | 43 (75) | 126 (112) | 63 (102) | 232
GERM | 35 (97) | 58 (82) | 53 (137) | 146
FRA | 30 (85) | 71 (103) | 41 (109) | 142
CAN | 29 (140) | 30 (73) | 25 (112) | 84
ITA | 14 (123) | 20 (89) | 12 (98) | 46
PRC | 27 (148) | 31 (89) | 14 (73) | 72
IND | 27 (182) | 21 (72) | 13 (80) | 61
ISR | 19 (154) | 16 (66) | 15 (113) | 50
NLD | 17 (153) | 17 (78) | 11 (92) | 45
SWT | 1 (11) | 27 (146) | 10 (99) | 38
AUS | 20 (202) | 14 (72) | 6 (57) | 40
Total | 785 | 1,546 | 843 | 3,174
B: Theoretical, C: Experimental, and D: Application
Attractivity profile of different nations in different sub-specialties
AAI helps to understand whether the field of highest activity is also the field of highest impact. This is demonstrated here using the normalized impact factor in place of citations for calculating the attractivity index; the dataset is the same as was used above for calculating AI. The results in Table 2 indicate that the AAI for the USA in all the sub-specialties of laser science and technology is almost equal, like its activity index. However, in the case of Japan and Italy, the values of AAI are greater for experimental laser research, unlike their activity indices, which are higher in the sub-specialties of applications and theoretical laser research respectively. The attractivity and activity profiles for the USSR, France, Canada, China, India, Israel, the Netherlands, Switzerland and Australia are similar.
Table 2: Attractivity profile of different countries in sub-specialties of laser S&T*
Country | B Impact (AAI)** | C Impact (AAI)** | D Impact (AAI)** | Total
USA | 1178 (100) | 2436 (100) | 990 (102) | 4604
JPN | 154 (54) | 708 (120) | 250 (107) | 1112
USSR | 133 (193) | 109 (77) | 26 (46) | 268
UKD | 113 (82) | 297 (105) | 125 (111) | 535
GERM | 95 (99) | 183 (92) | 96 (122) | 374
FRA | 94 (97) | 203 (100) | 85 (106) | 382
CAN | 93 (154) | 93 (74) | 49 (99) | 235
ITA | 30 (97) | 68 (107) | 22 (87) | 120
PRC | 58 (144) | 77 (94) | 19 (58) | 154
IND | 70 (194) | 45 (60) | 26 (87) | 141
ISR | 53 (153) | 50 (70) | 32 (112) | 135
NLD | 46 (132) | 72 (100) | 17 (60) | 135
SWT | 2 (7) | 91 (146) | 24 (97) | 117
AUS | 47 (172) | 44 (78) | 15 (67) | 106
Total | 2166 | 4476 | 1776 | 8418

* Based on publication output in scientific journals. ** AAI rounded off to the nearest whole number.
Using a similar methodology, researchers can study the regional distribution of science within a country. For instance, in a study carried out by Garg and Dutt [19] on the regional distribution of Indian science using publication data for the year 1984, it was observed that science in India is mainly concentrated in the states of Uttar Pradesh, Maharashtra, West Bengal and Delhi, with almost 50% of the Indian scientific output published from these four states. Four metropolitan cities, namely Delhi, Mumbai, Kolkata and Bangalore, published more than 53% of the Indian scientific output.
Inter-institutional assessment of research output
Using the methodology described in the above paragraphs, researchers can make an inter-institutional assessment of the research output. An analysis of the Indian research output in science and technology for the year 1997 indicates that the total Indian scientific output came from 1107 institutions located in different parts of India. Of these, 29 institutions contributed 85 or more papers each and accounted for 45% of all publications. These institutes belonged to different performing sectors, such as academic institutions, engineering institutions, medical institutions and publicly funded research agencies like the Council of Scientific and Industrial Research (CSIR), the Indian Council of Agricultural Research (ICAR) and the Indian Council of Medical Research (ICMR). Table 3A below presents the data on the absolute output and activity index of the five most prolific Indian institutions in five broad disciplines, and Table 3B gives the data on the absolute impact and attractivity index of these five institutes. The values of AI and AAI for IISC were highest (141 for AAI) in the biological sciences, while in the other two disciplines where it had higher values of AI, it had lower values of AAI. For the remaining three institutions, the values of AAI were higher in the disciplines which had higher values of AI. In the case of AIIMS, AAI was also quite high in the biological sciences, which had a comparatively low value of AI. Thus, an institution can be active in different fields, and one institution can emphasize more than one field.
Table 3A: Absolute output (AI) of five prolific Indian institutions in Science and technology
Institutions | Biological Sciences | Chemical Sciences | Engineering Sciences | Medical Sciences | Physical Sciences | Others | Total
IISC | 86 (126) | 96 (82) | 74 (126) | 10 (11) | 155 (119) | 126 | 547
BARC | 14 (32) | 65 (103) | 77 (208) | 15 (26) | 131 (159) | 44 | 346
TIFR | 33 (93) | 10 (111) | 6 (20) | 4 (90) | 189 (280) | 41 | 283
AIIMS | 33 (104) | 0 (0) | 0 (0) | 216 (517) | 0 (0) | 4 | 254
BHU | 52 (173) | 26 (64) | 29 (112) | 38 (96) | 52 (91) | 44 | 241
Others | The other 24 prolific institutions are not shown in the Table
Total | 1383 | 1878 | 1185 | 1821 | 2635 | 2165 | 11067
Table 3B: Absolute Impact (AAI) of five prolific Indian institutions in Science and technology
Institutions | Biological Sciences | Chemical Sciences | Engineering Sciences | Medical Sciences | Physical Sciences | Others | Total
IISC | 222 (141) | 261 (114) | 175 (115) | 24 (12) | 349 (106) | 295 | 1326
BARC | 26 (28) | 156 (113) | 179 (193) | 24 (20) | 309 (156) | 99 | 793
TIFR | 105 (112) | 26 (19) | 14 (15) | 6 (5) | 566 (287) | 73 | 790
AIIMS | 80 (135) | 0 (0) | 0 (0) | 412 (541) | 0 (0) | 7 | 499
BHU | 88 (174) | 40 (54) | 56 (112) | 64 (98) | 99 (93) | 79 | 426
Others | The other 24 prolific institutions are not shown in the Table
Total | 2734 | 3982 | 2691 | 3505 | 3730 | 4327 | 22969
IISC: Indian Institute of Science, BARC: Bhabha Atomic Research Centre, TIFR: Tata Institute of Fundamental Research, AIIMS: All India Institute of Medical Sciences, BHU: Banaras Hindu University
Impact of research output
The most prolific institutions made 48% of the total impact and produced 46% of all high quality papers published from India. The majority of the papers published by these institutes appeared in journals originating from the scientifically advanced countries of the West. This indicates that the research performed at these institutes evokes considerable interest among the western scientific community, and thus forms a part of mainstream science. Table 4 provides information about various impact indicators, such as the Normalized Impact per Paper (NIMP/paper), the Publication Effective Index (PEI) and the Relative Quality Index (RQI). The average value of NIMP/paper for the Indian publication output is 2.1. Among the prolific institutions, TIFR had the highest value of NIMP/paper (2.8). As with NIMP/paper, the value of PEI is also highest for TIFR, closely followed by IISC; for BARC, too, the value of PEI is more than 1. This implies that these institutes earn more impact than is commensurate with their publication effort. The standing of different institutions on the basis of the incidence of high quality papers can be judged from the value of RQI. Here also TIFR had the highest value (3.4), followed by AIIMS (1.6) and IISC (1.2). This indicates that these institutes have a more than average incidence of high quality papers, while the remaining two (BARC and BHU) have a less than average incidence.
Table 4: Impact indicators of prolific institutions
Institutions | TNP | TNIMP | NIMP/paper | NHQ | PEI | RQI
IISC | 547 | 1326 | 2.4 | 55 | 1.2 | 1.2
BARC | 346 | 793 | 2.3 | 19 | 1.1 | 0.7
TIFR | 283 | 790 | 2.8 | 79 | 1.3 | 3.4
AIIMS | 254 | 499 | 2.0 | 33 | 0.9 | 1.6
BHU | 241 | 426 | 1.8 | 10 | 0.8 | 0.5
Others | The other 24 institutions are not shown in the Table
Total | 11067 | 22969 | 2.1 | 903 | 1.0 | 1.0
International connectivity of the research output
The international connectivity of research output can be examined using parameters such as papers in non-SCI journals vs. SCI indexed journals, papers in domestic journals vs. international journals, the impact factors of the journals where the research results are published, and the pattern of citations of the research output. If a larger number of papers are published in high impact factor international journals indexed by SCI, then the research output is internationally connected. This argument rests on the fact that international journals indexed by SCI with high impact factors have a wider readership, and hence reflect higher potential connectivity, compared to domestic journals, which have a smaller circulation. Similarly, if a significant number of the papers published by a country are cited in the international literature, then that field of study is an integral part of mainstream science. In a study undertaken by Jain and Garg [20] in the discipline of laser science and technology, it was observed that laser science and technology research performed in India was internationally connected and formed part of mainstream science.
Co-authorship and collaboration pattern
The research output can also be used to study co-authorship and collaboration patterns. This is dealt with separately in the chapter on scientific collaboration.
Modeling the growth trends of world research output vis-à-vis India
In a study undertaken by Jain and Garg [21] on the world and Indian scientific output in the field of laser science and technology, it was found that the pattern of growth follows an S-shaped curve, with an initial slow growth, followed by exponential growth, and finally a slowing down towards a saturation level. A detailed description of modeling growth trends is given in a separate chapter on modeling.
Conclusion
National mapping can thus be used for identifying national strengths and weaknesses on a comparative basis. For this, a group of different countries can be selected for comparison. By using different scientometric indicators like the activity index and the attractivity index, it can be determined whether a country is doing more or less research in a particular field or sub-field as compared to other nations, whether it is doing better than others, and whether it is doing more or better research in one field compared to another. Mapping can also identify topics with a significant increase in world publication output (hot topics), topics with a significant decrease (cold topics), and topics with no significant change (stable topics). If a country publishes much less than the world average on a hot topic, it implies that the country has failed to pick up new developments, and this needs some exploration. For stable topics, equal to or above world average activity is a sign of healthy development, while significantly lower activity indicates a weakness. For cold topics, significantly higher activity indicates that a country is putting too much effort into a topic where the scientific payoff is lean.
Calculations for different indicators
Activity Index = {(N_ij / N_io) / (N_oj / N_oo)} × 100 ... (1)

Based on the data given in Table 1 for the USA for sub-field B: N_ij = 347, N_io = 1404, N_oj = 785, N_oo = 3174.

Using equation (1): {(347/1404) / (785/3174)} × 100 = (0.2471/0.2473) × 100 = 99.9 ≈ 100

Attractivity Index = {(C_ij / C_io) / (C_oj / C_oo)} × 100 ... (2)

Based on the data presented in Table 2 for the USA for sub-field B: C_ij = 1178, C_io = 4604, C_oj = 2166, C_oo = 8418.

Using equation (2): {(1178/4604) / (2166/8418)} × 100 = (0.2558/0.2573) × 100 = 99.4 ≈ 100
Normalized Impact Factor: Suppose we have a journal in a field whose impact factor is 1.3 and the highest impact factor in the field is 5.2. Then the normalized impact factor will be (1.3/5.2) × 10 = 2.5.
Normalized Impact per Paper: Based on data given in Table 4
Total Normalized Impact (TNIMP) = 22969 and Total number of papers (TNP) = 11067
Hence Normalized Impact per Paper = 22969/11067 = 2.07 ≈ 2.1
Citation per Paper (CPP) and Relative Citation Impact (RCI): In a study on forest fungal research, the total number of publications was 854, i.e. 22.5% of the total global output. These papers received 13679 citations, i.e. 26.0% of the global citations. Hence CPP = 13679/854 ≈ 16 and RCI = 26.0/22.5 ≈ 1.2.
Publication Effective Index (PEI): Using the data in Table 4 above, the value of PEI for IISC:
TNIMP% = (1326/22969) × 100 = 5.77 and TNP% = (547/11067) × 100 = 4.94
PEI = (TNIMP% / TNP%) = (5.77/4.94) = 1.168 ≈ 1.2
Relative Quality Index (RQI): Using the data in Table 4 above, the value of RQI for IISC:
NHQ% = (55/903) × 100 = 6.09 and TNP% = (547/11067) × 100 = 4.94
RQI = (NHQ% / TNP%) = (6.09/4.94) = 1.23 ≈ 1.2
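These worked examples can be replicated in a few lines of Python using the figures from Tables 1, 2 and 4:

```python
# Activity Index for the USA in sub-field B (Table 1)
print(round((347 / 1404) / (785 / 3174) * 100))    # -> 100

# Attractivity Index for the USA in sub-field B (Table 2)
print(round((1178 / 4604) / (2166 / 8418) * 100))  # -> 99, i.e. ~100

# Normalized impact per paper for India as a whole (Table 4 totals)
print(round(22969 / 11067, 1))                     # -> 2.1

# PEI and RQI for IISC (Table 4)
tnp_pct = 547 / 11067 * 100        # ~4.94
tnimp_pct = 1326 / 22969 * 100     # ~5.77
nhq_pct = 55 / 903 * 100           # ~6.09
print(round(tnimp_pct / tnp_pct, 1))  # PEI -> 1.2
print(round(nhq_pct / tnp_pct, 1))    # RQI -> 1.2
```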
References
1. Lindsey, D. Using citation counts as a measure of quality in science: Measuring what's measurable rather than what's valid. Scientometrics, 15 (1989) 189-203.
2. Garfield, E. Citation analysis as a tool in journal evaluation. Science, 178 (1972) 471-479.
3. Moed, H.F. et al. The use of bibliometric data for the measurement of university research performance. Research Policy, 14 (1985) 131-149.
4. Narin, F. Evaluative Bibliometrics: The use of publication and citation analysis in the evaluation of scientific activity. Cherry Hill, New Jersey, Computer Horizons Inc, 1976.
5. Frame, J.D. Mainstream research in Latin America and the Caribbean. Interciencia, 2 (1977) 143-148.
6. Schubert, A., Braun, T. Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9 (1986) 281-291.
7. Ibid 6.
8. Op cit 2.
9. Op cit 4.
10. Garg, K.C., Kumar, S., Dutt, B. Simple technique to normalize impact factor of journals. DESIDOC Journal of Library and Information Technology, 31(5) (2011) 371-376.
11. Sen, B.K. Normalized impact factor. Journal of Documentation, 48(3) (1992) 318.
12. Nagpaul, P.S. Contribution of Indian universities to the mainstream scientific literature: A bibliometric assessment. Scientometrics, 32(1) (1995) 11-36.
13. Kumari, G.L. Synthetic organic chemistry research: An analysis by scientometric indicators. Scientometrics, 80(3) (2009) 559-570.
14. Op cit 6.
15. Op cit 12.
16. Op cit 12.
17. Hirsch, J.E. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the USA, 102(46) (2005) 16569-16572.
18. Garg, K.C., Padhi, P. Scientometrics of laser research literature as viewed through the Journal of Current Laser Abstracts. Scientometrics, 45 (1999) 251-268.
19. Garg, K.C., Dutt, B. Geographical distribution of the Indian science activity. In: Emerging Trends in Scientometrics, edited by Nagpaul, P.S., Garg, K.C., Gupta, B.M. et al. Allied Publishers Ltd, New Delhi, 1999.
20. Jain, A., Garg, K.C. Laser research in India: Scientometric study and model projections. Scientometrics, 23(3) (1992) 395-415.
21. Ibid 20.