INTRODUCTION
Publication of papers and scientific articles is important for sharing
experiences and for creating a platform to which future generations can refer if
needed. Presently, publications are important for other reasons too, viz. career
advancement, competitive grants, awards, etc. In the scientific field, people have
started talking about the impact factor, citation index, etc., and scientists have
started evaluating the quality of scientific journals and the articles published in
them with reference to these terms.
Research publication in Science Citation Index (SCI) journals has become a
prerequisite for a researcher or a professional seeking placement in reputed
academic or research organizations. The Indian Council of Medical Research (ICMR)
gives weightage to research publications in SCI journals when considering a
research scholar for the award of a research fellowship.
DEVELOPMENT AND HISTORY OF JOURNAL IMPACT
FACTOR
DEFINITIONS :
In the 1970s, the impact factor of a journal was defined as the
average number of citations that articles from the journal received in all
journals, and it was believed to be a measure of the quality of the journal.
In 2004, Eugene Garfield stated that the term “Impact factor” has
gradually evolved to describe both journal and author impact. Journal Impact
factors generally involve relatively large populations of articles and citations.
In 2007, the impact factor was defined as an index based on the
frequency with which a journal’s articles are cited in scientific publications,
and thus as a marker of journal quality: the impact factor of a journal reflects
how often its articles are cited in the scientific literature.
In 2009, Egghe interpreted the impact factor of a journal as the average
of a number of independent and identically distributed random variables. Each
random variable represents the number of citations of one of the articles published
in the journal. Using the central limit theorem, Egghe’s interpretation implies that
the impact factor of a journal is a random variable that is (approximately)
normally distributed. Egghe also assumes that, for a given scientific field, each
journal in the field can be considered a random sample from the total population
of all articles in the field. This assumption implies that the IFs of all journals
in a field follow the same normal distribution.
HISTORY :
Dr. Eugene Garfield, the founder of the Institute for Scientific
Information and currently Chairman Emeritus of Thomson Scientific,
Philadelphia, first mentioned the idea of an impact factor in Science in 1955. With
support from the National Institutes of Health, the experimental Genetics Citation
Index was published, and that led to the 1961 publication of the Science Citation
Index. Garfield and Sher (1963) then created the journal impact factor to help
select additional source journals: the author citation index was simply re-sorted
into a journal citation index. Hence, initially a core group of large and
highly cited journals needed to be covered in the new Science Citation Index
(SCI). Consider that in 2004 the Journal of Biological Chemistry published
6,500 articles, whereas articles from the Proceedings of the National Academy of
Sciences were cited more than 300,000 times that year. Smaller journals might not
be selected if selection relied solely on publication count, and thus Garfield in
1972 created the Journal Impact Factor (JIF).
Even before the Journal Citation Reports (JCR) appeared, Garfield
(1972) sampled the 1969 Science Citation Index (SCI) to create the first published
ranking by impact factor. Today, the JCR includes every journal citation in more
than 6,000 journals, about 15 million citations from 1 million source items per year.
The precision of impact factors is questionable, but reporting them to 3 decimal
places reduces the number of journals with an identical impact rank. A journal’s
impact factor is based on two elements: the numerator, which is the number of
citations in the current year to items published in the previous two years, and the
denominator, which is the number of substantive articles and reviews published in
the same two years. The impact factor could just as easily be based on the
previous year’s articles alone, which would give even greater weight to rapidly
changing fields. An impact factor could also take into account longer periods of
citations and sources, but then the measure would be less current.
Calculation and determination of Impact Factor (IF) :
Impact factors are calculated yearly for those journals that are indexed in
Thomson Reuters’ Journal Citation Reports.
According to the Institute for Scientific Information (ISI), the impact factor
of a journal is calculated as follows :
For example, the 2010 impact factor of a journal would be calculated as
follows :
A = total cites in 2010
B = 2010 cites to articles published in 2008–09 (this is a subset of A)
C = number of articles published in 2008–09
D = B/C = 2010 impact factor
(The 2010 impact factor is actually published in 2011, because it cannot
be calculated until all of the 2010 publications have been received. The 2011
impact factor will be published in 2012.)
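The calculation above can be sketched in a few lines of Python; impact_factor is an illustrative helper name, and the citation and article counts used here are invented figures, not real journal data:

```python
def impact_factor(cites_to_recent_items, items_published):
    """Two-year journal impact factor for year Y.

    cites_to_recent_items: citations received in year Y to items the
        journal published in years Y-1 and Y-2 (term B above).
    items_published: substantive articles and reviews published in
        years Y-1 and Y-2 (term C above).
    """
    return cites_to_recent_items / items_published

# Illustrative 2010 figures: 450 citations in 2010 to the 200 articles
# published in 2008-09 give an impact factor of 2.25.
print(impact_factor(450, 200))  # 2.25
```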
The Science Citation Index (SCI) and its companion publication Journal
Citation Reports (JCR) are databases of the Institute for Scientific Information (ISI),
Philadelphia, USA. SCI and JCR are among the most sought-after worldwide information
sources (for both retrieval and dissemination) and are used for the evaluation and
comparison of scientific journals. All papers that appear in SCI journals are
considered to have an impact on scientific advancement.
The Open Access Impact Factor (OAIF) can be calculated as follows :
Merits and Limitations of Impact Factor (IF)
Merits :
1) The use of impact factor as an index of journal quality relies on the theory
that citation frequency accurately measures a journal’s importance to its
end users.
2) It provides quantitative tool for ranking, evaluating, categorizing and
comparing journals.
3) It eliminates some of the bias of raw citation counts, which favour large
journals over small ones, frequently issued journals over less frequently
issued ones, and older journals over newer ones; in each case the former
have a larger citable body of literature than smaller or younger journals.
4) There have been many innovative applications of journal impact factors.
The most common involve market research for publishers and others.
5) It provides librarians and researchers a tool for the management of library
journal collections.
6) In market research, the impact factor provides quantitative evidence for
editors and publishers for positioning their journals in relation to the
competition – especially others in the same subject category.
7) It may also serve advertisers interested in evaluating the potential of a
specific journal.
8) Perhaps the most important and recent use of the impact factor is in the
process of academic evaluation.
9) The impact factor can be used to provide a gross approximation of the
prestige of journals in which individuals have been published.
Limitations:
1) Review articles generally are cited more frequently than typical research
articles because they often serve as surrogates for earlier literature.
2) It is widely believed that method articles attract more citations than other
types of articles.
3) Even a small change in a journal affects its impact factor for two years
after the change is made.
4) Different specialties exhibit different ranges of peak impact.
5) It does not distinguish between letters, reviews, or original research.
6) It has inadequate international coverage.
7) The coverage is very uneven.
8) The practice of self-citation can be considered at many levels, including
author self-citation, journal self-citation and subject-category self-citation.
This may increase the impact factor.
9) Very few publications from languages other than English are included, and
very few journals from the less developed countries.
10) The number of citations to papers in a particular journal does not
directly measure the true quality of the journal, much less the scientific
merit of the papers within it.
11) It only reflects the intensity of publication or citation in that area and the
current popularity of that particular topic, along with the availability of
particular journals.
12) Journals with low circulation, regardless of the scientific merit of their
contents, will never obtain high impact factors in an absolute sense.
13) An editor of a journal may encourage authors to cite articles from that
journal in the papers they submit.
14) By merely counting the frequency of citations per article and disregarding
the prestige of the citing journals, the impact factor becomes merely a
metric of popularity, not of prestige.
15) Many authors choose where to submit their research papers on the basis
of a journal’s impact factor, ignoring national journals or submitting to
them only those articles that have been rejected elsewhere. This may
worsen the situation for local journals.
Significance of Impact Factor :
There are many applications of the journal impact factor. The most
important are :
Market research for publishers and others :- The impact factor
provides quantitative evidence for editors and publishers for
positioning their journals in relation to the competition, especially
others in the same subject category. JCR data may also serve
advertisers interested in evaluating the potential of a specific
journal.
The most important and recent use of impact factor is in the
process of academic evaluation. The impact factor can be used to
provide gross approximation of the prestige of journals in which
individuals have been published.
As a tool for the management of library journal collections, the impact
factor supplies the library administrator with information about
journals already in the collection and journals under consideration
for acquisition. These data must also be combined with cost and
circulation data to make rational decisions about journal purchases.
Use : The IF is used to compare different journals within a certain
field. The ISI Web of Knowledge indexes more than 11,000 science and social
science journals, and the results are widely (though not freely) available. In
addition, the IF is an objective measure.
Validity :
The IF is highly discipline-dependent. The percentage of total
citations occurring in the first two years after publication varies
highly among disciplines, from 1–3% in the mathematical
disciplines and physical sciences to 5–8% in the biological sciences.
The IF could not be reproduced in an independent audit.
The IF refers to the average number of citations per paper, but
this average is not a valid representation of the citation
distribution and is unfit for citation evaluation.
In the short term, especially in the case of low-impact-factor
journals, many of the citations to a given article are made in
papers written by the author(s) of the original article. This
means that counting citations may be independent of the real
“impact” of the work among investigators. Garfield, however,
maintains that this phenomenon hardly influences a journal’s
impact factor. Moreover, a study of author self-citations in the
diabetes literature found that the frequency of author self-citation
was not associated with the quality of publications.
Similarly, journal self-citation is common in journals dealing
with specialized topics that have high overlap in readership and
authors, and is not necessarily a sign of low quality or
manipulation.
CONTROVERSY OF IMPACT FACTOR - DOES THE
JOURNAL IMPACT FACTOR PROVIDE AN ACCURATE
MEASURE OF A JOURNAL’S IMPORTANCE?
In counting citations, only papers published in the past two years
are considered. In fact, many papers are appreciated only several years after
their publication and are then cited, and many other papers continue influencing
others’ research for a much longer period. Also, items such as news articles and
editorials that are regular features of some journals are not counted in the
denominator of the impact factor, but citations to those news articles may be
included in the numerator, inflating the impact factor of journals that publish
such articles. Due to these and several other limitations, the impact factor may
only poorly measure the quality of a journal.
Review articles are often much more highly cited than the average
original research paper, so the impact factor of review journals can be quite high.
In some fields, there have been reports of journals that have manipulated their
impact factors by such tactics as adding news articles, preferentially accepting
papers that are likely to raise the journal’s impact factor, or even asking
authors to add citations to other articles in the journal.
MEASURING QUALITY, PRODUCTIVITY AND
ACADEMIC IMPACT OF INDIVIDUAL SCHOLARS
The impact factor is often misused to evaluate the importance of an
individual publication or an individual researcher. This does not work well,
since a small number of publications are cited much more than the majority;
for example, about 90% of Nature’s 2004 impact factor was based on only a
quarter of its publications, so the importance of any one publication will
differ from, and on average be less than, the overall number. The impact factor
averages over all articles and thus underestimates the citations of the most
cited articles while exaggerating the number of citations of the majority of
articles. Consequently, the Higher Education Funding Council for England was
urged by the House of Commons Science and Technology Select Committee to
remind Research Assessment Exercise panels that they are obliged to assess the
quality of the content of individual articles, not the reputation of the journal in
which they are published. If the journal impact factor is used to assess the
academic performance of individuals (for the purposes of selection, promotion, etc.)
without bearing in mind that, owing to vast differences in the distribution of
impact factors across disciplines, they are not justifiably comparable, a
below-average scholar in one discipline may rank higher and be honoured
more than a better scholar in some other discipline.
Once we question the use of the IF for evaluating an individual
scholar’s research quality, his/her productivity and the academic impact made by
his/her research work, we must propose some other measure that may be a
befitting substitute for the journal impact factor. Such an index is the h-index
(Hirsch, 2005), proposed by Jorge E. Hirsch, a physicist at the University of
California, San Diego.
The Hirsch index (or h-index) is an index that attempts to measure
both the scientific productivity and the scientific impact of a scientist. The index
is based on the set of the scientist’s most cited papers and the number of citations
they have received in other researchers’ publications. As Hirsch defines it: a
scientist has index h if h of his or her N papers (N being the total number of
papers written by him/her) have at least h citations each, and the other (N - h)
papers have at most h citations each.
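Hirsch’s definition translates directly into a short computation; the sketch below uses an invented citation record purely for illustration:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank          # the top `rank` papers each have >= rank citations
        else:
            break
    return h

# A scholar whose five papers were cited 10, 8, 5, 4 and 3 times has
# h = 4: four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```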
Tol (2008) noted that the main shortcoming of the h-index is that it
ignores the number of citations in excess of h. Egghe (2006) and Jin (2006)
therefore introduced the g-index. Like the h-index, the g-index only counts
papers of a minimum quality, and a higher g-index means more and better papers.
Unlike the h-index, however, the g-index also increases with the number of
citations over the threshold.
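The g-index can be sketched in the same way, again with an invented citation record; note that this simple version caps g at the number of papers, which is one common convention:

```python
def g_index(citations):
    """Egghe's g-index: the largest g such that the g most cited
    papers together have at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c            # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

# Same record as in the h-index example: the cumulative sums are
# 10, 18, 23, 27, 30, so g = 5 (30 >= 25), while h was only 4 --
# the g-index rewards the citations in excess of h.
print(g_index([10, 8, 5, 4, 3]))  # 5
```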
CITATION IMPACT AND CITATION INDEX
Citation is the process of acknowledging or citing the author, year, title
and locus of publication (journal, book or other) of a source used in a published
work. Such citations can be counted as measures of the usage and impact of the
cited work; this is called citation analysis or bibliometrics. Among the measures
that have emerged from citation analysis are the citation counts for:
An individual article (how often it was cited)
An author (total citations or average citation count per article)
A journal (average citation count for the articles in the journal).
Many measures have been proposed, beyond simple citation
counts, to better quantify an individual scholar’s citation impact. The best-known
measures include the h-index and the g-index.
What is indexing ?
An index is typically an alphabetical list with references to where
the names, subjects, etc. are mentioned in a book. Besides this, the indexing of a
scientific paper includes information about where to find it, i.e., title, author(s),
name of the journal, year of publication, keywords and at times the references
cited in the paper. Extensive indexing involves enumeration of the references
cited in a particular article and is used to generate citation statistics. This, in
turn, forms the basis of the Science Citation Index (SCI) and Journal Citation Reports.
A citation index is an index of citations between publications,
allowing the user to easily establish which later documents cite which earlier
documents. The first citation indices were legal citators such as Shepard’s
Citations (1873). (Shepard’s Citations is a citator: a list of all the authorities
citing a particular case, statute or other legal authority.) In 1960, Eugene
Garfield’s Institute for Scientific Information (ISI) introduced the first citation
index for papers published in academic journals, starting with the Science Citation
Index (SCI) and later expanding to produce the Social Sciences Citation Index
(SSCI) and the Arts and Humanities Citation Index (AHCI). The first automated
citation indexing was done by CiteSeer in 1997.
Major Citation Indexing Services :
There are two publishers of general-purpose academic citation
indexes, available to libraries by subscription.
* ISI (now part of Thomson Scientific), which publishes the ISI
citation indexes in print and on compact disc. They are now generally accessed
through the web under the name Web of Science, which is in turn part of the group
of databases in the Web of Knowledge.
* Elsevier, which publishes Scopus, available online only, which
similarly combines subject searching with citation browsing and tracking in the
sciences and social sciences.
Citation Analysis :
While citation indexes were originally designed for information
retrieval purposes, they are increasingly used for bibliometrics and other studies
involving research evaluation. Citation data are also the basis of the popular
journal impact factor. Legal citation analysis is a citation analysis technique for
analyzing legal documents, which facilitates the understanding of interrelated
regulatory compliance documents by exploring the citations that connect provisions
to other provisions within the same document or between different documents.
Legal citation analysis uses a citation graph extracted from a regulatory document.
The Importance of using citation indexes :
Citation indexing is a way to look forward in the literature from the
starting point of a particular paper or group of papers. This is a different and
complementary approach to ordinary word-based literature searching, which
looks backward in the literature from the present time. For example, if we have an
excellent paper on a particular topic that was published in 2007, we can use the
Science Citation Index to find papers published after 2007 that cited it.
So, by searching for later papers citing our known paper, we can find more
documents on the same or a similar topic without using any keywords or subject
terms. We can also easily find out how many times our own paper has been cited.
Lately, newly established journal publishers have encouraged their authors to cite
their own papers from the publisher’s journals to increase the impact factor of
those journals, since journal publishing nowadays tends to be business oriented.
Citation searching allows us to move forward in time by finding newer papers that
cite earlier papers.
Journal Citation Reports provides a systematic, objective means to
evaluate the world’s leading research journals. It offers a unique perspective for
journal evaluation and comparison by accumulating and tabulating citation data
across specialties in the sciences, social sciences and technology fields. Journal
Citation Reports can show :
Most frequently cited journals in a field.
Hottest journals in a field.
Highest impact factors in a field.
Most published articles in a field.
Subject category data for benchmarking.
JCR comes in two editions :
JCR Science Edition : contains data from over 5,900
journals in 171 subject categories.
JCR Social Sciences Edition : contains data from over 1,700
journals in 55 subject categories.
Relation between High Impact and Citation Index Journals :
Citation indexes track the references that authors put in the
bibliographies of published papers. They provide a way to search for and analyse
the literature that is not possible through simple keyword or topical searching,
and they enable users to gather data on the impact of individual authors and
journals, as well as to assess particular areas of research activity and publication.
Citation indexing began in the 1950s and has long been dominated by the Institute
for Scientific Information (now Thomson Scientific), the creator and publisher of
the three citation indexes available today: the Science Citation Index (SCI), the
Social Sciences Citation Index (SSCI) and the Arts and Humanities Citation Index
(AHCI). Citation analysis has blossomed over the past four decades, and the field
now has its own International Society for Scientometrics and Informetrics. Lock
(1989) aptly named the application of bibliometrics to journal evaluation
“journalology”. The citation density is the average number of references cited per
source article, and it is significantly lower for mathematics journals than for
molecular biology journals.
The half-life (i.e. the number of retrospective years required to find 50% of the
cited references) is longer for physics journals. For some fields, the JCR two-year
period for calculating impact factors may not provide as complete a picture as
would a 5- or 10-year period. Nevertheless, when journals are studied by category,
the rankings based on 1-, 7- or 15-year impact factors do not differ significantly.
When journals are studied across fields, the ranking of physiology journals
improves significantly as the number of years increases, but the rankings within
the category do not change. There are exceptions to these generalities. Critics of
the JIF cite all sorts of anecdotal citation behaviour that do not represent average
practice. Referencing errors abound, but most are variants that do not affect
journal impact, since only variants in cited journal abbreviations matter in
calculating impact. The impact factors reported by the JCR tacitly imply that all
editorial items in BMJ, JAMA, The Lancet, the New England Journal of Medicine, etc.
can be neatly categorized, but such journals publish large numbers of items that
are not substantive in regard to citations. Correspondence, letters, commentaries,
perspectives, news stories, obituaries, editorials, interviews and tributes are not
included in the JCR’s denominator; however, they may be cited, especially in the
current year. For the most part they do not significantly affect impact
calculations. Nevertheless, since the numerator includes later citations to these
ephemera, some distortion will result, although only a small group of leading
medical journals is affected. The assignment of publication codes is based on
human judgement.
There is a widespread belief that the size of the scientific
community that a journal serves significantly affects impact factors. This
assumption overlooks the fact that, while more authors produce more citations,
those citations must be shared by a larger number of cited articles. Most articles
are not well cited, but some articles may have unusual cross-disciplinary impact.
There is a skewed distribution of citations in most fields; the 80/20 phenomenon
applies, in that 20% of articles may account for 80% of the citations. The key
determinants of impact factor are not the number of authors or articles in the
field but rather the citation density and the age of the literature cited. The size
of a field will, however, increase the number of super-cited papers. While a few
classic methodology papers exceed a high threshold of citation, thousands of other
methodology and review papers do not. Publishing mediocre review papers will not
necessarily boost a journal’s impact.
The skewness of citations is well known and repeated as a mantra
by critics of the impact factor. If manuscript refereeing or processing is delayed,
references to articles that are no longer within the JCR’s two-year impact window
will not be counted. Alternatively, the appearance of articles on the same subject
in the same issue may have an upward effect, as shown by Opthof. For greater
precision, it is preferable to conduct item-by-item journal audits so that any
differences in impact for different types of editorial items can be taken into
account (Garfield, 1986). Some editors would calculate impact solely on the basis
of their most cited papers so as to diminish their otherwise low impact factors.
Others would like to see rankings by geographic or language group because of the
SCI’s alleged English-language bias, even though the SCI covers European (largely
German, French and Spanish) medical journals. Other objections to impact
factors are related to the system used in the JCR to categorize journals. The
heuristic methods used by Thomson Scientific (formerly Thomson ISI) for
categorizing journals are by no means perfect, even though citation analysis
informs their decisions. The work by Pudovkin and Garfield (2002) was an attempt
to group journals objectively; they relied on two-way citational relationships
between journals to reduce the subjective influence of journal titles such as the
Journal of Experimental Medicine, one of the top five immunology journals
(Garfield, 1972).
BEST PRACTICES FOR ONLINE JOURNAL EDITORS
Editors of online journals should uphold the highest standards of
craft and/or scholarly thoroughness, accountability and fairness, as do editors of
traditional print journals. However, there are additional dimensions to electronic
publishing. The advice that follows takes into account concerns shared by all
scholarly journals, regardless of medium, as well as concerns specific to online
publications.
(i) Peer Review, Editorial Staff and Editorial Board : Peer
review, professional editors and a respected editorial board
are the foundations of learned journals. Many fine
publications publish good work via an editorial staff
(including contributing editors); however, the hallmark of a
learned journal, whether online or print, is peer review: a
process that typically entails careful evaluation (often, but
not necessarily, anonymous) of each submission by several
in-house and/or external experts in the field.
(ii) Affiliations : Electronic periodicals should clearly disclose
any affiliation with a publisher, university, scholarly or
professional society or other entity.
(iii) Mission statement, Submission guidelines, and Timely
review: Online journals, like their print counterparts, can
help both readers and potential contributors by developing
and prominently displaying a statement of mission.
(iv) Contract or Publication Agreement: Like any learned
journal, an online scholarly or creative periodical should
provide its authors with a contract or publication agreement
that spells out the rights and responsibilities of both the
publication and the author.
(v) Style : The editorial staff should adhere to a consistent style
of documentation that is appropriate to the journal’s
discipline (e.g. APA, MLA, Turabian, Chicago). In the case
of multimedia-intensive texts where the design encourages
non-standard documentation, the editorial staff may ask for
different styles appropriate to the individual text but still
accessible to readers.
(vii) Editing : Electronic periodicals should make sure the work
they publish is edited and proofread for proper grammar and
has no formatting, typographical or spelling errors. Poor
editorial work indicates a lesser degree of organizational
integrity.
(vii) Web Design: The principles of design in online journals are
significantly different from those in print journals.
Although some online journals model their design on the
look of print publications, others may foster and present
interactivity and provide opportunities for (i) authors to
design texts that make meaning from the advantages and
capacities of multimedia elements (linking, video etc.) on
the Web; (ii) viewers and readers to post their comments
for other readers and to communicate with the authors or
artists and with the editorial staff.
(viii) Timeliness and Regularity of Publication : Online journals
should state the periodicity of their publication (whether
annual, biannual, quarterly, bimonthly, monthly or other
schedules) and each issue should be clearly dated. Like
print publications, electronic journals should conform to
their announced schedule of publication.
(ix) Accessibility : Accessibility entails ease of use for different
constituencies.
(x) Availability : Online journals can determine how much or
how little of their content to make available to readers
through closed-, partial- or open-access models. If a
subscription (through registration and/or payment) is
required to view some or all content, journals should provide
clear subscription information to readers.
(xi) Indexing and Abstracting : Reputable learned journals
make provision for their scholarly and creative material to
be indexed by one or more indexing services, thus directing
interested researchers to the material contained in the
journal. An online journal should identify the indexing and
abstracting services that cover it.
(xii) ISSN : One distinguishing mark of a substantial journal is
its having secured an International Standard Serial Number
(ISSN). ISSNs are assigned to electronic publications when
they are serials or continuing resources. A journal’s ISSN
should be clearly displayed on its website.
(xiii) Archiving : Like print journals, online-only journals should
maintain a complete archive of past issues and prominently
indicate how readers can gain access to it. Viable forms of
digital archiving currently include the Stanford University
LOCKSS system and the Electronic Collection of Library and
Archives Canada, among others; mirrors of websites can
also serve as archival locations.
(xiv) Advertising : Electronic journals that derive funding from
advertising or from links to external sites should avoid
conflicts of interest.
TERMINOLOGIES ASSOCIATED WITH JIF
ISSN : ISSN stands for International Standard Serial Number.
It is a unique 8-digit number used to identify a print or electronic periodical
publication. The ISSN system was first drafted as an ISO international standard in
1975. The ISO subcommittee TC 46/SC 9 is responsible for the standard.
Code format :- The format of the ISSN is an 8-digit number, divided by
a hyphen into two 4-digit numbers. The last digit, which may be 0–9 or an X, is a
check digit. The ISSN of the journal Hearing Research, for example, is 0378-5955,
where the check digit is 5.
To calculate the check digit, the following algorithm may be used:
Calculate the sum of the first seven digits of the ISSN, each multiplied by its
position in the number counting from the right, i.e. by 8, 7, 6, 5, 4, 3 and 2
respectively:
0×8 + 3×7 + 7×6 + 8×5 + 5×4 + 9×3 + 5×2
= 0 + 21 + 42 + 40 + 20 + 27 + 10
= 160
The modulus 11 of this sum is then calculated by dividing the sum
by 11 and determining the remainder: 160 = 14 × 11 + 6, so the remainder is 6.
If there is no remainder, the check digit is 0; otherwise the
remainder is subtracted from 11 to give the check digit: 11 − 6 = 5, so 5 is the
check digit.
An uppercase X in the check digit position indicates a check digit
of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN,
each multiplied by its position in the number counting from the right (if the check
digit is X, add 10 to the sum instead). The modulus 11 of the sum must be 0.
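The calculate-and-confirm procedure above can be sketched in Python; this is a minimal illustration, and the function name is not part of any standard library:

```python
def issn_check_digit(first7):
    """Compute the ISSN check digit from the first seven digits (a string)."""
    # Weights run from 8 down to 2, left to right.
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    value = 11 - remainder
    return "X" if value == 10 else str(value)

# Hearing Research: first seven digits 0378-595 give check digit 5,
# so the full ISSN is 0378-5955.
print(issn_check_digit("0378595"))  # -> 5
```

The same weights, extended to 8 positions, can be used to confirm a full ISSN by checking that the weighted sum is divisible by 11.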
Code assignment : ISSN codes are assigned by a network of ISSN
National Centres, usually located at national libraries and coordinated by the
ISSN International Centre based in Paris. The International Centre is an
intergovernmental organization created in 1974 through an agreement between
UNESCO and the French government. The International Centre maintains a database of
all ISSNs assigned worldwide, the ISDS Register (International Serials Data
System), otherwise known as the ISSN Register. The ISSN Register contains ISSN
codes and descriptions for more than one million periodicals, with around 50,000
new records added yearly.
Comparison to other identifiers :
ISSN and ISBN codes are similar in concept, but ISBNs are
assigned to individual books. An ISBN might be assigned for a particular issue of a
periodical in addition to the ISSN code for the periodical as a whole. An ISSN,
unlike the ISBN code, is an anonymous identifier associated with a periodical
title, containing no information as to the publisher or its location. For this reason a
new ISSN is assigned to a periodical each time it undergoes a major title change.
Since the ISSN applies to an entire periodical, a new identifier, the
Serial Item and Contribution Identifier (SICI), was built on top of it to allow references to
specific volumes, articles or other identifiable components (like the table of
contents).
International Standard Numbers and Codes :
ISBN : ISO 2108
ISSN : ISO 3297
ISRC (International Standard Recording Code) : ISO 3901
ISNI (International Standard Name Identifier) : ISO 27729
DOI (Digital Object Identifier System) : Draft ISO 26324
ISTC (International Standard Text Code) : ISO 21047
ISBN : ISBN stands for International Standard Book Number.
The ISBN is a unique numeric commercial book identifier based
upon the nine-digit Standard Book Numbering (SBN) code created by Gordon Foster,
Emeritus Professor of Statistics at Trinity College, Dublin, for the booksellers and
stationers W. H. Smith and others in 1966.
The 10-digit ISBN format was developed by the International
Organization for Standardization (ISO) and was published in 1970 as international
standard ISO 2108. Currently the ISO's subcommittee TC 46/SC 9 is responsible
for the ISBN.
Since January 2007, ISBNs have contained 13 digits, a format that
is compatible with Bookland EAN-13s.
Occasionally, a book may appear without a printed ISBN if it is
printed privately or the author does not follow the usual ISBN procedure; however
this is usually later rectified.
Group identifier : The group identifier is a one- to five-digit number. The
single-digit group identifiers are: 0 or 1 for English-speaking countries; 2 for
French-speaking countries; 3 for German; 4 for Japan; 5 for Russian; and 7 for the
People's Republic of China. Longer identifiers include 957 and 986 for the Republic
of China (Taiwan) and 962 and 988 for Hong Kong.
Check digits : A check digit is a form of redundancy check used for
error detection, the decimal equivalent of a binary checksum. It consists of a
single digit computed from the other digits in the message.
ISBN-10 : The 2001 edition of the official manual of the
International ISBN Agency says that the ISBN-10 check digit, which is the last
digit of the ten-digit ISBN, must range from 0 to 10 (the symbol X is used instead
of 10) and must be such that the sum of all ten digits, each multiplied by its
integer weight, descending from 10 to 1, is a multiple of 11. Modular
arithmetic is convenient for calculating the check digit using modulus 11. Each of
the first nine digits of the ten-digit ISBN (excluding the check digit itself) is multiplied
by a number in a sequence from 10 to 2, and the remainder of the sum, with respect to
11, is computed. The resulting remainder, plus the check digit, must equal 11;
therefore the check digit is 11 minus the remainder of the sum of the products.
E.g., the check digit for an ISBN-10 beginning 0-306-40615-? is
calculated as follows:
0×10 + 3×9 + 0×8 + 6×7 + 4×6 + 0×5 + 6×4 + 1×3 + 5×2
= 0 + 27 + 0 + 42 + 24 + 0 + 24 + 3 + 10
= 130
130 mod 11 = 9
11 − 9 = 2
Thus, the check digit is 2, and the complete sequence is ISBN
0-306-40615-2.
Formally, the check digit x10 is calculated as:
x10 = 11 − ((10·x1 + 9·x2 + 8·x3 + 7·x4 + 6·x5 + 5·x6 + 4·x7 + 3·x8 + 2·x9) mod 11)
If the result is 11, a '0' should be substituted; if 10, an 'X' should
be used. The two most common errors in handling an ISBN are an altered digit or the
transposition of adjacent digits. Given that 11 is a prime number, the ISBN check digit
method ensures that these two errors will always be detected. However, if the
error occurs in the publishing house and goes undetected, the book will be issued
with an invalid ISBN.
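The ISBN-10 procedure above can be sketched in Python; the function name is illustrative:

```python
def isbn10_check_digit(first9):
    """Compute the ISBN-10 check digit from the first nine digits (a string)."""
    # Weights descend from 10 to 2, left to right.
    total = sum(int(d) * w for d, w in zip(first9, range(10, 1, -1)))
    value = (11 - total % 11) % 11   # a result of 11 becomes 0
    return "X" if value == 10 else str(value)

# The worked example: 0-306-40615-? yields check digit 2.
print(isbn10_check_digit("030640615"))  # -> 2
```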
ISBN-13 : The 2005 edition of the International ISBN Agency's
official manual, covering ISBNs issued from January 2007, describes how
the 13-digit ISBN check digit is calculated.
The calculation of an ISBN-13 check digit begins with the first
12 digits of the thirteen-digit ISBN (thus excluding the check digit itself). Each
digit, from left to right, is alternately multiplied by 1 or 3, and those products
are then summed modulo 10 to give a value ranging from 0 to 9. Subtracted from 10,
that leaves a result from 1 to 10. A zero replaces a ten, so in all cases a single
check digit results.
E.g., the ISBN-13 check digit of 978-0-306-40615-? is calculated
as follows:
9×1 + 7×3 + 8×1 + 0×3 + 3×1 + 0×3 + 6×1 + 4×3 + 0×1 + 6×3 + 1×1 + 5×3
= 9 + 21 + 8 + 0 + 3 + 0 + 6 + 12 + 0 + 18 + 1 + 15
= 93
93 mod 10 = 3
10 − 3 = 7
Thus the check digit is 7, and the complete sequence is ISBN
978-0-306-40615-7.
Formally, the ISBN-13 check digit x13 is calculated as:
x13 = (10 − ((x1 + 3·x2 + x3 + 3·x4 + x5 + 3·x6 + x7 + 3·x8 + x9 + 3·x10 + x11 + 3·x12) mod 10)) mod 10
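The alternating 1/3 weighting can likewise be sketched in Python; the function name is illustrative:

```python
def isbn13_check_digit(first12):
    """Compute the ISBN-13 check digit from the first twelve digits (a string)."""
    # Digits are alternately weighted 1, 3, 1, 3, ... from left to right.
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)  # a result of 10 becomes 0

# The worked example: 978-0-306-40615-? yields check digit 7.
print(isbn13_check_digit("978030640615"))  # -> 7
```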
h-index : The h-index is an index that attempts to measure both the
scientific productivity and the apparent scientific impact of a scientist. The index
is based on the set of the scientist's most cited papers and the number of citations
that they have received in other people's publications. The index can also be
applied to the productivity and impact of a group of scientists, such as a
department, university or country. The index was suggested by Jorge E. Hirsch, a
physicist at UCSD, as a tool for determining theoretical physicists' relative quality,
and is sometimes called the Hirsch index or Hirsch number.
A scientist has index h if h of his or her Np papers have at least h
citations each, and the other (Np − h) papers have at most h citations each. The
h-index can be determined manually using free internet databases such as Google
Scholar, or freeware such as Publish or Perish. Subscription-based databases such
as Scopus and Web of Knowledge provide automated calculators.
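Hirsch's definition translates directly into a few lines of Python; this is a minimal sketch, and the citation counts are purely illustrative:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank        # this paper still supports a larger h
        else:
            break           # papers are sorted, so no later paper can help
    return h

# Five papers with 10, 8, 5, 4 and 3 citations: four papers have >= 4 citations,
# but not five papers with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```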
Advantages : (i) The total number of papers does not account for the quality
of scientific publications, while the total number of citations can be disproportionately
affected by participation in a single publication of major influence.
(ii) The h-index is intended to measure simultaneously the quality
and sustainability of scientific output, as well as, to some extent, the diversity of
scientific research.
(iii) The h-index is much less affected by methodological papers:
papers proposing successful new techniques, methods or approximations can be
extremely highly cited.
Disadvantages : (i) The h-index does not account for the typical
number of citations in different fields. Different fields or journals traditionally
use different numbers of citations.
(ii) The h-index does not take into account self-citations.
(iii) The h-index does not account for the number of authors of a
paper. In the original paper, Hirsch suggested partitioning citations among co-
authors.
(iv) It does not account for singular successful publications.
Several recipes to correct for that have been proposed, such as the g-index, but
none has gained universal support.
g-index : The g-index is an index for quantifying the scientific
productivity of physicists and other scientists based on their publication record. It
was suggested in 2006 by Leo Egghe. The index is calculated based on the
distribution of citations received by a given researcher's publications.
Given a set of articles ranked in decreasing order of the number of
citations that they received, the g-index is the (unique) largest number such that
the top g articles together received at least g² citations, i.e. on average at least g
citations each.
The g-index has been characterized in terms of three natural
axioms by Woeginger (2008). The simplest of these axioms states that by moving
citations from weaker articles to stronger articles, one's research index should not
decrease.
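Egghe's definition can be sketched in Python as well. Note one simplification: this version caps g at the number of papers, whereas the original definition can exceed that by padding the list with zero-citation articles.

```python
def g_index(citations):
    """Largest g such that the top g papers together have at least g^2 citations.

    Simplified: g is capped at the number of papers supplied.
    """
    running_total, g = 0, 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

# Cumulative citations 10, 18, 23, 27, 30 meet the squares 1, 4, 9, 16, 25
# at every rank, so g = 5 (compare h = 4 for the same list).
print(g_index([10, 8, 5, 4, 3]))  # -> 5
```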
The h-b index is an extension of the h-index (which was suggested in 2005 by
Jorge E. Hirsch of the University of California, San Diego, to quantify the
scientific productivity of physicists and other scientists based on their publication
record). The h-b index, developed by Michael Banks of the Max Planck Institute
for Solid State Research in Germany, takes the index further by evaluating the
impact of compounds used in solid state physics and of scientific topics in general.
An important recent development in research on citation impact is
the discovery of universality: citation impact patterns that hold across different
disciplines in the sciences, social sciences and humanities.
Immediacy index :
The immediacy index measures how frequently the average article
from a journal is cited within the same year as publication. This number is useful
for evaluating journals that publish cutting-edge research.
Calculation : The immediacy index is calculated based on the
papers published in a journal in a single calendar year. E.g., the 2005 immediacy
index for a journal would be calculated as follows:
A = the number of times articles published in 2005 were cited in
indexed journals during 2005.
B = the number of articles, reviews, proceedings or notes published
in 2005.
2005 immediacy index = A / B
ISI excludes certain article types (such as news items,
correspondence and errata) from the denominator.
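The A/B calculation above is a single division; the figures in this sketch are hypothetical:

```python
def immediacy_index(same_year_citations, citable_items):
    """Immediacy index = A / B for a single calendar year.

    A = citations in the year of publication; B = citable items published
    that year (news items, correspondence and errata excluded).
    """
    return same_year_citations / citable_items

# Hypothetical journal: 150 same-year citations to 120 citable items.
print(immediacy_index(150, 120))  # -> 1.25
```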
Eigenfactor score and Article Influence score : The Eigenfactor
score is a rating of the total importance of a scientific journal. In a manner
reminiscent of Google's PageRank algorithm, journals are rated according to
the number of incoming citations, with citations from highly ranked journals
weighted to make a larger contribution to the Eigenfactor than those from
poorly ranked journals. As a measure of importance, the Eigenfactor score
scales with the size of a journal; all else equal, larger journals have larger
Eigenfactor scores. As such, Eigenfactor scores are not directly comparable to
impact factor scores, which are a measure of per-article prestige. To allow per-
article comparisons using the Eigenfactor approach, the Article Influence score
scales the Eigenfactor score by the number of articles published by the journal and thus
is directly comparable to the impact factor. Eigenfactor scores and Article
Influence scores are calculated by eigenfactor.org, where they can be freely
viewed. Eigenfactor scores are intended to give a measure of how likely a
journal is to be used, and are thought to reflect how frequently an average
researcher would access content from that journal. Eigenfactor scores are a
measure of a journal's importance and thus should not be used to evaluate
individual scientists.
Check digit : A check digit is a form of redundancy check used for
error detection, the decimal equivalent of a binary checksum. It consists of a
single digit computed from the other digits in the message. With a check digit, one
can detect simple errors in the input of a series of digits, such as a single mistyped
digit, or some permutations of two successive digits.
Design : Check digit algorithms are generally designed to capture
human transcription errors. In order of complexity, these include:
Single digit errors, such as 1 → 2.
Transposition errors, such as 12 → 21.
Twin errors, such as 11 → 22.
Jump transposition errors, such as 132 → 231.
Jump twin errors, such as 131 → 232.
Phonetic errors, such as 60 → 16 (sixty to sixteen).
Examples : ISBN-13 (in use since January 2007) is equal to the EAN-13
code found underneath a book's barcode. Its check digit is generated the same
way as the UPC, except that the even digits rather than the odd digits are
multiplied by 3.
EAN (GLN, GTIN and EAN numbers administered by GS1) :
EAN (European Article Number) check digits (administered by
GS1) are calculated by summing the even-position numbers, multiplying by 3,
and then adding the sum of the odd-position numbers. The final digit of the result is
subtracted from 10 to calculate the check digit (or left as is if already zero). A
GS1 check digit calculator and detailed documentation are online at GS1's website.
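This weighting scheme can be sketched as a validator that recomputes the check digit of a full 13-digit code. The function name is illustrative; the sample code is the ISBN-13 978-0-306-40615-7 worked through earlier, which is also a valid EAN-13:

```python
def ean13_is_valid(code):
    """Validate a 13-digit EAN by recomputing its check digit."""
    # Even positions (2nd, 4th, ...) of the first twelve digits carry weight 3.
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(code[:12]))
    return (10 - total % 10) % 10 == int(code[12])

print(ean13_is_valid("9780306406157"))  # -> True
print(ean13_is_valid("9780306406151"))  # -> False (wrong check digit)
```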
Other examples of check digits :
The tenth digit of the National Provider Identifier for the US
healthcare industry.
The Australian Tax File Number (based on modulus 11).
The International Securities Identifying Number (ISIN).
The Mayo Clinic patient identification number, which includes a trailing check
digit.
Checksum : A checksum or hash sum is a fixed-size datum
computed from an arbitrary block of digital data for the purpose of detecting
accidental errors that may have been introduced during its transmission or storage.
The integrity of the data can be checked at any later time by recomputing the
checksum and comparing it with the stored one. If the checksums do not match,
the data was almost certainly altered (either intentionally or unintentionally). A
procedure that yields the checksum from the data is called a checksum function or
checksum algorithm.
The goal of checksum algorithms is to detect limited and accidental
modification, such as corruption to stored data or errors in a communication
channel.
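As a toy illustration of the store, recompute and compare cycle, a bytewise additive checksum (far weaker than real-world choices such as CRC32) might look like:

```python
def simple_checksum(data):
    """A minimal additive checksum: sum of all bytes modulo 256."""
    return sum(data) % 256

payload = b"journal impact factor"
stored = simple_checksum(payload)   # computed before storage or transmission

# Later: recompute and compare; a mismatch signals accidental corruption.
print(simple_checksum(b"journal impact factor") == stored)  # -> True
print(simple_checksum(b"journal impact fact0r") == stored)  # -> False
```

A plain byte sum misses reordering errors, which is why practical systems prefer CRCs or cryptographic hashes.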
Error detection and correction : Error detection is the detection of
errors caused by noise or other impairments during transmission from the
transmitter to the receiver.
Error correction is the detection of errors and reconstruction of the
original, error-free data.
Error correction may generally be realized in two different ways:
Automatic Repeat reQuest (ARQ), sometimes referred to as
backward error correction: an error control technique
whereby an error detection scheme is combined with requests for
retransmission of erroneous data. Every block of data received is
checked using the error detection code, and if the check fails,
retransmission of the data is requested; this may be done repeatedly until the
data can be verified.
Forward Error Correction (FEC) : The sender encodes the data
using an error-correcting code (ECC) prior to transmission. The
additional information (redundancy) added by the code is used by
the receiver to recover the original data. In general, the
reconstructed data is what is deemed the most likely original data.
A repetition code is a coding scheme that repeats the bits across a
channel to achieve error-free communication.
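A three-fold repetition code with majority-vote decoding is the simplest FEC scheme; this sketch works on bit strings and can correct any single flipped bit per group:

```python
def encode_repetition(bits, n=3):
    """Repeat each bit n times before transmission."""
    return "".join(b * n for b in bits)

def decode_repetition(coded, n=3):
    """Recover each bit by majority vote over its group of n copies."""
    return "".join(
        "1" if coded[i:i + n].count("1") > n // 2 else "0"
        for i in range(0, len(coded), n)
    )

sent = encode_repetition("101")     # "111000111"
received = "110000111"              # one bit flipped in the first group
print(decode_repetition(received))  # -> 101
```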
Scopus : Scopus is a database of abstracts and citations for
scholarly journal articles. It covers nearly 18,000 titles from more than 5,000
international publishers, including coverage of 16,500 peer-reviewed journals in the
scientific, technical, medical and social sciences (including arts and humanities)
fields. It is owned by Elsevier and is provided on the web to subscribers.
Citator : In legal research, a citator is a citation index of legal
resources, one of the best known of which in the United States is Shepard's
Citations. Given a reference to a legal decision, a citator allows the researcher to
find newer documents which cite the original document, and thus to reconstruct
the judicial history of cases and statutes. Using a citator in this way is colloquially
referred to as "Shepardizing".
Legal citation : Legal citation is the practice of crediting and referring to
authoritative documents and sources. The most common sources of authority cited
are court decisions (cases), statutes, regulations, government documents, etc.