
Measuring Impact of Scholarly Journals


M S Sridhar*
It is extremely difficult to operationally define and measure scientific impact. Being a multidimensional construct, no single indicator can adequately measure it. Nature (17 June 2010) went back to basics, polled its readers and analysed the responses, suggesting that there are mixed feelings about using metrics to assess the contribution of scientists. A clear, simple, easy and objective way of counting the publications of scientists (and their use/downloads and citations) may be required, but other hard-to-quantify (qualitative) aspects like teaching, mentoring, team building and service to the community cannot be overlooked. Unless consciously guarded against, numbers have a selective, sweeping quality. Britain is considered to be metrics-heavy in its assessments.

Journals, the prime vehicles of scholarly communication, are subjected to some kind of assessment to rate or rank them and to assign relative credits that measure scholarly impact, enabling individual authors and institutions to consolidate these credits according to the number of articles published in these journals over a given period. Such credits of authors and institutions are extensively used for relative assessment for funding, awards, promotion, etc. Broadly, there are three important bases for such rating of journals: i) citation-based rating, ii) opinion-based rating, and iii) usage-based rating. Obviously, opinion-based ranking is subjective, while the other two depend on some metrics, namely citation counts and usage counts. The citation and usage measures have led to several derived measures, particularly citation and usage network analysis measures. For example, eigenfactor.org uses a network theory algorithm (similar to Google's PageRank algorithm) to measure the impact of journals by considering how they are cited by other influential journals.

The impact factor (IF) provides the relative frequency with which the journal's average paper has been cited. It is defined as the ratio of all citations (in a year y) to papers published in the preceding x (usually 2) years to the total number of source items published in those years. As a scientometric tool, IF has been deployed in studies of social stratification in science and of the operation of the peer review system for funding scientific research. The widely prevalent citation-based measure IF has been strongly criticized for its limitations, such as self-citations, counting flaws with multiple authorship, homographs, synonyms, the limited types of sources covered, the implicit nature of citations, fluctuations and variations across disciplines and over time, incompleteness of the ISI database, the dominance of English as a scientific language, American as well as gender bias, negative citations, ceremonial citations, citing an authority without consulting the document, etc. Though IF is certainly a better measure than simple citation counts, as high-impact journals usually attract high-quality contributions and scientists do give top priority to high-impact journals to increase their visibility, prestige and influence among their peers, IF suffers from certain weaknesses (a small computational sketch of the two-year IF follows the list below):

A few highly cited articles significantly influence the impact factor; too many (90%) articles remain uncited or low-cited.

Authors and journals that publish review articles tend to have exaggerated citation counts and hence inflated impact factors.

IF does not take into account articles that were used but did not get cited.

IF fails to capture the long-term value or the real impact of many journals.

IF focuses on the popularity of the cited item, ignoring the prestige value of the citing one.
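
As an illustration of the two-year IF definition given above, the following minimal Python sketch computes IF for a given year. All counts are hypothetical illustrations, not real JCR data:

def impact_factor(citations_in_y_to_prev_2yrs, items_published_prev_2yrs):
    # IF(y) = citations received in year y to items published in the two
    # preceding years, divided by the number of source items published
    # in those two years.
    return citations_in_y_to_prev_2yrs / items_published_prev_2yrs

# Hypothetical journal: in 2009 and 2010 it published 120 + 130 source items,
# and in 2011 those items together received 500 citations.
if_2011 = impact_factor(citations_in_y_to_prev_2yrs=500,
                        items_published_prev_2yrs=120 + 130)
print(round(if_2011, 2))   # 2.0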

With the advent of the Internet, along with social networking and other Web tools, opinion-based collective rating (often of articles rather than journals) by the scholarly community as well as end-users, and the easy and accurate capture of usage data in terms of numbers of clicks and/or downloads, have provided two strong alternatives to citation-based counts. Opinion-based ranking, much like elections in a democracy, depends on the active and impartial participation of competent scholars and end-users. Many open access archives provide extensive statistics of peer ratings and download counts for journal articles. One example of post-publication peer review for rating journals is Faculty 1000 at http://f1000.com. Here, articles are evaluated by a Faculty Member, who accords a rating of 'Recommended', 'Must Read' or 'Exceptional', equivalent to values of 6, 8 or 10 respectively; these ratings are used to arrive at an F1000 Article Factor (FFa) for each article and to generate article rankings. The higher an article's FFa, the higher its consensus rating by the Faculty and therefore its ranking. (F1000 Journal Factors, a new ranking of journals based on FFa that also takes the size of the journal into account, is underway for each year from 2007 to 2010 and will list current rankings on a rolling basis, updated monthly.)
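
The exact formula behind the F1000 Article Factor is not reproduced here; purely as an illustrative sketch, the snippet below maps the three qualitative labels to their stated values (6, 8, 10) and aggregates them with a simple mean. The aggregation rule is an assumption for illustration only, not the published F1000 method:

# Map the F1000 qualitative ratings to the stated numeric values.
RATING_VALUES = {"Recommended": 6, "Must Read": 8, "Exceptional": 10}

def toy_article_factor(ratings):
    # Hypothetical aggregation (simple mean) of Faculty ratings for one
    # article; the real FFa formula may differ.
    values = [RATING_VALUES[r] for r in ratings]
    return sum(values) / len(values)

# An article rated by three Faculty Members:
print(toy_article_factor(["Recommended", "Must Read", "Exceptional"]))  # 8.0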

Citations, being a subset of the use of journal articles, depend much on the extraneous considerations (i.e., the limitations of citation counts) that lead an author to cite or not to cite. On the other hand, the hard and easily available usage data depend much on the operational definition of use and other assumptions about use itself. Use of any document, in turn, depends on accessibility, ease of use, perceived utility, perseverance of users, etc. However, download counts (as opposed to citation counts) from online publications, measured on a real-time basis, provide an early estimate of the probable citation impact of articles. There is a strong positive correlation between download counts and citation counts as well as IF, although the degree of correlation varies from one research field to another. Yet enough caution has to be taken to normalize the raw download data for the size and nature of the organization. There have been attempts to devise new measures like the Usage Impact Factor (UIF) and the Reading Factor (RF) on the lines of the citation impact factor, based on full-text downloads as use data. UIF is the probability that an article published in a journal (based on all articles published within a two-year period) is used in a particular year; note that this measure takes into account all the articles published during the two-year period and all downloads of those articles over one year. RF is the ratio between the number of electronic consultations of an individual journal and the mean number of electronic consultations of all the journals studied, i.e., RFj = Cj / (ΣC / N), where Cj is the number of e-consultations of journal j, ΣC is the total number of e-consultations of all journals, and N is the total number of journals in the study/database. Unlike UIF, RF can include the number of clicks (in addition to the number of downloads) under the definition of e-consultations (C). This measure tries to normalize use with respect to the number of articles published in a journal. However, it is desirable to normalize use data further to take care of variations in the number of potential users of a given journal. Above all, like citation counts, e-consultations are also affected by factors like the novelty effect, accidental clicks, robotic clicks, etc., and may not truly reflect the extent of use. However, the span and diversity of journals covered for e-consultation are much better than those in the JCR (Journal Citation Reports of ISI).
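
The two usage-based measures described above can be written down directly from their verbal definitions; in the Python sketch below all counts are invented for illustration:

def usage_impact_factor(downloads_in_y_of_prev_2yr_articles, articles_prev_2yrs):
    # UIF: downloads in a given year of all articles a journal published in
    # the preceding two-year window, divided by the number of articles
    # published in that window (mirrors the citation IF with usage data).
    return downloads_in_y_of_prev_2yr_articles / articles_prev_2yrs

def reading_factor(consultations_of_journal, consultations_all_journals, n_journals):
    # RFj = Cj / (sum(C) / N): e-consultations of journal j relative to the
    # mean e-consultations of the N journals in the study.
    mean_consultations = sum(consultations_all_journals) / n_journals
    return consultations_of_journal / mean_consultations

# Hypothetical figures:
print(usage_impact_factor(5000, 250))            # 20.0 downloads per article
all_c = [5000, 3000, 1000, 1000]                 # e-consultations of 4 journals
print(reading_factor(5000, all_c, len(all_c)))   # 2.0 (twice the mean)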

Bollen and others have made interesting observations from a principal component analysis of 39 network analysis measures of scholarly journals derived from both citation and usage statistics. They separated these measures along two dimensions, namely rapid vs. delayed and popularity vs. prestige. Firstly, present usage rates predict future citation rates, i.e., usage-based measures are rapid indicators of scientific impact, since log data accrues more rapidly than citations. Secondly, usage-based social network measures are stronger indicators of prestige than citation measures, while normalized citation measures such as IF, SJR and cites-per-document of journals indicate popularity. Thirdly, the set of usage measures is more strongly correlated than the set of citation measures, indicating greater reliability of usage data, possibly owing to the larger usage matrix compared to the citation matrix. Most surprisingly, the classical IF and SJR were found to rank 34th and 38th respectively among the 39 measures assessed.
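
A Bollen-style analysis can be approximated with a standard principal component analysis over a journals-by-measures matrix. The Python sketch below uses numpy with a small random matrix as a stand-in for real data; the matrix size and contents are placeholders, not the published study:

import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: 200 journals scored on 39 (hypothetical) impact measures.
X = rng.normal(size=(200, 39))

# Centre and standardize each measure, then run PCA via SVD.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)

explained = (S ** 2) / (S ** 2).sum()
loadings = Vt[:2]            # how each measure loads on the first two components
print(explained[:2])         # variance explained by the two leading dimensions
print(loadings.shape)        # (2, 39): e.g. rapid-vs-delayed, popularity-vs-prestige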

Indian Journals: Whatever the measure adopted, the issue of according a rating or rank to Indian journals is tricky and far from satisfactory, mainly because a large majority of them are neither covered for citation analysis nor available online, and hence dependable download counts are not comprehensively captured. Under these circumstances, adoption of a hybrid method of journal rating or ranking becomes inevitable. One such practice in vogue is that of the National Academy of Agricultural Sciences (NAAS, at http://naasindia.org/rating.html). It accords relevant weightage/marks to journals with the objective of generating numerical criteria to measure the publication output of agricultural scientists for the purpose of screening them for admission to the Fellowship of the Academy. Marks assigned to journals depend on IF wherever the IF of the journal is available (by mapping marks from 6.1 to 10.0 based on the IF of the journal), and for journals (mostly Indian journals) for which IF is not available, marks ranging from 1 to 6 are assigned depending on the rationalized grade of D, D+, C, C+, B or B+ accorded by the Journals Rating Committee of the Academy. Another interesting case of rating Indian journals is that of medknowpub.com, an Indian commercial publisher which publishes 148 health science journals both in print and online (the e-journals are accessible free) with a modified IF, and which claims a hit rate on par with world standards. In this case, citation scores from non-SCI journals have been added to the SCI-based IF score, modifying it in a somewhat tricky way.
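
The precise NAAS conversion table is not reproduced here; purely as a hypothetical sketch of the two-track scheme described above (IF-based marks in the 6.1-10.0 band, committee grades mapped to 1-6), one might encode it as follows. The scaling rule for IF is an assumption for illustration only:

# Hypothetical encoding of the two-track NAAS-style scoring; the actual
# conversion table of the Academy may differ.
GRADE_MARKS = {"D": 1, "D+": 2, "C": 3, "C+": 4, "B": 5, "B+": 6}

def journal_marks(impact_factor=None, grade=None):
    if impact_factor is not None:
        # Assumed illustrative rule: scale IF into the 6.1-10.0 band,
        # capping very high IFs at 10.0.
        return min(10.0, 6.1 + 0.5 * impact_factor)
    if grade is not None:
        return GRADE_MARKS[grade]       # rated Indian journals without an IF
    raise ValueError("journal needs either an IF or a committee grade")

print(journal_marks(impact_factor=2.4))   # 7.3
print(journal_marks(grade="B+"))          # 6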

In the years to come, many direct and indirect journal rating systems based on the above bases as well as combinatorial methods are likely to flourish on the Internet. However, libraries and institutions cannot rely on global usage metrics alone. It is local usage data, coupled with cost and alternate access/availability in cooperating libraries or consortia, that matters when ranking the journals subscribed or to be subscribed to by libraries. Interestingly, both downloads and citations correlate highly with the production of papers in a given institution.

Conclusion: Having seen a wide variety of measures of the impact of scholarly journals, one has to be careful in the application of these measures. It appears that using a range of measures is better than relying on a single measure. Usage measures are more easily prone to manipulation than citation measures. Further, usage data require normalizing with respect to the size, type, discipline, market size, etc. of the journal. Download data could also mislead if they reflect popular and wide use by students as compared to scholarly use by researchers. The majority of measures, including the classical IF, are more suitable for assessing journals than scientists. Above all, one has to take care of the merits and pitfalls of these measures before use.

* Former Head, Library and Documentation, ISRO Satellite Centre, Bangalore 560017.

Address: 1103, Mirle House, 19th B Main, J P Nagar 2nd Phase, Bangalore 560078; Ph: 26593312; Mobile: 9964063960; E-mail: mirlesridhar@gmail.com

