Page 1: InCites TM

InCites TM

[email protected]
http://researchanalytics.thomsonreuters.com/incites/

Page 2: InCites TM

InCites Overview

• Annual subscription to bibliometrics reports based on Web of Science
• 1 Research Performance Profile (RPP) report containing articles extracted from WoS with associated metrics
  – Thomson Reuters staff support building the dataset to client specifications
  – The dataset may be based on author, address, topic, or journal
• Entire Global Comparisons (GC) sets containing aggregated comparative statistics for fields, countries, and institutions
  – National
  – Institutional by region
  – Multiple subject schemas

Page 3: InCites TM

InCites Overview

• Web-based interface as well as data on demand
• Choices for delivery of a record set: FTP, MS Access
• Frequent updates: quarterly for RPP, annually for GC (spring)
• Access to TR Web Services: WS Premium, ResearcherID Services
• Data preparation support and training
• Ability to create custom reports, with sharing/saving of data
• Links to WoS records: limited view for non-subscribers, full view for subscribers
• Licensed right to export reports and statistics about the institution for display on the institutional web site for promotional or informational purposes
• Licensed right to load data into an institutional repository or local database for institutional business purposes

Page 4: InCites TM

CRITICAL ANSWERS TO PRACTICAL QUESTIONS

What is the published output of my institution in various disciplines of study over the past 10 years?

What impact did this research have? How frequently has it been cited, where, and by whom?

With which researchers and institutions is our faculty collaborating?

Which collaborations are most valuable, producing influential work – the greatest return on investment?

How does my institution compare to our peer institutions in volume and influence of published work in particular fields?

Which programs within our institution perform best in terms of research output, producing research that is the most influential compared to other research in the world within those particular disciplines?

Page 5: InCites TM

THE DATA


Page 6: InCites TM

WEB OF SCIENCE

• Selectivity and control of content: high, consistent standards
• 11,000+ journals and 716 million+ cited references
• Multidisciplinary: Science, Social Science, Arts & Humanities
• Depth: 100+ years, including cited references
• Consistency and reliability: ideal for research evaluation, e.g. field averages
• Unmatched expertise: 40+ years of citation analysis and research evaluation
• Conference proceedings: 12,000 conferences annually
• Funding acknowledgments
• The gold standard: used by over 3,200 institutions in more than 90 countries

Page 7: InCites TM

SOURCE OF DATA SET: WEB OF SCIENCE RECORDS


Page 8: InCites TM

THE METRICS

• Absolute counts
• Normalised metrics (for journal, document type, period, and category)
• Golden rule: compare like with like
• All document types included in RPP

Page 9: InCites TM

NO ALL-PURPOSE INDICATOR

A university may have many different purposes for evaluating its research performance, and each purpose calls for particular kinds of information.

Identify the question the results will help to answer, and collect the data accordingly.

Page 10: InCites TM

IS THIS A HIGH CITATION COUNT?


Page 11: InCites TM

CREATING A BENCHMARK – WHAT IS THE EXPECTED CITATION RATE FOR SIMILAR PAPERS?

Articles published in ‘Monthly Notices of the Royal Astronomical Society’ in 2007 have been cited 13.87 times on average. This is the expected count.

We compare the total citations received by a paper to what is expected:

215 (Journal Actual) / 13.87 (Journal Expected) = 15.5

The paper has been cited 15.5 times more than expected. We call this Journal Actual/Journal Expected.
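The Journal Actual/Expected calculation is plain division; a minimal sketch, using the figures from this slide (the function name is ours, not an InCites API):

```python
# Sketch of the Journal Actual/Expected ratio described above. The
# figures (215 actual cites, 13.87 expected) come from the slide; the
# function name is illustrative, not part of any InCites interface.

def journal_actual_expected(actual_cites: float, expected_cites: float) -> float:
    """Ratio of a paper's citations to the average for papers from the
    same journal, publication year, and document type."""
    return actual_cites / expected_cites

print(round(journal_actual_expected(215, 13.87), 1))  # 15.5
```

A ratio of 1.0 means the paper performs exactly as expected for its journal, year, and document type; values above 1.0 mean it outperforms that baseline.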

Page 12: InCites TM

PERCENTILE IN FIELD – HOW MANY PAPERS ARE IN THE TOP 1%, 5%, OR 10% OF THEIR RESPECTIVE FIELDS?

This is an example of the citation frequency distribution for a set of papers (by author, journal, institution, or subject category). The papers are ordered from least cited or uncited on the left (100%) to the most highly cited in the set on the right (0%), so each paper can be assigned a percentile within the set.

In any given set there are always a few highly cited papers (top 1%) and many papers with few or no citations (bottom of the distribution).

Only the document types article, note, and review are used to determine the percentile distribution, and only those same document types receive a percentile value.
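One plausible way to assign such percentiles is sketched below; the slide does not specify InCites' exact tie-handling, so this is one convention (0% = most cited, 100% = least cited), not necessarily the product's implementation:

```python
# Sketch of assigning citation percentiles within a set of papers from
# the same category, year, and document type. 0% = most cited,
# 100% = least cited; tied papers share a percentile. This is an
# illustrative convention, not necessarily InCites' exact method.

def citation_percentiles(cites: list[int]) -> list[float]:
    """For each paper, the share of papers in the set cited at least as
    often, so the top paper gets the lowest (best) percentile."""
    n = len(cites)
    return [100.0 * sum(c >= x for c in cites) / n for x in cites]

print(citation_percentiles([50, 20, 20, 5, 0]))  # [20.0, 60.0, 60.0, 80.0, 100.0]
```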

Page 13: InCites TM

RESEARCH PERFORMANCE PROFILES

• Executive Summary
• Source Articles Listing
• Author Ranking
• Summary Metrics
• Author Ranking with Self-Citation Analysis
• Collaborating Institutions
• Field Rankings
• Custom Reports – how to generate an author ranking for a particular field/category
• Citing Dataset (view citing authors, institutions, and journals)

Page 14: InCites TM

SUMMARY METRICS

Metric: % Cited vs. % Uncited
Measure: Percentage of papers in the dataset that have received at least one cite.
Identify: The amount of research in the dataset with no citation impact.

Metric: Mean Percentile
Measure: Average percentile for the set of papers in the dataset. A percentile is assigned to each paper within the set of papers from the same subject category, year, and document type, ordered from most cited (0%) to least cited (100%).
Identify: Average ranking of papers in the dataset – how well the papers perform compared to papers from the same category, year, and document type.

Metric: Average Cites per Document
Measure: Efficiency (or average impact) of the papers in the dataset.
Identify: Datasets with the highest/lowest average impact.

Metric: Mean Journal Actual/Expected Citations
Measure: Average ratio for papers in the dataset. The ratio relates the actual citations to each paper to what is expected for papers in the same journal, publication year, and document type.
Identify: Average performance of papers in the dataset compared to papers from the same journal, publication year, and document type.

Metric: Mean Category Actual/Expected Citations
Measure: Average ratio for papers in the dataset. The ratio relates the actual citations to each paper to what is expected for papers in the same category, publication year, and document type.
Identify: Average performance of papers in the dataset compared to papers from the same category, document type, and publication year.

Metric: Percentage of Articles Above/Below What Is Expected
Measure: 1% of papers are expected to fall in the top 1% percentile. A green bar indicates by what percentage the papers perform better than expected; a red bar indicates the percentage by which they perform worse than expected at a given percentile range.
Identify: How well the papers in the dataset perform at the percentile level.

Page 15: InCites TM

SOURCE ARTICLE LISTING METRICS

Metric: Times Cited
Measure: Total cites to the paper.
Identify: The highest-cited papers.

Metric: Second Generation Cites
Measure: Total cites to the papers citing a paper in the dataset.
Identify: The long-term impact of a paper.

Metric: Journal Expected Citations
Measure: Average Times Cited count for papers from the same journal, publication year, and document type.
Identify: Papers performing above or below what is expected compared to similar papers from the same journal and period.

Metric: Category Expected Citations
Measure: Average Times Cited count for papers from the same category, publication year, and document type.
Identify: Papers performing above or below what is expected compared to similar papers in a subject category from the same period.

Metric: Percentile in Subject Area
Measure: The percentile a paper is assigned within the set of papers from the same subject category, year, and document type, ordered from most cited (0%) to least cited (100%).
Identify: Papers ranking highest or lowest in their field for the same period.

Metric: Journal Impact Factor
Measure: Average cites in 2009 to papers published in the previous two years in a given journal.
Identify: Journals with high or low impact in 2009.
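The two-year Journal Impact Factor described above is a simple quotient; a hedged sketch with illustrative numbers (not real journal data, and not InCites code):

```python
# Sketch of the two-year Journal Impact Factor: e.g. the 2009 JIF
# divides 2009 citations to a journal's 2007-2008 items by the number
# of citable items the journal published in 2007-2008. The figures
# below are made up for illustration.

def impact_factor(cites_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Average citations in the JCR year to items from the previous
    two years."""
    return cites_to_prev_two_years / citable_items_prev_two_years

print(impact_factor(3000, 600))  # 5.0
```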

Page 16: InCites TM

AUTHOR RANKING

Metric: Times Cited
Measure: Total cites to an author's papers.
Identify: Authors with the highest/lowest total cites to their papers.

Metric: WoS Documents
Measure: Total number of papers by an author in the dataset.
Identify: Authors with the highest/lowest number of publications.

Metric: Average Cites per Document
Measure: Efficiency (or average impact) of an author's papers.
Identify: Authors with the highest/lowest average impact.

Metric: h-index
Measure: An author's research performance. Publications are ranked in descending order by Times Cited; the value of h is the number of papers (N) in the list that have N or more citations.
Identify: Authors with the highest impact and quantity of publications in a single indicator.

Metric: Journal Actual/Expected Citations
Measure: Average ratio for an author's papers. The ratio relates the actual citations to each paper to what is expected for papers in the same journal, publication year, and document type.
Identify: Authors whose papers perform above or below what is expected in their respective journals. Allows comparison of authors in different fields.

Metric: Category Actual/Expected Citations
Measure: Average ratio for an author's papers. The ratio relates the actual citations to each paper to what is expected for papers in the same category, publication year, and document type.
Identify: Authors whose papers perform above or below what is expected in their respective subject categories.

Metric: Average Percentile
Measure: Average percentile for the set of an author's papers. A percentile is assigned to each paper within the set of papers from the same subject category, year, and document type, ordered from most cited (0%) to least cited (100%).
Identify: Authors whose papers on average rank high or low with regard to total cites in their respective fields.
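The h-index definition used above can be sketched in a few lines; this is the standard construction of the indicator, not InCites code:

```python
# Sketch of the h-index: rank a set of papers by citations in
# descending order; h is the largest N such that the top N papers
# each have at least N citations.

def h_index(cites: list[int]) -> int:
    ranked = sorted(cites, reverse=True)
    h = 0
    for n, c in enumerate(ranked, start=1):
        if c >= n:
            h = n  # the top n papers all have >= n cites
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```

Here the top four papers each have at least 4 citations, but the fifth has fewer than 5, so h = 4.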

Page 17: InCites TM

COLLABORATION AND RESEARCH NETWORK

Metric: Times Cited
Measure: Total cites to the set of collaboration papers.
Identify: Institutions/countries with which the research has the most impact (cites).

Metric: WoS Documents
Measure: Total number of papers published in collaboration with an institution/country.
Identify: Institutions/countries with which your researchers collaborate the most.

Metric: Average Cites per Document
Measure: Efficiency (or average impact) of the papers.
Identify: Institutions/countries with which the research has the highest/lowest average impact.

Metric: h-index
Measure: Performance of a set of papers. Publications are ranked in descending order by Times Cited; the value of h is the number of papers (N) in the list that have N or more citations.
Identify: Institutions/countries with which the collaboration has the highest impact and quantity of publications, measured in a single indicator.

Metric: Journal Actual/Expected Citations
Measure: Average ratio for the collaboration papers. The ratio relates the actual citations to each paper to what is expected for papers in the same journal, publication year, and document type.
Identify: Collaborations with an institution or country whose papers perform above or below what is expected compared to similar papers in their respective journals – the collaborations with the best return on investment.

Metric: Category Actual/Expected Citations
Measure: Average ratio for the collaboration papers. The ratio relates the actual citations to each paper to what is expected for papers in the same category, publication year, and document type.
Identify: Collaborations with an institution or country whose papers perform above or below what is expected in their respective subject categories.

Metric: Average Percentile
Measure: Average percentile for the set of collaboration papers. A percentile is assigned to each paper within the set of papers from the same subject category, year, and document type, ordered from most cited (0%) to least cited (100%).
Identify: Collaborations whose papers on average rank high (0%) or low (100%) with regard to total cites in their respective fields.

Page 18: InCites TM

GLOBAL COMPARISONS

• Web of Science document types included:
  – Articles
  – Reviews

Page 19: InCites TM

Quick view of the entire dataset

Page 20: InCites TM

Article Level Metrics


Page 21: InCites TM

Thomson Reuters value added metrics

• Basic bibliographic information about the article (including the field)
• Number of citations
• The Journal Impact Factor from the latest edition of the Journal Citation Reports

Page 22: InCites TM

Thomson Reuters value added metrics

2nd-generation citation data: the articles that have cited the citing articles.

Page 23: InCites TM

Thomson Reuters value added metrics

Expected performance metrics. We calculate the number of citations a typical article would be expected to receive. This is calculated for each journal (JXC) and for each category (CXC); these metrics are also normalized for year and document type.

Page 24: InCites TM

Thomson Reuters value added metrics

Although it is not displayed on this screen, we also calculate the ratio between the actual and expected performance. This gives meaning and context to the citation counts and is a normalized performance measure.

JXC ratio: 157 / 45.09 = 3.48
CXC ratio: 157 / 3.66 = 42.90

Page 25: InCites TM

Thomson Reuters value added metrics

The percentile, compared against the set of documents in the same field and the same year. This paper is in the top 0.2% of all papers in “General & Internal Medicine” for the year 2007. The percentile is not calculated for all document types.

Page 26: InCites TM

We generate summary metrics based on totals and averages of all the articles in the dataset:

• Total citation counts, mean and median citations, 2nd-generation total citation counts, and mean 2nd-generation citation counts
• Mean Actual/Expected citation ratio
• Mean percentile
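The dataset-level totals and averages listed above are straightforward to compute; a minimal sketch using the standard library (the record layout and function name are our own illustration, not InCites output):

```python
# Sketch of dataset-level summary metrics: total, mean, and median
# citation counts over all articles in a dataset. Uses only the
# standard library; the function name is illustrative.

from statistics import mean, median

def summary_metrics(cites: list[int]) -> dict[str, float]:
    """Totals and averages over the citation counts of a dataset."""
    return {
        "total_cites": sum(cites),
        "mean_cites": mean(cites),
        "median_cites": median(cites),
    }

print(summary_metrics([12, 0, 3, 7, 3]))
```

The same helpers apply unchanged to 2nd-generation citation counts, since those are just another list of per-article totals.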

Page 27: InCites TM

Author reports with self-cites removed

Page 28: InCites TM

GLOBAL COMPARISONS


Page 29: InCites TM

COMPARE COUNTRY / TERRITORIES

Data is available in tabular format with all metrics in one location. Graphical summaries make the data easy to interpret. In this example we can see the citation impact of Sweden in selected fields compared to the world.

Page 30: InCites TM

COMPARE COUNTRY / TERRITORIES

Normalized metrics are included for better understanding and relevant comparisons. In this example you can see the citation impact of selected countries normalized to the world average.

Page 31: InCites TM

COMPARE COUNTRY / TERRITORIES

Various regional groupings, such as the EU or Asia Pacific, are included.

Page 32: InCites TM

COMPARE COUNTRY / TERRITORIES

Different category classification schemes are available: 250+ narrow categories from the Web of Science, 22 broad categories, and 56 OECD classifications (including 6 broad ones). The OECD classification scheme makes for easy integration of InCites data with OECD data, such as R&D spending. RAE 2008 Units of Assessment are included for RAE comparisons.

Page 33: InCites TM

HOW DOES OUR RESEARCH COMPARE TO OUR PEERS?

Focus on Nanoscience & Nanotechnology: the chart plots % of articles in the organization against citation impact.

Page 34: InCites TM

HOW DOES OUR RESEARCH COMPARE TO OUR PEERS?

Make comparisons with your peer institutions effectively and easily, using article counts normalized for field. In this example, normalized metrics demonstrate that Stanford Univ and Univ Sydney have significant output in the field of Economics & Business – information that would ordinarily be difficult to identify.

Page 35: InCites TM

AGGREGATE PERFORMANCE INDICATOR – DIFFERENT RESEARCH FOCUS

• Harvard’s research is heavily focused on the Life Sciences, which are generally highly cited fields

• Princeton’s research focus is more biased towards less cited subjects such as Space Science, Physics, Mathematics and Geosciences

• The Aggregate Performance Indicator takes into account these differences in the research focus of the university and the different characteristics of the subject areas to generate a single metric


Page 36: InCites TM

AGGREGATE PERFORMANCE INDICATOR

Here we can see various metrics, including the Aggregate Performance Indicator, for selected global institutions.

We can see that Harvard produces many more articles than Princeton and that its average citations per article is also higher.

However, as discussed on the previous slide, Harvard's research focuses heavily on highly cited fields such as the Life Sciences, which inflates its gross citation statistics. A more granular approach is required to understand performance in different fields.

The Aggregate Performance Indicator measures the relative performance and volume of research in each field and generates a single normalized metric, providing a more balanced way to compare the research performance of institutions.