
Methods and Practices for Institutional Benchmarking based on Research Impact and Competitiveness: A Case Study of ShanghaiTech University



Introduction

University benchmarking and assessment has long been an active area of research and practice, with many approaches and rankings. However, universities differ in mission and size, resulting in varied emphases on research, teaching, or industrial/regional development. Even among research-oriented universities, some focus more on basic research, while others aim more at applications. Most general ranking systems cannot show the degree to which ShanghaiTech University (hereafter: ShanghaiTech), a small-scale research university with a mission for cutting-edge basic research in internationally competitive areas, fulfils its mission. These ranking systems fail to pinpoint the university's strengths and weaknesses, and are therefore of little use for development planning. Hence, a new, individualized, and differentiating assessment method is needed, with special emphasis on research impact and competitiveness.

The past decades witnessed a worldwide growth of research activities and an increasing demand for return on research investment. Considerable effort has been devoted to benchmarking research performance at the international, domestic, and intramural levels (Abramo & D'Angelo, 2011). Yet, the assessment of research activities is complex and contentious, and researchers, funders, and decision makers try to adapt by continually improving and expanding methods and indicators (James et al., 2015). However, measuring research performance remains a challenge all over the world (Huang et al., 2017). Traditional assessments are usually based on composite indices, taking many dimensions together, such as research investment (in labor force and in funding), output, quality, impact, social impact, etc. (Moed & Plume, 2011). Most indicators are calculated through bibliometric measurements such as publication and citation counts, patent counts, translational contracts, and even alternative metrics. Established citation databases, such as Web of Science (WoS) and Scopus, extended with specialized analytic tools such as InCites and SciVal, are the main sources providing commonly used indicators. Waltman (2016) presented an in-depth overview of the main bibliographic databases and indicators, and made a distinction between basic size-dependent and size-independent indicators. Size-dependent indicators, such as the total number of citations, the number of highly cited publications, and the h-index, provide an overall performance measurement; size-independent indicators, such as the average number of citations per publication and the proportion of highly cited publications, reflect the research quality of units across different sizes of the target entities (James et al., 2015; Waltman, 2016). Besides these basic indicators, variations and extended indicators such as the normalized citation impact and the number of documents in the first quartile of journals provide supplementary options for comparisons. These indicators and metrics have been widely applied to evaluate research entities (e.g. scientists, research groups, departments, institutions, research-intensive enterprises) (Ding et al., 2013; Ibrahim, 2018; Kumar & Pandit, 2018), with funding allocation, strategic planning, and government policy-making as their goals.
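As a minimal illustration of the distinction between size-dependent and size-independent indicators, the following Python sketch computes a few of the basic indicators mentioned above from a hypothetical list of per-paper citation counts; the data and the highly-cited threshold are invented for the example.

```python
# Minimal sketch (hypothetical data): basic size-dependent vs. size-independent indicators.
citations = [45, 30, 12, 9, 7, 4, 3, 1, 0, 0]  # citation counts of one unit's publications

def h_index(cites):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(cites, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

total_citations = sum(citations)                      # size-dependent
highly_cited = sum(c >= 30 for c in citations)        # size-dependent (threshold is illustrative)
mean_citations = total_citations / len(citations)     # size-independent
share_highly_cited = highly_cited / len(citations)    # size-independent

print(f"h-index = {h_index(citations)}")
print(f"total citations = {total_citations}, citations per paper = {mean_citations:.1f}")
print(f"highly cited papers = {highly_cited} ({share_highly_cited:.0%})")
```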

These approaches are used in university rankings to benchmark academic institutions. Modern rankings began in 1983 with the ranking conducted by US News and World Report (Lukman et al., 2010), and today more than 80 different university rankings exist (Jia, 2018). These rankings draw attention from a broad range of users such as prospective students or faculty, employers, government agencies, and institutions themselves. As a result, many universities list their rankings in promotional materials; some even set ranking targets in their strategic plans (The University of Manchester, 2015). Each ranking is based on a variety of indicators, many but not all of which are linked to research performance. Even for research activities, rankings bundle together multiple indicators that differ in nature and intended usage. For example, the Academic Ranking of World Universities (ARWU) includes three WoS-related indicators, namely highly cited researchers, papers published in Nature and Science, and total publications indexed in the Science Citation Index (SCI) and Social Science Citation Index (SSCI); each of these indicators accounts for 20% of the final score (ARWU, 2018). Research productivity (publications, 6%) and research impact (citations, 30%), both measured on Scopus, account for a total of 36% of the final weight in the THE World University Rankings (Times Higher Education, 2019). In the QS World University Rankings, the indicator of citations per faculty (total citations on Scopus) accounts for 20% of the overall score (QS, 2019). The Best Global Universities Rankings by U.S. News use several WoS/InCites indicators, including total publication counts (10%), publications among the 10% most cited (22.5%), international collaboration (10%), highly cited papers (10%), total citations (7.5%), and normalized citation impact (10%), adding up to 70% of the final score (U.S. News, 2018).

Although these and other ranking systems have continuously improved, they still face a number of intrinsic challenges. First, they do not differentiate institutions with different missions, sizes, and research emphases, leading to a bias toward large and more comprehensive ones. Second, many of them rely on composite indices, so that universities focusing on and excelling in only some aspects are disadvantaged in the overall rankings. Recently, though, some ranking systems have begun to provide faceted rankings, for example on research quality or translational capabilities (Shanghai Ranking, 2018). Third, many indicator-based schemes fail to provide the diagnostic power which institutions could use to improve their performance. Relying on limited types of documents or impact measures also affects the applicability of rankings when dealing with different types of institutions. For example, books and policy studies play an important role in the social sciences and humanities, but are not or only partially covered in the usual source databases, creating special problems with funders or assessment agencies in those fields. Some indicators are criticized for other reasons; for example, there is the well-known abuse of the journal impact factor (JIF) (Garfield, 1972), and disciplinary differences are often not accounted for. These and other problems contribute to tendencies to simply increase the number of publications or the range of research (Amin & Mabe, 2000). Efforts to correct some of these problems include the “Norwegian Model” designed in 2005 (Sivertsen, 2018). However, the Norwegian Model still tries to balance the number and the quality of publications at the same time, so it may not be suitable in all cases and for all purposes.

With the increasing application of scientometric methods, the literature on research indicators is now more inclined towards studying new indicators (Thelwall et al., 2015). However, over-reliance on single or composite indices, however good they may be, easily leads to biases and may hinder the development of research entities with specially focused missions (Vieira & Gomes, 2010). Furthermore, rankings and indicators can only present differences as numerical values, while the causes of these differences stay hidden. In addition, institutions have their own preponderant disciplines, so benchmarking the research performance of entire institutions while ignoring disciplinary or internal unit-based performance easily fails to serve institutions' diagnostic and planning purposes.

The San Francisco Declaration on Research Assessment (DORA) (ASCB, 2016) and the Leiden Manifesto (Hicks et al., 2015) discussed these problems and outlined good principles for research evaluation. In China, the General Office of the State Council issued the “Opinions on deepening project, talent and institutional evaluation reform”, emphasizing the establishment of a faceted assessment structure (Xinhuanet, 2018). Hence, flexible benchmarking methods geared to the specific missions and research emphases of individual and different institutions are needed. This paper describes the methods and practices ShanghaiTech employed for this purpose.

Methodology
Background

ShanghaiTech is a research university established jointly by the Shanghai Municipal Government and the Chinese Academy of Sciences (CAS) in 2013. It is set up as a small-scale, internationalized, first-class research institution aiming to tackle globally advanced and hard-fought scientific challenges. It focuses its research on a few selected subject fields in the physical sciences and technology, biological sciences and technology, and information sciences and technology, and does not try to cover complete disciplines. As of June 2019, it has fewer than 200 faculty members and enrolls no more than 500 undergraduate and 1,000 graduate students yearly. The university evaluates the research performance of itself, its schools and institutes, and its researchers not by numbers of papers or average citations, but by research quality, impact, competitiveness, and the power or potential to lead internationally. The common ranking schemes therefore do not serve its mission, and are even unfair to it. Consequently, ShanghaiTech has strived to design an individualized, mission-oriented assessment scheme to see how it is fulfilling its mission and how each of its subject areas is performing in this regard. The university also needs to be compared, on specifically chosen measurements, with the institutions it aims to surpass. It takes this seriously and invests in a continuous effort reporting directly to the president of ShanghaiTech University.

Principles

The following principles are established to guide the design of the scheme:

Individually designed to suit the mission of ShanghaiTech, not pretending to be widely applicable without individualized adaptation.

Focusing on research impact and competitiveness.

Striving for the highest possible diagnostic power.

Using different indicators to provide different analysis angles, instead of composite indexes.

Enabling drilling down into sub-units, subject areas and even individual researchers to help development planning and faculty recruitment.

Specifically comparing to a selected group of top research universities.

Being aware of the experimental nature and the limitations of doing so, and committed to improving it as the university develops.

Assessment questions

In this study, an evidence-based benchmarking effort is proposed and tested in practice, and future improvements are outlined. These efforts are guided by the following assessment questions:

What are the high-quality and high-impact research outputs, measured by commonly accepted indicators available from reliable data sources, and how do their values compare to those of other institutions?

What is the relative competitiveness of research quality and impact when comparing ShanghaiTech with benchmarking institutions?

The same question as question (2) but now studied at the disciplinary level.

The same question as question (2) but now studied at the area level.

The same question as question (2) but now at the level of individual units and researchers (the latter was carried out in another study not reported here).

Many further questions can be explored to diagnose in depth the how and why of the reported strengths and weaknesses. However, such explorations are better dealt with by research experts and decision-makers examining the data interactively together with information analysts, including data from outside the standard assessment databases.

Selection of assessment indicators

Evidence-based assessment relies on objective indicators; these indicators must serve the principles and assessment questions, following a chain of analysis from the mission, principles, assessment questions, to indicators, and then further analytic results or “assessment answers”.

The key underlying indicators are those representing research quality and impact, focusing on the number of high-quality papers and their citation impact. A set of indicators is selected to represent research quality and impact, such as the number of papers published in Q1 journals of the SCI source list, highly cited papers according to ESI, or papers published in Cell, Nature, and Science (CNS, a category often used in China). Research impact is measured using citation-based impact indicators, including the Category Normalized Citation Impact (CNCI) (Clarivate Analytics, 2018). Competitiveness is demonstrated by the percentages of high-quality and high-impact papers of each institution in the benchmarking line-up. Total numbers of publications are still used, but only as background reference. For some analyses, international and domestic medians are calculated and displayed for easy comparison with ShanghaiTech's values.
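For orientation, the CNCI of a single paper is its citation count divided by the expected (baseline) citation rate for papers of the same subject category, publication year, and document type, and an institution's CNCI is the mean over its papers. The sketch below illustrates this calculation with an invented baseline table; in practice the baselines come from InCites rather than from code like this.

```python
import pandas as pd

# Hedged sketch: Category Normalized Citation Impact (CNCI) with an invented baseline table.
# In practice the baselines are supplied by InCites; all numbers below are illustrative only.
papers = pd.DataFrame({
    "citations": [12, 3, 25, 0],
    "category":  ["Chemistry", "Chemistry", "Molecular Biology", "Physics"],
    "year":      [2015, 2017, 2014, 2016],
    "doc_type":  ["Article", "Article", "Review", "Article"],
})
baseline = {  # expected citations per (category, year, doc_type) -- invented values
    ("Chemistry", 2015, "Article"): 10.0,
    ("Chemistry", 2017, "Article"): 4.0,
    ("Molecular Biology", 2014, "Review"): 30.0,
    ("Physics", 2016, "Article"): 6.0,
}
papers["expected"] = [baseline[k] for k in zip(papers.category, papers.year, papers.doc_type)]
papers["cnci"] = papers["citations"] / papers["expected"]

print(papers[["citations", "expected", "cnci"]])
print("Institutional CNCI =", round(papers["cnci"].mean(), 2))
```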

It is fully recognized that publishing in high-quality journals does not necessarily make any single paper high quality, but when averaged across institutions or disciplines, the average CNCI achieved by an institution may give an interesting picture of its overall quality. It is also recognized that in some fields, especially computer science and IT, proceedings papers are important research outputs, and that in other fields patents need to be included to present a full picture of research performance. Because ShanghaiTech focuses on basic research, and because this study aims to evaluate ShanghaiTech's research in terms of basic research, the current effort employs only data related to research papers. Future iterations of the study intend to include data on proceedings papers and patents, though these present challenges as to what to include and how to evaluate their “impact”.

Table 1 gives the set of indicators following the framework in Figure 1. Nearly all of the analytic angles can be applied to different research units (schools or institutes, disciplines, or groups) to enable drill-down analysis. The size of the entities may vary, but the majority of the indicators employed are size-independent, enabling assessment of relative competitiveness.

Figure 1

Flow diagram of assessment process.

Table 1. Establishment of analytic dimensions and indicators for institutional benchmarking.

Dimension | Category | Indicator | Description | Data Source
Research Output | Publication counts | Number of publications | Total publications of research papers | InCites, SciVal
Research Impact | Counts of high quality publications | % Papers in Q1 journals | Percentage of publications in Q1 journals of the JIF quartiles | InCites
Research Impact | Counts of high quality publications | % Highly cited publications | Percentage of papers ranked in the top 1% by citations in a period of time | InCites, SciVal
Research Impact | Counts of high quality publications | % Hot publications | Percentage of papers ranked in the top 1‰ by citations in a period of time (articles and reviews) | ESI
Research Impact | Counts of high quality publications | % Publications in CNS | Percentage of papers published in Cell, Nature and Science | WoS, Scopus
Research Impact | Impact of citations | Times cited | Number of citations of the total publications | InCites, SciVal
Research Impact | Impact of citations | h-index | h-index of the set of publications | WoS
Research Impact | Impact of citations | Citation impact | Average (mean) number of citations per year | InCites, SciVal
Research Impact | Impact of citations | Normalized citation impact | Citation impact normalized for subject, year and document type | InCites, SciVal
Research Topics (content analysis) | Knowledge maps | Content analysis based on research topics | Comparison of the structure of maps constructed from keyword pairs (reflecting research topics) between ShanghaiTech and benchmarking institutions | WoS, Scopus
Research Topics (content analysis) | Contribution to research fronts | Publications as core papers in research fronts, and publishing year comparison | Number of publications as core papers in research fronts, and comparison between the publishing year of these papers and the average publishing year of all core papers in each research front | ESI
Research Connection | Collaboration | Main collaborators and research areas | Top collaborators with the most collaborative papers (containing co-authors), and the most collaborative research areas with each collaborator | WoS, Scopus
Selection of benchmarking institutions

As benchmarking is mission-oriented, the selection of benchmarking institutions is of the utmost importance. This selection must consider many characteristics of the targets, such as their nature, mission, scale, and research emphases. In line with ShanghaiTech's mission to be a first-class research institution aiming to solve globally advanced and hard-fought scientific challenges, internationally top research universities must be chosen for benchmarking. Since ShanghaiTech is a Chinese university that will inevitably also be compared to top research universities in the country, a few top Chinese universities that are internationally competitive are selected too. Another consideration is that ShanghaiTech is a brand-new and experimental university that is small-scale, internationalized, and selectively focused on certain research areas, so the selected institutions must be as comparable as possible. Finally, to limit the workload of evaluators, only a handful of institutions can be considered. A total of 8 international universities (Univ A–Univ H) were chosen from among the top 10 of the main worldwide rankings (e.g. U.S. News, ARWU), and 5 domestic universities (Univ I–Univ M) were selected from the top 5 Chinese institutions in the same rankings. Two newly established research-intensive universities were also added to the list (Univ N–Univ O). A research-area-guided selection strategy, targeting institutions excellent in individual research areas similar to those of ShanghaiTech but not necessarily ranked highly overall, was contemplated but is left to the second phase of this work due to workload considerations, although some on-request analyses of research competitiveness with selected niche institutions as benchmarking targets have been conducted for individual schools/institutes and even labs/teams.

Data and Results
Data sources

Since research at ShanghaiTech is mostly in scientific fields and only high-quality output is included in the assessment, the SCI-Expanded index of the WoS Core Collection was chosen as the data source, further limited to article and review papers and to ten research areas in the ESI schema, namely physics, chemistry, biology & biochemistry, materials science, engineering, computer science, molecular biology & genetics, immunology, microbiology, and neuroscience & behavior. The timespan covers the period from 2014 to the retrieval date in 2018 (September 21), coinciding with ShanghaiTech's development period. Some of the indicators can be obtained directly from the data sources, others are produced by InCites, and still others, such as the research content analysis producing subject-area knowledge maps, are produced with analytic and visualization tools such as VOSviewer and the Derwent Data Analyzer.

A large number of researchers from CAS institutes work as adjunct professors at ShanghaiTech and publish papers whose affiliations include ShanghaiTech. This makes it difficult to determine the contributions of the different types of faculty members. For this reason, three datasets of ShanghaiTech publications were constructed. SHTech-A contains all papers in which any co-author is affiliated with ShanghaiTech, covering the entire faculty, full-time or adjunct. SHTech-B is the subset of SHTech-A in which a ShanghaiTech author is the first or corresponding author. SHTech-C is the subset of SHTech-B in which that author is a full-time faculty member of the university.
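A hedged sketch of how the three nested datasets could be derived from an author-level table is shown below; the column names (paper_id, affiliation, role_first_corr, full_time) are hypothetical and would need to be prepared from the actual WoS/InCites exports.

```python
import pandas as pd

# Hedged sketch: building the SHTech-A/B/C paper sets from a hypothetical author-level table.
# Columns are illustrative; real WoS/InCites exports need preprocessing into this shape.
authors = pd.DataFrame({
    "paper_id":        ["p1", "p1", "p2", "p3", "p3"],
    "affiliation":     ["ShanghaiTech", "CAS", "ShanghaiTech", "ShanghaiTech", "Univ X"],
    "role_first_corr": [False, True, True, True, False],   # first or corresponding author?
    "full_time":       [True, False, False, True, True],   # full-time ShanghaiTech faculty?
})

shtech = authors[authors.affiliation == "ShanghaiTech"]

# SHTech-A: any co-author is affiliated with ShanghaiTech.
a_papers = set(shtech.paper_id)
# SHTech-B: a ShanghaiTech author is the first or corresponding author.
b_papers = set(shtech[shtech.role_first_corr].paper_id)
# SHTech-C: that first/corresponding ShanghaiTech author is full-time faculty.
c_papers = set(shtech[shtech.role_first_corr & shtech.full_time].paper_id)

print(sorted(a_papers), sorted(b_papers), sorted(c_papers))  # by construction C ⊆ B ⊆ A
```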

General research performance of ShanghaiTech compared with benchmarking institutions

First, the numbers of high-quality papers and high-impact papers, and their percentages of each institution's total output, were calculated and are presented in Table 2. Compared with the international leading universities, ShanghaiTech had lower percentages of high-quality papers, received fewer citations, and established less international collaboration. Yet, by many of the indicators, it performed better than most domestic universities. Interestingly, SHTech-C had higher percentages of highly cited papers and higher CNCI values, indicating that the full-time faculty performs quite well.

Table 2. Overall difference of research performance between ShanghaiTech and benchmarking institutions.

Region | Institution | Publications | % Documents in Q1 journals | % Highly cited papers | % Hot papers | % International collaboration | h-index | Citation impact | CNCI
International | Univ A | 19,508 | 63.50% | 3.60% | 0.20% | 69.30% | 160 | 14.7 | 2.03
International | Univ B | 6,898 | 65.00% | 4.90% | 0.50% | 50.60% | 116 | 16.9 | 2.41
International | Univ C | 19,034 | 64.80% | 5.40% | 0.40% | 51.50% | 190 | 17.8 | 2.47
International | Univ D | 19,834 | 62.50% | 3.60% | 0.30% | 68.60% | 156 | 13.9 | 2.01
International | Univ E | 22,013 | 66.00% | 5.70% | 0.50% | 55.90% | 198 | 17.9 | 2.54
International | Univ F | 35,249 | 64.80% | 4.70% | 0.40% | 55.60% | 221 | 17.4 | 2.39
International | Univ G | 14,252 | 63.30% | 2.80% | 0.20% | 66.40% | 112 | 12.5 | 1.82
International | Univ H | 16,885 | 65.30% | 4.40% | 0.30% | 52.90% | 165 | 17.1 | 2.15
Domestic | Univ I | 31,673 | 50.40% | 2.40% | 0.10% | 32.70% | 139 | 8.9 | 1.45
Domestic | Univ J | 20,278 | 52.00% | 2.30% | 0.20% | 33.00% | 128 | 9.9 | 1.42
Domestic | Univ K | 27,699 | 45.90% | 1.40% | 0.10% | 31.70% | 108 | 7.2 | 1.19
Domestic | Univ L | 15,927 | 48.50% | 1.60% | 0.00% | 31.30% | 102 | 9.0 | 1.24
Domestic | Univ M | 19,123 | 52.10% | 2.60% | 0.10% | 30.10% | 121 | 9.7 | 1.43
Domestic | Univ N | 32,958 | 47.60% | 1.60% | 0.10% | 18.30% | 123 | 7.3 | 1.21
Domestic | Univ O | 2,074 | 53.10% | 2.30% | 0.10% | 41.50% | 48 | 7.6 | 1.6
ShanghaiTech | SHTech-A | 1,555 | 55.00% | 3.80% | 0.10% | 41.00% | 48 | 9.0 | 1.75
ShanghaiTech | SHTech-B | 660 | 52.00% | 3.00% | 0.00% | 37.30% | 27 | 5.6 | 1.58
ShanghaiTech | SHTech-C | 225 | 50.70% | 5.30% | 0.00% | 46.70% | 21 | 6.2 | 1.9
Research performance of ShanghaiTech compared with benchmarking institutions at the discipline level

An analysis was conducted to see whether these differences change across research areas. The ratio of papers in each discipline to the total publications of each institution was calculated; knowing this ratio helps to put the numbers in perspective and avoids comparisons with institutions that do not really focus on the chosen discipline. The same indicators as in the overall comparison were employed to benchmark at the disciplinary level. International leading universities still had higher percentages of high-quality papers, higher citation counts, and higher CNCI. But in some disciplines, ShanghaiTech performed better than the top domestic institutions, and even came close to some international leading universities.
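A minimal sketch of the discipline-share calculation described above is given below, using invented per-paper records; real data would come from InCites exports tagged with ESI categories.

```python
import pandas as pd

# Hedged sketch: share of each institution's output falling in a given ESI discipline.
# Records are invented for illustration.
pubs = pd.DataFrame({
    "institution": ["Univ C"] * 4 + ["SHTech-A"] * 3,
    "esi_field":   ["Molecular Biology & Genetics", "Chemistry", "Molecular Biology & Genetics",
                    "Physics", "Molecular Biology & Genetics", "Chemistry", "Physics"],
})

totals = pubs.groupby("institution").size()
field = (pubs[pubs.esi_field == "Molecular Biology & Genetics"]
         .groupby("institution").size())
share = (field / totals * 100).round(2)
print(share)  # % of total publications in the selected discipline, per institution
```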

A representative discipline was selected to illustrate the approach. Table 3 gives the results for Molecular Biology & Genetics, an area in which almost all the benchmarking institutions have been active. Among the benchmarking universities, Univ C and Univ E had the best performance. We notice that the CNCI values for Univ C and SHTech-A are 2.71 and 1.64, respectively, but SHTech-C reaches 2.6, which is very close to Univ C and to the international median, and significantly higher than the domestic median. This indicates that ShanghaiTech's full-time professors in this research area are doing a better job, even compared with its own adjunct professors, who work in CAS institutes in the field and mostly have longer research experience.

Table 3. Disciplinary difference (Molecular Biology & Genetics) of research performance between ShanghaiTech and benchmarking institutions.

Region | Institution | % of total publications | % Documents in Q1 journals | % Highly cited papers | % Hot papers | % International collaboration | h-index | Citation impact | CNCI
International | Univ A | 12.44% | 70.90% | 3.70% | 0.41% | 74.20% | 90 | 23.5 | 2.5
International | Univ B | 5.55% | 72.85% | 7.00% | 0.78% | 38.90% | 55 | 29.1 | 2.75
International | Univ C | 14.88% | 69.98% | 5.30% | 0.35% | 51.70% | 111 | 25.7 | 2.71
International | Univ D | 13.77% | 70.90% | 4.80% | 0.33% | 73.40% | 104 | 23.3 | 2.52
International | Univ E | 12.97% | 76.91% | 10.40% | 0.54% | 57.80% | 148 | 40.7 | 4.14
International | Univ F | 22.79% | 72.08% | 5.80% | 0.29% | 59.00% | 178 | 28.2 | 2.81
International | Univ G | 6.41% | 68.13% | 2.60% | 0.11% | 75.00% | 58 | 17.9 | 1.98
International | Univ H | 8.92% | 69.39% | 4.90% | 0.00% | 47.10% | 88 | 29.2 | 2.55
International | Median | 12.70% | 70.90% | 5.10% | 0.34% | 58.40% | 97 | 27 | 2.63
Domestic | Univ I | 2.83% | 61.61% | 2.70% | 0.00% | 58.40% | 53 | 15.9 | 1.84
Domestic | Univ J | 8.38% | 55.39% | 1.20% | 0.00% | 46.40% | 56 | 12.6 | 1.35
Domestic | Univ K | 10.68% | 50.51% | 0.70% | 0.03% | 37.10% | 57 | 9.4 | 1.1
Domestic | Univ L | 14.93% | 53.26% | 0.50% | 0.04% | 30.40% | 55 | 9.2 | 1.08
Domestic | Univ M | 1.82% | 54.60% | 1.10% | 0.00% | 32.90% | 31 | 14.6 | 1.33
Domestic | Univ N | 4.91% | 55.69% | 1.50% | 0.00% | 37.10% | 51 | 9.2 | 1.21
Domestic | Univ O | 3.66% | 57.89% | 0.00% | 0.00% | 33.50% | 11 | 5.6 | 1.12
Domestic | Median | 4.91% | 55.39% | 1.10% | 0.00% | 37.10% | 53 | 9.4 | 1.21
All | Total median | 8.92% | 68.13% | 2.70% | 0.04% | 47.10% | 57 | 17.9 | 1.98
ShanghaiTech | SHTech-A | 14.92% | 71.55% | 2.20% | 0.00% | 72.40% | 23 | 9.4 | 1.64
ShanghaiTech | SHTech-B | 15.15% | 71.00% | 3.00% | 0.00% | 49.10% | 15 | 7.9 | 1.68
ShanghaiTech | SHTech-C | 10.00% | 63.63% | 10.00% | 0.00% | 45.00% | 7 | 10.6 | 2.6
Difference between ShanghaiTech and benchmarking institutions on research topics in each discipline

A content analysis was conducted to further explore the differences in research topics, using co-occurrence analysis of keywords. Here the aim was to compare ShanghaiTech with the strongest institution in the chosen field. The analysis produced a research topic map for ShanghaiTech and another for the institution with the highest CNCI in the discipline. In the WoS data, Keywords Plus terms, extracted from the titles of cited publications and footnotes, provide supplementary terms to the Author Keywords (Chen, 2017; Garfield, 1990). Based on these keywords and using the full counting method, the maps were generated with VOSviewer (version 1.6.10).
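A hedged sketch of the co-occurrence counting that underlies such maps is given below (full counting: each keyword pair occurring in a paper contributes one co-occurrence). The keyword lists are invented, and the actual maps in this study were produced with VOSviewer rather than with code like this.

```python
from itertools import combinations
from collections import Counter

# Hedged sketch: full-counting keyword co-occurrence, the basis of the VOSviewer maps.
# Keyword lists per paper are invented; real input merges Author Keywords and Keywords Plus.
papers = [
    ["gene expression", "dna methylation", "embryonic stem-cells"],
    ["crystal structure", "protein", "gene expression"],
    ["gene expression", "dna methylation"],
]

cooccurrence = Counter()
for kws in papers:
    for pair in combinations(sorted(set(kws)), 2):  # each pair counted once per paper
        cooccurrence[pair] += 1

for (kw1, kw2), n in cooccurrence.most_common(3):
    print(f"{kw1} -- {kw2}: {n}")
```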

Univ E was selected as the leading university in the area of molecular biology & genetics. Its keyword co-occurrence map shows six main clusters (indicated with colored nodes and links, Figure 2a). The themes of these clusters are: stem cells and genes (blue cluster; main keywords: gene expression, DNA methylation, human genome, embryonic stem-cells), genetic mutation (green cluster; main keywords: cell lung cancer, T cell, somatic mutations, copy-number alteration), cancer therapy mechanisms (pink cluster; main keywords: cancer, stem cell, drug delivery), proteins (sky blue cluster; main keywords: protein, saccharomyces-cerevisiae, crystal structure), genome research (red cluster; main keywords: cardiovascular disease, susceptibility loci, genetic architecture), and medical research (yellow cluster; main keywords: human brain, Alzheimer disease, skeletal muscle). The main co-occurring (linked) keywords of each cluster indicate the main research topics in each theme.

Figure 2

Maps of keywords of molecular biology & genetics of Univ E (left) and ShanghaiTech (right); mapped parameter: co-occurrence; unit of analysis: all keywords; threshold of (a) = 10, threshold of (b) = 2, full counting.

In the corresponding map for ShanghaiTech, there were also six major thematic clusters, formed around the following principal keywords: crystal structure, embryonic stem cell, gene expression, cell proliferation, cell lung cancer, and hippo pathway. These six clusters are strongly interlinked (Figure 2b). Compared to Univ E, ShanghaiTech appears to have narrower research coverage and sparser linkages, which might be one of the reasons for its relatively lower citations and impact.

The distributions themselves should be compared cautiously. Each institution has its own selected focuses in research topics, so the maps will naturally differ. Moreover, such a map does not in itself provide information on quality, impact, or leading capacity. Hence, further information, such as citation impact or the median publication year of each topic, may need to be added or superimposed on the respective maps for a thorough comparison. However, looking at the overlaps or gaps in topic distribution serves as a first step to guide attention to the actual research subjects and helps identify the strengths or uniqueness of an institution. It further alerts to missing trends or topics, and indicates for which topics new researchers are needed to fill critical gaps.

Another method to reveal the development of subjects is illustrated in Table 4. The most frequent keywords in the field of molecular biology & genetics across all benchmarking universities are listed, together with the year of first appearance of each keyword at any of these institutions during the period 2014–2018, indicated by √. Keywords which ShanghaiTech papers used for the first time in the same year as the benchmarking universities are indicated by a grey background. It was found that for a few keywords ShanghaiTech published papers almost at the same time as the other institutions, but for most keywords ShanghaiTech papers appeared later than those of others. However, the time range of the data, starting in 2014, complicates the interpretation, because we do not know whether there were occurrences before 2014 of those keywords ShanghaiTech appeared to use concurrently with others. A much longer time range is needed.

Table 4. The most frequent keywords of Molecular Biology & Genetics of benchmarking institutions and year of first appearance in ShanghaiTech's papers.

No. | Frequent keywords | 2014 | 2015 | 2016 | 2017 | 2018
1 | alzheimer disease | | | | |
2 | cancer cells | | | | |
3 | cell cycle | | | | |
4 | cell lung cancer | | | | |
5 | cell proliferation | | | | |
6 | colorectal-cancer | | | | |
7 | crystal structures | | | | |
8 | dna methylation | | | | |
9 | embryonic stem-cells | | | | |
10 | epithelial-mesenchymal transition | | | | |
11 | escherichia coli | | | | |
12 | gene expression | | | | |
13 | human genome | | | | |
14 | inflammation | | | | |
15 | lung cancer | | | | |
16 | mammalian-cells | | | | |
17 | mesenchymal stem cells | | | | |
18 | molecular mechanism | | | | |
19 | mouse model | | | | |
20 | nf-kappa-b | | | | |
21 | oxidative stress | | | | |
22 | pluripotent stem-cells | | | | |
23 | progenitor cells | | | | |
24 | saccharomyces-cerevisiae | | | | |
25 | signaling pathway | | | | |
26 | skeletal muscle | | | | |
27 | stem cell | | | | |
28 | susceptibility loci | | | | |
29 | tumor growth | | | | |
30 | tumor suppression | | | | |
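The first-appearance comparison behind Table 4 could be reproduced along the lines sketched below; the records are invented, and the real analysis used the 2014–2018 WoS data described above.

```python
import pandas as pd

# Hedged sketch: year of first appearance of each frequent keyword, per institution.
# Records are invented; real input pairs each paper's keywords with its publication year.
records = pd.DataFrame({
    "institution": ["Univ E", "Univ E", "SHTech", "SHTech"],
    "keyword":     ["gene expression", "stem cell", "gene expression", "stem cell"],
    "year":        [2014, 2015, 2014, 2017],
})

first_use = (records.groupby(["keyword", "institution"])["year"].min()
                     .unstack("institution"))
# lag = 0 means ShanghaiTech first used the keyword in the same year as the earliest benchmark.
first_use["lag"] = first_use["SHTech"] - first_use.drop(columns="SHTech").min(axis=1)
print(first_use)
```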
Degree of leading research: comparing ShanghaiTech and benchmarking institutions

Analyzing contributions to research fronts can distinguish whether an institution is leading or following. A research front (RF) is a cluster of co-cited core papers together with the group of current papers citing them. Based on the core papers associated with each RF in the Essential Science Indicators (ESI) database, the publication data of each university were mapped (by WoS ID). A total of 9,662 research fronts and their corresponding core papers were retrieved, and the citation counts of each institution's core papers were obtained through WoS. ESI provides the average publication year of the core papers in each RF, and to some extent it can be assumed that the earlier a paper is published within an RF, the more likely it is to be leading that research topic. Based on this, the publication year of each university's core papers in an RF was compared with the average publication year of that RF, recording whether papers were published earlier than, or no later than, the average.
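A hedged sketch of this comparison is given below; the research front identifiers, WoS IDs, and years are invented, and the real analysis relies on the ESI research front data and WoS matching described above.

```python
import pandas as pd

# Hedged sketch: flagging core papers published no later than / earlier than the
# average publication year of their research front. All values are invented.
fronts = pd.DataFrame({           # one row per ESI research front
    "rf_id": ["RF1", "RF2"],
    "avg_year": [2015.5, 2016.0],
})
core = pd.DataFrame({             # institution's papers matched (by WoS ID) to core papers
    "wos_id": ["WOS:1", "WOS:2", "WOS:3"],
    "rf_id":  ["RF1", "RF1", "RF2"],
    "year":   [2015, 2017, 2016],
})

merged = core.merge(fronts, on="rf_id")
merged["no_later"] = merged["year"] <= merged["avg_year"]
merged["earlier"] = merged["year"] < merged["avg_year"]

print(f"core papers: {len(merged)}")
print(f"no later than RF average: {merged.no_later.sum()} ({merged.no_later.mean():.1%})")
print(f"earlier than RF average:  {merged.earlier.sum()} ({merged.earlier.mean():.1%})")
```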

Generally, the international leading universities performed better than the domestic universities. Although ShanghaiTech and Univ O had a few papers among the core papers, and published some of them no later than the average publication year (Table 5), neither had papers published earlier than the average publication year, which shows that these two institutions are not yet leading in these research fronts.

Table 5. Number of papers of each institution as core papers in the research fronts.

Institution | Papers in research fronts | No later than average publication year (papers) | No later than average publication year (%) | Earlier than average publication year (papers) | Earlier than average publication year (%)
Univ A | 382 | 180 | 47.12% | 42 | 10.99%
Univ B | 221 | 85 | 38.46% | 15 | 6.79%
Univ C | 570 | 319 | 55.96% | 77 | 13.51%
Univ D | 399 | 192 | 48.12% | 40 | 10.03%
Univ E | 713 | 364 | 51.05% | 65 | 9.12%
Univ F | 921 | 470 | 51.03% | 86 | 9.34%
Univ G | 260 | 123 | 47.31% | 22 | 8.46%
Univ H | 436 | 234 | 53.67% | 53 | 12.16%
Univ I | 434 | 221 | 50.92% | 47 | 10.83%
Univ J | 275 | 148 | 53.82% | 23 | 8.36%
Univ K | 213 | 104 | 48.83% | 21 | 9.86%
Univ L | 134 | 61 | 45.52% | 10 | 7.46%
Univ M | 256 | 142 | 55.47% | 26 | 10.16%
Univ N | 272 | 131 | 48.16% | 27 | 9.93%
Univ O | 27 | 6 | 22.22% | 0 | 0.00%
SHTech-A | 32 | 13 | 40.63% | 0 | 0.00%
Research performance of the internal units in ShanghaiTech

In order to assess the research performance of ShanghaiTech's internal units (e.g. schools or Principal Investigators (PIs)), a graph of the number of publications, times cited, and CNCI of each PI was produced. The data are presented in Figure 3, with the X axis representing the number of publications of each PI and the Y axis the total number of citations, while the node radii reflect CNCI values and the node colors indicate the PIs' schools. This figure immediately benchmarks productivity and impact for each PI of each internal unit: those in the upper-right corner published more papers and gained more citations, while those in the lower-left corner published fewer papers and received fewer citations. Some PIs published only a few highly cited papers, while others published more papers with lower citation counts. Admittedly, this may be an overly simplified description of research productivity, but further analysis based on it may provide interesting insights.

Figure 3

Map of PI’s research productivity and competitiveness of ShanghaiTech.
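A chart of the kind shown in Figure 3 could be sketched as below; the PI names, school labels, and all values are invented for illustration and do not correspond to actual ShanghaiTech units or data.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hedged sketch of a PI-level chart: X = publications, Y = citations,
# bubble size ~ CNCI, color = school. All values and labels below are invented.
pis = pd.DataFrame({
    "pi":     ["PI-1", "PI-2", "PI-3", "PI-4"],
    "school": ["School A", "School B", "School C", "School A"],
    "pubs":   [12, 30, 8, 20],
    "cites":  [150, 420, 40, 600],
    "cnci":   [1.2, 1.8, 0.9, 2.6],
})
colors = {"School A": "tab:blue", "School B": "tab:green", "School C": "tab:orange"}

fig, ax = plt.subplots()
ax.scatter(pis["pubs"], pis["cites"],
           s=pis["cnci"] * 200,                 # node size encodes CNCI
           c=pis["school"].map(colors), alpha=0.6)
for _, row in pis.iterrows():
    ax.annotate(row["pi"], (row["pubs"], row["cites"]))
ax.set_xlabel("Number of publications")
ax.set_ylabel("Times cited")
ax.set_title("PI productivity vs. impact (bubble size ~ CNCI)")
plt.show()
```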

Discussion

This study is a preliminary exploration of a mission-oriented benchmarking effort. Different from common ranking methods, our approach focuses on research impact and competitiveness by comparing with top research universities. We do this at the university level, then at the level of disciplines, and further at the level of topics. The results were reported to the faculty and accepted by the university as one of the analytic angles in its self-assessment, complementary to international expert reviews, which remain the main evaluation approach. Improved studies using the same approach are commissioned by the university at yearly intervals. The basic approach and methodology can also be adapted by other universities.

Clarifying the principles and assessment questions is crucial, as different indicators and analytic methods must be selected accordingly. Traditional composite-index-based assessment is usually biased toward comprehensive institutions with large numbers of researchers and broad subject coverage (Borgman, 2015; Lopez-Illescas et al., 2009; Waltman & Eck, 2013), but our effort takes the nature and purpose of each institution into consideration and aims to design individual assessment schemes. The selection of benchmarking targets is also in line with this mission-oriented approach in the case of ShanghaiTech.

Multiple indicators are used to provide multiple angles and to avoid the bias caused by excessive reliance on a single category of indicators. We recall that Rousseau et al. (2018) noted that citation measurements may indicate, but are not equal to, research quality. Even though citation impact indicators are widely accepted, they can be influenced and biased by many factors, such as language and document type. Researchers, and whoever uses indicators in assessments, must be aware of the consequences of misusing them (Rousseau et al., 2018).

Faculty make-up, in the case of ShanghaiTech its large number of adjunct faculty, needs to be considered when comparing institutions with obvious differences. For this reason we defined multiple datasets according to employment status, so that the performance of each group can be revealed separately. However, we were unable to distinguish the corresponding categories of faculty at the benchmarking universities. We would argue that at these more established and bigger institutions adjunct faculty may not play as significant a role as at ShanghaiTech. Moreover, comparing against a bigger pool of researchers at the target institutions may not be a drawback, because it works against over-estimating one's own performance.

As a mission-oriented model, future improvements are needed to provide better diagnostic power and to help research planning. Top institutions in common composite rankings may not be the real leaders in the specific research areas ShanghaiTech focuses on or plans to develop; finding those niche leaders for research-area-based benchmarking may provide a better understanding of one's standing in the field. Using the ESI disciplinary categories is a choice of convenience only. Constructing disciplinary categories according to an individual institution's research focuses would produce more meaningful and diagnostically more helpful results, but with obvious challenges. Other quality or impact indicators may also be needed to avoid the influence of journal-centric ones. The topic distribution maps as used in this study are not yet able to reveal impact, degree of leading, or evolution. Limiting the data to research articles and to a single database creates its own intrinsic problems, especially for institutions with a strong engineering research focus and corresponding investment, like ShanghaiTech. We understand that any analytic result like the one presented here is only a beginning for further exploration. Consequently, new techniques and tools are needed to integrate the original data and results, and to drill down, expand, connect, fuse, or otherwise analyze the data, leading to reports that researchers and decision-makers can use to explore new questions stimulated by the results. The authors are planning further improvements in the second and future phases of benchmarking.
