Volume 5 (2020): Issue 3 (August 2020)
Journal Details
License
Format
Journal
eISSN
2543-683X
First Published
30 Mar 2017
Publication timeframe
4 times per year
Languages
English
access type Open Access

Evidence-based Nomenclature and Taxonomy of Research Impact Indicators

Published Online: 03 Jul 2020
Volume & Issue: Volume 5 (2020) - Issue 3 (August 2020)
Page range: 33 - 56
Received: 28 Jan 2020
Accepted: 20 May 2020
Abstract

Purpose

This study aims to classify research impact indicators based on their characteristics and scope. A concept of an evidence-based nomenclature of research impact (RI) indicators is introduced for the generalization and transformation of their scope.

Design/methodology/approach

Literature related to research impact assessment was collected and categorized into conceptual and applied case studies. One hundred and nineteen indicators were selected to prepare the classification and nomenclature. The nomenclature was developed based on the principle that "every indicator is a contextual-function to explain the impact". Every indicator was disintegrated into three parts, i.e. Function, Domain, and Target Area.

Findings

The main functions of research impact indicators express improvement (63%), recognition (23%), and creation/development (14%). The focus of research impact indicators in the literature leans towards the academic domain (59%), whereas the environment/sustainability domain is the least considered (4%). As a result, impact related to research aspects themselves is reported the most (29%). Other target areas include systems and services, methods and procedures, networking, planning, policy development, and economic aspects and commercialisation.

Research limitations

This research was applied to 119 research impact indicators. However, the inclusion of additional indicators may change the results.

Practical implications

The plausible effect of nomenclature is a better organization of indicators with appropriate tags of functions, domains, and target areas. This approach also provides a framework of indicator generalization and transformation. Therefore, similar indicators can be applied in other fields and target areas with modifications.

Originality/value

The development of a nomenclature for research impact indicators is a novel approach in scientometrics. It is developed along the same lines as in other scientific disciplines, such as biology and chemistry, where fundamental objects need to be classified according to common standards.

Introduction

Research Impact (RI) is a broad topic of scientometrics that supports the progress of science and monitors the influence of efforts made by governments, institutions, societies, programmes, and individual researchers. There are several documented and popular RI assessment methods developed by individuals and organisations for evaluating the research of a particular programme or for general purposes. This intent has created diversity in evaluation methods, frameworks, and scope. Some approaches focus only on impacts related to academic recognition and use, such as bibliometric measures. However, growing technology, educational networking, effective and targeted research strategies, and regular monitoring of RI are reducing the gap between research producers and consumers. As a result, the horizon of RI is expanding to cover other areas of impact, such as the economy, society, and the environment.

Many individuals and organizations have introduced measures and indicators for assessing RI. Nevertheless, due to diversity in the nature and scale of RI, no single method is considered robust and complete (Vinkler, 2010). Therefore, new measures and indicators are introduced from time to time according to the interest and available resources of the method designers (Canadian Academy of Health Sciences, 2009). Additionally, the higher availability of national and international funding for health sciences critically influences the science of RI assessment (Heller & de Melo-Martín, 2009). This means there are more indicators, measures, and frameworks for health-related research than for any other area of science. Consequently, there is a considerable gap in the generalizability and transformability of health-related efforts to the rest of science.

At large, this study aims to discover the evidence-based diversity of RI indicators and to develop a method accordingly. In this regard, a nomenclature of RI indicators is developed based on a divide-and-rule principle. Additionally, a taxonomical analysis is presented based on the primary components of the nomenclature. This effort is a step towards a robust and inclusive RI assessment method. The concept of this paper was initially presented at the 17th International Conference on Scientometrics and Informetrics (ISSI 2019) in the form of a poster (Arsalan, Mubin, & Al Mahmud, 2019).

Review of literature

In the broader understanding, metrics and indicators are different. This is clear from the semantics: an "indicator" indicates the research impact, whereas metrics measure the research impact. According to Vinkler (2010), an indicator should be read as a meaningful representative, which indicates the performance of a system as per its design objective. Metrics, on the other hand, provide additional quantitative information about the impact of a system (Lewison, 2003). However, the diversity of effects can make it problematic to measure all aspects quantitatively. Therefore, pragmatically, indicators have the potential to illustrate useful and broader impacts compared to metrics (Vinkler, 2010).

There are multiple reasons for diversity in research impact indicators. For instance, Bennett et al. (2016) explained ten criteria for making research impact indicators from a technical and contextual point of view. In this regard, indicators should be specific, validated, reliable, comparable, substantial, accessible, acceptable, appropriate, useable, and feasible. Consequently, diversity in criteria requires a range of indicators to fulfil the conditions in diversified contexts. The Canadian Academy of Health Sciences (2009) explained another reason for this diversity, i.e. the strategy of indicator selection. This strategy describes three basic principles.

The indicator should answer the specific question of evaluation

The indicator should satisfy the level of aggregation

The indicator should be read with other indicators to complement the strength of evaluation

In other words, every indicator explains the impact of research only in a minimal dimension, covers a very specific level of aggregation, and has a very limited power of defining the research impact (REF, 2012). As a result, we need a bundle of indicators that fulfils the strategic requirements of evaluators.

“Nomenclature” is a combination of two Latin words, viz. “nomen”, meaning “name”, and “calare”, meaning “to call”. It is a scientific process in any discipline of assigning names to essential components according to predefined rules and standards (Hayat, 2014). Generally, these rules are outlined in the form of a classification scheme; therefore, the classification system is highly significant for nomenclature. Longabaugh et al. (1983) introduced a problem-focused nomenclature in medical science, which is a coding system with a specific objective. They argued that the problem-focused approach provides better control for organization and problem management. A similar concept can be applied in any branch of science to organize objects with respect to a problem-focused classification system.

Classification and organization of research impact indicators are not new (Vinkler, 2010). However, a nomenclature or taxonomy approach, and hence global standardization, is missing. Every research impact assessment effort organizes its indicators distinctly according to technical and contextual requirements. Nonetheless, based on context, the classification schemes of indicators can be arranged into four groups.

Impact Categories and Domains

Impact Time and Pathways

Impact in Specific Dimension

Uncategorised

In many research impact assessment methods, the adopted organization of impact indicators is based on impact categories and domains. These methods are wide in scope and open to selecting indicators in any of their classes (Bernstein et al., 2006). The Payback Framework for assessing the impact of health research, developed by the Health Economics Research Group at Brunel University (Buxton & Hanney, 1996), is one of the classical methods falling under this group. It organizes the indicators into multi-dimensional categories including knowledge, research benefits, political and administrative benefits, health sector benefits, and broader economic benefits.

The second group, which follows impact time and pathways, is based on the concept of output and outcome. The difference between output and outcome was first explained by United Way of America (1996) in the form of logic modelling. This model explicitly defines inputs, processes, and outputs in the form of resources, activities, and products, respectively, whereas the outcome is a benefit to the population of interest. Weiss (2007) split the outcomes of health research into initial, intermediate, and long-term impacts. This time-bound approach represents a sequence or chain of effects. For instance, awareness of new research in the decision-making community is an initial outcome. That awareness can lead to a change in clinical practice as an intermediate outcome. Ultimately, the long-term outcome is the improvement in the health of patients.

The third approach is exclusive. Many organizations and individual researchers are keen to know the impact of research in only one area, in depth. One example is the monetary value approach presented by Deloitte Access Economics (2011), in which all indicators and measures relate solely to the economic impacts of research. Some other methods are organization-specific, where a scoring system is limited in scope and developed in a local context. These cannot be fitted into any of the above-mentioned structures; examples include The Wellcome Trust's Assessment Framework (Wellcome Trust, 2009), the Matrix Scoring System (Wiegers et al., 2015), and the Royal Netherlands Academy of Arts and Sciences approach (VSNU, KNAW, & NWO, 2009).

Although the organization of indicators within a research impact framework has been a mandatory part of every evaluation method, there is still a need to organize indicators based on criteria and rules. A classic example of diversity and heterogeneity can be seen in REF (2012), where more than 100 indicators are applied based on subject domains and target areas of socio-economic interest. There remains a need for a mechanism by which these indicators can be generalized and transformed on taxonomical structures.

Method
Collection of research impact indicators

We systematically explored the literature databases, including Scopus, WebMD, ACM DL, IEEE Xplore, Web of Science and Google Scholar to collect research articles providing RI assessment indicators and methods. In many cases, organizations published the frameworks and guidelines in the form of technical reports; therefore, grey literature was also considered.

Multiple combinations of literature-searching keywords were used along with their synonyms. These included, but were not limited to, “research impact”, “research productivity”, “research quality”, “research impact indicators”, “research impact assessment”, “research impact assessment method”, “research impact assessment framework”, “scientometric indicators”, “bibliometric indicators”, “economic indicators”, “social indicators”, and “environmental indicators”. The purpose of using combinations of these keywords was to identify theoretical or applied studies related to research impact assessment. In theoretical or conceptual studies, we found the constructs and mechanisms of research impact assessment methods, whereas applied studies demonstrated assessment methods in the form of case studies. We also found some review articles, which provided comparisons of different RI assessment approaches. However, in this study, we mainly focused on the preparation of RI indicators. Because multiple combinations of keywords and databases were used, we found significant repetition of the same studies, which we removed with the help of EndNote software. In this study, we extracted indicators from the conceptual studies and used NVivo 12 software for annotation and coding. For deciphering the nomenclature, indicators were disintegrated based on their lexical and conceptual structures, as discussed in the Results section. To improve the coding, inter-coder reliability was applied to 10% of the data, and conflicts were resolved through discussion.
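The inter-coder reliability step can be sketched as follows. This is a minimal illustration using Cohen's kappa on hypothetical function codes; the coder labels are invented for the example, and the paper does not state which agreement statistic the authors used.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # expected agreement if both coders labelled independently at their own rates
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical function codes assigned by two coders to a 10% sample
coder1 = ["improvement", "creation", "recognition", "improvement", "creation"]
coder2 = ["improvement", "creation", "improvement", "improvement", "creation"]
print(round(cohens_kappa(coder1, coder2), 2))  # → 0.67
```

Disagreements (here, the third indicator) would then be resolved by discussion, as the authors describe.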

The cognitive basis of the nomenclature defined in this study is the principle that “every indicator is a contextual-function to explain the impact”. The primary constructs of an indicator are function and context. Function refers to a “correspondence”, “dependence relation”, “rule”, “operation”, “formula” or “representation”, as defined by Vinner and Dreyfus (1989). It explains the relationship between the two domains “research” and “impact”. In other words, impact (y) is a function of research (x), i.e. y = f(x). At large, in scientometric understanding, the functional operation can be “improvement”, “recognition”, “reduction”, “replacement” etc. (see Table 1 for examples). An indicator is a subjective measure of a system-dependent phenomenon, which is always described in its contextual understanding by a system designer (Vinkler, 2010). Therefore, an indicator's function is always applied in a specific context. For instance, in the indicator “improvement in patient care system”, the patient care system represents the context of the healthcare system, and it is critically important for researchers, funders, institutes and support organisations related to the health sciences (Trochim et al., 2011).

Structure of Indicator (I) = F + C

where F = Function and C = Context

and C = t + d

where t = target area and d = impact domain
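The decomposition I = F + C, with C = t + d, can be expressed as a small data model. This is a minimal sketch with hypothetical field values, assuming each indicator carries exactly one function, one target area, and one domain.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    function: str  # F: e.g. "improvement", "creation", "recognition"
    target: str    # t: contextual target area, e.g. "patient care system"
    domain: str    # d: contextual domain, e.g. "healthcare"

    @property
    def context(self):
        # C = t + d
        return (self.target, self.domain)

    def __str__(self):
        # I = F + C, rendered as indicator text
        return f"{self.function} in {self.target} ({self.domain})"

print(Indicator("improvement", "patient care system", "healthcare"))
# → improvement in patient care system (healthcare)
```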

Nomenclature of Indicator with Examples.

Functions (F)
Improvement / Addition / Reduction: This function explains the addition or enhancement of an existing phenomenon in quantitative or qualitative form. (Example: Improvement in economic gains, such as increased employment or health cost cuts (Weiss, 2007))
Creation: This function focuses on creativity in the form of the development of new knowledge, theory, technique, method, technology, approach, opportunity or workflow. (Example: Creation of prevention methods for clinical practice (Trochim et al., 2011))
Recognition: This function explains the recognition of effort as outstanding quality by peers or experts, such as through awards, promotions, meritorious selection and work showcasing. The recognition can be of the research, the researcher or the research institute. (Example: Receiving an award on research (Kuruvilla et al., 2006))
Obsoleting / Replacing: This function elaborates a policy, law or regulation to obsolete or disuse existing phenomena in order to overcome future negative impacts. (Example: Change in law to obsolete the existing method of drug approval (Maliha, 2018))
Context (C)
Target (t): Contextual targets in research impact science include knowledge, services, policies, laws, guidelines, systems, technologies, procedures, methods, frameworks, workflows, publications, patents, products, stakeholders, citations, literature gaps, intellectual challenges, scholarly issues, relationships, collaborations, and networks. These are the key areas but are usually partial in contextual understanding.
Domain (d): The contextual domain is the main area or field of interest of the indicator system designer, such as health, education, economy, environment, academia, medical science, chemistry, history, or multidisciplinary. The main body of knowledge and the elaboration of indicators always come from the domain language. The domain is the main component of the indicator, which specializes the context and application of the indicator. However, the level of the domain is subject to the interest and perspective of the impact evaluator.
Results and discussion
Search outcome and identification of indicators

The literature search yielded more than one thousand studies (1,152) in which research impact was published in the form of theoretical papers, case studies and review articles. After excluding studies in which research impact was assessed in case studies using a method developed elsewhere, only 36 conceptual studies remained. In the conceptual studies, we found more than 500 research impact indicators, from which we selected 119 indicators for preparing the nomenclature (see Appendix 1).

Nomenclature

In many cases, an indicator is self-explanatory and well written in a proper construct-based structure, such as “Development of mitigation methods for reducing environmental hazards and losses from natural disasters” (Grant et al., 2010). However, similar to an algebraic expression, the constructs are sometimes obscured but well understood by users. For instance, in “Number of citations”, the Function and the contextual domain are missing, but it is well recognized as “Increased number of bibliometric citations”, where the Function is addition, the contextual target is citations, and the domain is bibliometrics.

This contextual nomenclature of indicators allows focusing on context and function irrespective of the choice of words and the lexical structure of the indicator. Additionally, it strengthens the idea of contextual generalisability, which is very helpful in extending the applications and scope of the indicators. For example, consider “use of research in the development of medical technology” (where Function = development/creation, Contextual Target = technology, and Contextual Domain = healthcare). This indicator can be generalised over a variable domain as “use of research in the development of technology” (where Function = development/creation, Contextual Target = technology, and Contextual Domain = variable [generalized]).
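The generalization and transformation idea above can be sketched programmatically. The helper names (`generalize`, `transform`) and the dictionary representation are illustrative assumptions, not part of the paper's method: a construct is made variable by blanking it, and re-instantiated with a new value to move the indicator to another field.

```python
def generalize(indicator, construct="domain"):
    """Replace one construct with a variable slot (None = generalized)."""
    out = dict(indicator)
    out[construct] = None
    return out

def transform(indicator, **constructs):
    """Instantiate a (generalized) indicator in a new context."""
    return {**indicator, **constructs}

medical = {"function": "creation", "target": "technology", "domain": "healthcare"}
general = generalize(medical)                      # domain becomes variable
applied = transform(general, domain="agriculture")  # re-instantiated elsewhere
print(applied["domain"])  # → agriculture
```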

Taxonomical analysis

Among the analyzed indicators, most are functionally related to improvements in the current state of affairs (63%), mainly focused on future research, services and methods (Figure 1). However, recognition of research (23%), in the form of bibliometrics, rewards and other citations, is also considerably highlighted in the literature-based list of indicators. Creation and development (14%) are also prevailing influences of research, reflected in indicators mentioning the creation of new knowledge, techniques, research teams, drugs etc. More than half (59%) of the indicators attempt to explore impact in the academic domain (Figure 2), e.g. where and how is the research recognised? What knowledge, methods and collaborations are formed? What challenges, issues and gaps are addressed? Knowledge domains related to social systems and services are second in coverage (26%), primarily focusing on the healthcare, education and justice systems. Economic policies and services also have a good share (11%) of the literature-based indicators. Although the impact of research on improving the environment and sustainability has also emerged in various indicators during the last two decades, its representation is quite low.

Figure 1

Evidence-based taxonomical characteristics of indicators, (A) Scale of indicators, (B) Complexity of indicators, (C) Functions of indicators, (D) Domains of indicators, and (E) Target areas of indicators.

Figure 2

Cross-constructs distribution of indicators characteristics, (A) Functional distribution of target areas in indicators, (B) Domain distribution of target areas in indicators, and (C) Functional distribution of domains in indicators.

Limitations of the study

In this study, 119 indicators were interpreted and coded for nomenclature and taxonomy. However, the inclusion of more indicators may change the classification results. Another aspect that may affect the outcome of the study is consistency in the interpretation and coding of indicators. Although this was improved by using the inter-coder reliability method on 10% of the indicators, rule-based text-mining techniques may further improve the results.

Conclusion and future direction

The study categorized research impact indicators based on their characteristics and scope. Furthermore, a concept of an evidence-based nomenclature of research impact indicators was introduced to generalize and transform the indicators. For building the nomenclature and classification, one hundred and nineteen indicators were selected and coded in NVivo software. The nomenclature was developed based on the principle that “every indicator is a contextual-function to explain the impact”. Every indicator was disintegrated into three parts (the essential ingredients of the nomenclature), i.e. Function, Domain, and Target Area. It is observed that, in the literature, the primary functions of research impact indicators are improvement, recognition and creation/development. The focus of research impact indicators in the literature leans towards the academic domain, whereas the environment/sustainability domain is the least considered. As a result, impact related to research aspects themselves is considered the most. Other target areas include systems and services, methods and procedures, networking, planning, policy development, and economic aspects and commercialisation.

The study provided a novel approach in scientometrics for generalizability and transformability of research impact indicators. It explored the diversity of indicators and demonstrated the generalization based on fundamental constructs, i.e. function, domain and target area. As a result, a research impact indicator can be modified and applied to multiple research disciplines.


Ref | Indicator | Reference | Measure | Qualitative/Quantitative
1Identification of research gaps, questions and new research dimension(Heller & de Melo-Martín, 2009; Kuruvilla, Mays, Pleasant, & Walt, 2006; W. M. Trochim, Marcus, Masse, Moser, & Weld, 2008; Weiss, 2007)Yes / NoŦQualitative
2Development of a new technique for data collection and new data(Heller & de Melo-Martín, 2009; Sung et al., 2003)Yes / NoŦQualitative
3Creation of a research method or extension of existing by involving new approach and technique(Kuruvilla et al., 2006; W. M. Trochim et al., 2008)Yes / NoŦQualitative
4Defining the concept and subject vocabulary in a more comprehensive way(Mankoff, Brander, Ferrone, & Marincola, 2004; W. M. Trochim et al., 2008)Yes / NoŦQualitative
5Formation of research groups and collaborate in multidimensional research(S. R. Hanney, Grant, Wooding, & Buxton, 2004; Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; W. M. Trochim et al., 2008)Yes / NoŦQualitative
6Recruitment of skilled researchers(Heller & de Melo-Martín, 2009)How many researchers are recruited?Quantitative
7Development of communities of science, new grant programmes; replication and new research(S. Hanney, Buxton, Green, Coulson, & Raftery, 2007)Yes / NoŦŦMixed
8Effective planning and addressing future research(Gordon & Meadows, 1981)Yes / NoŦQualitative
9Research capacity building for an individual or a group of researchers(Buxton & Hanney, 1996; Raftery, Hanney, Greenhalgh, Glover, & Blatch-Jones, 2016)How many researchers are trained?Quantitative
10Preparing a better procedure for researchers induction(Heller & de Melo-Martín, 2009; Sung et al., 2003)Yes / NoŦQualitative
11Improvement in ethical approval processes for better decisions and timeliness(Pober, Neuhauser, & Pober, 2001; Sung et al., 2003)Yes / NoŦQualitative
12Formation of new research teams and projects(Pober et al., 2001)How many projects and teams are established?Quantitative
13Successful completion of ongoing research with the achievement of set targets(Weiss, 2007)Yes / NoŦŦMixed
14Retention of research team by involving in productivity and future research(Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; Nathan, 2002)How many members are retained?Quantitative
15Advancement in numbers and quality of research and research teams(Nathan, 2002; Pober et al., 2001; W. M. Trochim et al., 2008; Weiss, 2007)Yes / NoŦŦMixed
16Enhancement of research process, behaviour and procedural protocols(Heller & de Melo-Martín, 2009; Pober et al., 2001; Sung et al., 2003)Yes / NoŦQualitative
17Recognition and leadership of researchers in the research domain(Kuruvilla et al., 2006; Pober et al., 2001)Yes / NoŦQualitative
18Improvement of research communication between researchers and research organizations(Heller & de Melo-Martín, 2009; Mankoff et al., 2004)Yes / NoŦQualitative
19Serving of research staff on a higher level in more advanced organizations at national and international level(Kuruvilla et al., 2006; Sung et al., 2003)Yes / NoŦQualitative
20Improvement in research culture and overall environment(Heller & de Melo-Martín, 2009; Kessler & Glasgow, 2011; Mankoff et al., 2004; Pober et al., 2001; Sung et al., 2003)Yes / NoŦQualitative
21Identification and overcoming of the research process constraints(Heller & de Melo-Martín, 2009; Pober et al., 2001)Yes / NoŦQualitative
22Improved willingness and tangible measures for practice-based and applied research(Westfall, Mold, & Fagnan, 2007)Yes / NoŦQualitative
23Development of improved analytical methods for existing data(Kessler & Glasgow, 2011; Kuruvilla et al., 2006; W. M. Trochim et al., 2008; Weiss, 2007)Yes / NoŦQualitative
24Improvement in multi-disciplinary research methods(Kuruvilla et al., 2006)Yes / NoŦQualitative
25Creation of methods for cross domains results in interpretation and synthesis(Kuruvilla et al., 2006; Pang et al., 2003)Yes / NoŦQualitative
26Embracing the innovative methods for measuring the research outcome(Dougherty & Conway, 2008; W. M. Trochim et al., 2008)Yes / NoŦQualitative
27Discovery of new or advanced research findings(Lavis, Ross, McLeod, & Gildiner, 2003; Mankoff et al., 2004)Yes / NoŦQualitative
28Discovery of novel knowledge or innovative techniques(S. R. Hanney et al., 2004; Kalucy, Jackson-Bowers, McIntyre, & Reed, 2009; Lavis et al., 2003; W. Trochim, Kane, Graham, & Pincus, 2011)Yes / NoŦQualitative
29 | Demonstration of an efficient way of treatment | (Lavis et al., 2003; W. Trochim et al., 2011; Woolf, 2008) | Yes / No Ŧ | Qualitative
30 | Development of new research devices or products for better results | (ARC, 2018; Kalucy et al., 2009; Lavis et al., 2003; Mankoff et al., 2004; Pang et al., 2003) | Yes / No ŦŦ | Mixed
31 | Obtaining patents for new devices or products | (ARC, 2018; Kuruvilla et al., 2006; Lavis et al., 2003; Lewison, 2003; Sarli, Dubinsky, & Holmes, 2010) | How many patents are obtained? | Quantitative
32 | Identification or validation of new biomarkers for better healthcare | (Lavis et al., 2003; Zerhouni, 2007) | Yes / No Ŧ | Qualitative
33 | Use of research outcomes and discoveries in the advancement of research related to animals and humans | (Pober et al., 2001; Woolf, 2008; Zerhouni, 2007) | Yes / No Ŧ | Qualitative
34 | Receiving an award on research | (Kuruvilla et al., 2006) | How many awards are received? | Quantitative
35 | Increment in the number and proportion of research grant submissions and awards | (ARC, 2018; Lavis et al., 2003; Lewison, 2003; Weiss, 2007) | What is the success rate of grant awards? | Quantitative
36 | Increase in the quantity of publications in high-ranking journals as a research outcome | (Buxton & Hanney, 1996; Kuruvilla et al., 2006; Lewison, 2003; Pang et al., 2003; Weiss, 2007) | How many publications are produced in high-ranking journals? ŦŦŦ | Quantitative
37 | Increase in the total impact factor gained by publishing research in high-ranking journals | (ARC, 2018; Archambault & Lariviere, 2009; RAND Europe, 2006; Weiss, 2007) | How much impact factor is gained? ŦŦŦ | Quantitative
38 | Increase in conference papers and presentations organized at national or international levels | (ARC, 2018; Kalucy et al., 2009; Lewison, 2003) | How many conference papers and presentations are produced? ŦŦŦ | Quantitative
39 | Increase in the number of citations of research outcomes | (ARC, 2018; Garfield, 2006; S. R. Hanney et al., 2004; Kuruvilla et al., 2006; RAND Europe, 2006; Weiss, 2007) | How many citations are obtained? ŦŦŦ | Quantitative
40 | Increase in media appearances of researchers or research organizations for their findings and their relation to the public | (Kuruvilla et al., 2006; Lewison, 2003) | How many media appearances are made? | Quantitative
41 | Popularity and acceptance of research-based knowledge and techniques among the masses (e.g. change in community-based health practice or the education system) | (Kalucy et al., 2009; Kuruvilla et al., 2006; Lewison, 2003; Pang et al., 2003; Weiss, 2007) | Yes / No Ŧ | Qualitative
42 | Participation of researchers as members of a research journal's editorial board, or becoming a journal editor | (Kuruvilla et al., 2006) | Yes / No Ŧ | Qualitative
43 | Dissemination and reach of research outcomes to wider audiences | (Kalucy et al., 2009; Kuruvilla et al., 2006; Weiss, 2007) | Yes / No Ŧ | Qualitative
44 | IF2-Index | (Boell & Wilson, 2010) | Index Value ŦŦŦ | Quantitative
45 | h-Index | (Hirsch, 2005) | Index Value ŦŦŦ | Quantitative
46 | Contemporary h-Index | (Sidiropoulos, Katsaros, & Manolopoulos, 2007) | Index Value ŦŦŦ | Quantitative
47 | Individual h-Index | (Harzing, 2010) | Index Value ŦŦŦ | Quantitative
48 | Hi-Index | (Zhai, Yan, & Zhu, 2013) | Index Value ŦŦŦ | Quantitative
49 | H2-Index | (Vanclay & Bornmann, 2012) | Index Value ŦŦŦ | Quantitative
50 | M-Quotient | (Hirsch, 2005) | Index Value ŦŦŦ | Quantitative
51 | G-Index | (Egghe, 2006) | Index Value ŦŦŦ | Quantitative
52 | Y-Index | (Fu & Ho, 2014) | Index Value ŦŦŦ | Quantitative
53 | PRP-Index | (Vinkler, 2014) | Index Value ŦŦŦ | Quantitative
54 | IFQ2A Index | (Torres-Salinas, Moreno-Torres, Delgado-López-Cózar, & Herrera, 2011) | Index Value ŦŦŦ | Quantitative
55 | DCI-Index | (Järvelin & Persson, 2008) | Index Value ŦŦŦ | Quantitative
56 | R- & AR-Indices | (Jin, Liang, Rousseau, & Egghe, 2007) | Index Value ŦŦŦ | Quantitative
57 | AHP Index | (Wang, Wen, & Liu, 2016) | Index Value ŦŦŦ | Quantitative
58 | Altmetric | (A. E. Williams, 2017) | Altmetric Attention Score | Quantitative
59 | STAR Metrics | (Largent & Lane, 2012) | Index Value | Quantitative
60 | ResearchGate Score | (Hoffmann, Lutz, & Meckel, 2016) | Index Value ŦŦŦ | Quantitative
61 | Crown indicator | (Moed, De Bruin, & Van Leeuwen, 1995) | Index Value ŦŦŦ | Quantitative
62 | Societal Quality Score | (Mostert, Ellenbroek, Meijer, van Ark, & Klasen, 2010) | Index Value | Quantitative
63 | PlumX Metrics | (Lindsay, 2016) | Index Value ŦŦŦ | Quantitative
64 | Positive reviews of creative publications and performances | (Grant, Brutscher, Kirk, Butler, & Wooding, 2010) | Yes / No Ŧ | Qualitative
65 | Non-academic publications in government reports | (Penfield, Baker, Scoble, & Wykes, 2014) | How many publications appear in government reports? | Quantitative
66 | Non-academic citations in government reports | (Penfield et al., 2014) | How many citations are made in government reports? | Quantitative
67 | Number of industrial contracts | (ARC, 2018) | How many industrial contracts are obtained? | Quantitative
68 | Amount of industrial and academic funding | (ARC, 2018) | How much funding is secured? | Quantitative
69 | Community awareness of research; collaborative projects with end users | (S. Hanney et al., 2007) | Yes / No Ŧ | Qualitative
70 | Facilitation of and participation in expert panels for research enquiries, external institutions, steering committees and advisory boards | (S. Hanney et al., 2007) | Yes / No ŦŦ | Mixed
71 | Use of research outcomes, discoveries or clinical trials as best practice | (Lewison, 2003; W. M. Trochim et al., 2008; Woolf, 2008) | Yes / No Ŧ | Qualitative
72 | Use of research outcomes for efficiency and better performance of services | (Woolf, 2008) | Yes / No Ŧ | Qualitative
73 | Provision of diversified and efficient intervention and treatment options for clinicians | (Dougherty & Conway, 2008) | Yes / No Ŧ | Qualitative
74 | Improved client care | (Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; Mankoff et al., 2004; Pang et al., 2003; Pober et al., 2001; W. Trochim et al., 2011; Weiss, 2007; Westfall et al., 2007) | Yes / No Ŧ | Qualitative
75 | Decrease in work-environment mistakes | (Donaldson, Rutledge, & Ashley, 2004) | What is the rate of decrease in work-environment mistakes? | Quantitative
76 | Increase in the provision of training in healthcare improvement from healthcare providers to support staff | (S. R. Hanney et al., 2004; Mankoff et al., 2004; Pober et al., 2001; Sung et al., 2003) | How many support staff are trained? | Quantitative
77 | Improvement in technologies and information systems for social applications | (B. Haynes & A. Haines, 1998; Kuruvilla et al., 2006) | Yes / No Ŧ | Qualitative
78 | Increase in training development for system improvement | (S. R. Hanney et al., 2004; Kuruvilla et al., 2006; Lewison, 2003; Mankoff et al., 2004; Pober et al., 2001; Sung et al., 2003) | How many trainings are developed for healthcare improvement? | Quantitative
79 | Creation of prevention methods for clinical practice | (Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; Mankoff et al., 2004; Pang et al., 2003; Pober et al., 2001; W. Trochim et al., 2011; Weiss, 2007; Westfall et al., 2007) | Yes / No Ŧ | Qualitative
80 | Adopting evidence-based practices | (Donaldson et al., 2004; Dougherty & Conway, 2008; Grant, Cottrell, Cluzeau, & Fawcett, 2000; Kuruvilla et al., 2006; Westfall et al., 2007) | Yes / No Ŧ | Qualitative
81 | Improvement in patient outcomes | (Donaldson et al., 2004; Dougherty & Conway, 2008; Lewison, 2003; Weiss, 2007) | Yes / No Ŧ | Qualitative
82 | Improvement in the health behaviours and enthusiasm of patients and the general masses | (Kuruvilla et al., 2006; Lewison, 2003; Woolf, 2008) | Yes / No Ŧ | Qualitative
83 | Development and promulgation of guidelines and policies | (Dougherty & Conway, 2008; Grant et al., 2000; S. R. Hanney et al., 2004; B. Haynes & A. Haines, 1998; Kuruvilla et al., 2006; Lewison, 2003; Pang et al., 2003; W. Trochim et al., 2011) | Yes / No Ŧ | Qualitative
84 | Progress in healthcare based on personal circumstances, e.g. genetic sequencing | (Mankoff et al., 2004; Zerhouni, 2007) | Yes / No Ŧ | Qualitative
85 | Strengthening of the service-client relationship | (Woolf, 2008) | Yes / No Ŧ | Qualitative
86 | Translation of research outcomes into medical practice for improvement | (Dougherty & Conway, 2008; Kessler & Glasgow, 2011) | Yes / No Ŧ | Qualitative
87 | Strengthening human protection through improved policies and better procedures | (Weiss, 2007) | Yes / No Ŧ | Qualitative
88 | Improvement in regulation for introducing advanced technologies, tools and techniques | (Lewison, 2003) | Yes / No Ŧ | Qualitative
89 | Compliance with ethical guidelines in research | (Kuruvilla et al., 2006; Weiss, 2007) | Yes / No Ŧ | Qualitative
90 | Development of community-based awareness | (Sarli et al., 2010) | Yes / No Ŧ | Qualitative
91 | Betterment of policies, guidelines and reimbursement systems for service providers | (Sarli et al., 2010) | Yes / No Ŧ | Qualitative
92 | Increased empowerment of service users | (Kuruvilla et al., 2006) | Yes / No Ŧ | Qualitative
93 | Support of research outcomes and information for political decision- and policy-making | (Buxton & Hanney, 1996; B. Haynes & A. Haines, 1998; Kalucy et al., 2009; Pang et al., 2003) | Yes / No Ŧ | Qualitative
94 | Improved public awareness of the environment and culture; public behaviour change and advocacy; increased literacy and numeracy rates | (Grant et al., 2010; Raftery et al., 2016) | Yes / No Ŧ | Qualitative
95 | Improvement in the health literacy of health users and patients | (Kuruvilla et al., 2006; Pang et al., 2003) | Yes / No Ŧ | Qualitative
96 | Improvement in the health status of health users and patients | (Dougherty & Conway, 2008; Kuruvilla et al., 2006; Weiss, 2007) | Yes / No Ŧ | Qualitative
97 | Establishment of public health, education or other social schemes for a region | (Woolf, 2008) | Yes / No Ŧ | Qualitative
98 | Decrease in social disparities | (Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; Zerhouni, 2007) | Yes / No Ŧ | Qualitative
99 | Improvement in inter-organizational coordination for the betterment of the social sector | (Sarli et al., 2010) | Yes / No Ŧ | Qualitative
100 | Increase in planning efforts and program implementation related to social issues | (Heller & de Melo-Martín, 2009; Woolf, 2008) | Yes / No Ŧ | Qualitative
101 | Improvement in health users' knowledge about health research | (Weiss, 2007) | Yes / No Ŧ | Qualitative
102 | Increased empowerment and knowledge of health users about health issues | (Kuruvilla et al., 2006; Weiss, 2007) | Yes / No Ŧ | Qualitative
103 | Better communication and perception of health users about health risks | (Kuruvilla et al., 2006; Pober et al., 2001; Weiss, 2007) | Yes / No Ŧ | Qualitative
104 | Expansion of health education, literacy and other social advantages | (Kuruvilla et al., 2006) | Yes / No Ŧ | Qualitative
105 | Improvement in the occupational health and safety environment | (Raftery et al., 2016; V. Williams, Eiseman, Landree, & Adamson, 2009) | Yes / No Ŧ | Qualitative
106 | Disuse of a law, rendering an existing method of drug approval obsolete | (Maliha, 2018) | Yes / No Ŧ | Qualitative
107 | Commercialization of a new discovery, product or technology | (Kuruvilla et al., 2006; Lavis et al., 2003; Woolf, 2008) | Yes / No Ŧ | Qualitative
108 | Improvement in cost-reducing techniques and effectiveness | (Kuruvilla et al., 2006) | Yes / No Ŧ | Qualitative
109 | Improvement in economic gains, such as increased employment and health cost cuts | (Aries & Sclar, 1998; S. R. Hanney et al., 2004; Kalucy et al., 2009; Kuruvilla et al., 2006; RAND Europe, 2006; Weiss, 2007) | Yes / No ŦŦ | Mixed
110 | Development of new job opportunities and growth in a specific economic sector or geographical region | (Aries & Sclar, 1998) | Yes / No ŦŦ | Mixed
111 | Development of medicinal products and therapeutic procedures | (S. Hanney et al., 2007; Sarli et al., 2010) | How many products or procedures are developed? | Quantitative
112 | Improvement in the business environment, commercialization, technology incubation, products and processes | (Buxton & Hanney, 1996) | Yes / No Ŧ | Qualitative
113 | Reduction in work loss due to illness and increased benefits from a healthy workforce | (CAHS, 2009) | Yes / No Ŧ | Qualitative
114 | Increased royalties, employment and licences; creative works commissioned | (Grant et al., 2010) | Yes / No ŦŦ | Mixed
115 | Creation of new knowledge about sustainable development and environmental protection for a better future of the world | (Kuruvilla et al., 2006) | Yes / No Ŧ | Qualitative
116 | Improved environmental quality and sustainability | (Engel-Cox, Van Houten, Phelps, & Rose, 2008) | Yes / No Ŧ | Qualitative
117 | Reduced emissions; regeneration or arrested degradation of natural resources | (Raftery et al., 2016) | Yes / No ŦŦ | Mixed
118 | Improved awareness of environmental impacts and legislation for protection | (CAHS, 2009) | Yes / No Ŧ | Qualitative
119 | Development of mitigation methods for reducing environmental hazards and losses from natural disasters | (Grant et al., 2010) | Yes / No ŦŦ | Mixed
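Several of the indicators above (rows 44–63) reduce a publication record to a single index value. As an illustration only, not part of the study's method, the two most widely used of these, the h-index (Hirsch, 2005) and the g-index (Egghe, 2006), can be computed from a list of per-paper citation counts; this is a minimal sketch of their standard definitions:

```python
def h_index(citations):
    """h-index (Hirsch, 2005): the largest h such that at least
    h papers have h or more citations each."""
    cites = sorted(citations, reverse=True)
    # Count positions where the i-th most-cited paper still has >= i+1 citations.
    return sum(1 for i, c in enumerate(cites) if c >= i + 1)

def g_index(citations):
    """g-index (Egghe, 2006): the largest g such that the g most-cited
    papers together have at least g^2 citations (here capped at the
    number of papers, one common convention)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites):
        total += c
        if total >= (i + 1) ** 2:
            g = i + 1
    return g

# Example: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # 4 papers have >= 4 citations
print(g_index([10, 8, 5, 4, 3]))  # top 5 papers total 30 >= 25 citations
```

Because the g-index weights highly cited papers more heavily, it is always at least as large as the h-index for the same record, which is why the two appear as separate indicators in the table.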

Arsalan, M., Mubin, O., & Al Mahmud, A. (2019). Evidence-based nomenclature and taxonomy of research impact indicators. In Proceedings of the 17th International Conference on Scientometrics and Informetrics (ISSI 2019), 2–5 September 2019, Sapienza University of Rome, Italy.

Bennett, S., Reeve, R., Muir, K., Marjolin, A., & Powell, A. (2016). Orienting your journey: An approach for indicator assessment and selection. Retrieved from https://www.csi.edu.au/media/Orienting_Your_Journey_-_Change_Collection.pdf

Bernstein, A., Hicks, V., Borbey, P., & Campbell, T. (2006). A framework to measure the impact of investments in health research. Paper presented at the Blue Sky II conference, What Indicators for Science, Technology and Innovation Policies in the 21st Century.

Buxton, M., & Hanney, S. (1996). How can payback from health services research be assessed? Journal of Health Services Research & Policy, 1(1), 35–43. doi:10.1177/135581969600100107

Panel on Return on Investment in Health Research. (2009). Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. Canadian Academy of Health Sciences, Ottawa, ON, Canada.

Deloitte Access Economics. (2011). Returns on NHMRC funded Research and Development. Commissioned by the Australian Society for Medical Research, Sydney, Australia.

Grant, J., Brutscher, P.-B., Kirk, S.E., Butler, L., & Wooding, S. (2010). Capturing Research Impacts: A Review of International Practice. Documented Briefing. RAND Corporation, 92.

Hayat, K. (2014). Nomenclature and its importance in microbiology. Retrieved from https://medimoon.com/2014/04/nomenclature-and-its-importance-in-microbiology/

Heller, C., & de Melo-Martín, I. (2009). Clinical and Translational Science Awards: Can they increase the efficiency and speed of clinical and translational research? Academic Medicine, 84(4), 424–432. doi:10.1097/ACM.0b013e31819a7d81

Kuruvilla, S., Mays, N., Pleasant, A., & Walt, G. (2006). Describing the impact of health research: A Research Impact Framework. BMC Health Services Research, 6(1), 134. doi:10.1186/1472-6963-6-134

Lewison, G. (2003). Beyond outputs: New measures of biomedical research impact. Aslib Proceedings: New Information Perspectives, 55(1/2), 32–42. doi:10.1108/00012530310462698

Longabaugh, R., Fowler, D.R., Stout, R., & Kriebel, G. (1983). Validation of a problem-focused nomenclature. Archives of General Psychiatry, 40(4), 453–461. doi:10.1001/archpsyc.1983.01790040107014

Maliha, G. (2018). Obsolete to useful to obsolete once again: A history of Section 507 of the Food, Drug, and Cosmetic Act. Food and Drug Law Journal, 73(3), 405.

REF. (2012). Panel criteria and working methods. Retrieved from https://www.ref.ac.uk/2014/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf

Trochim, W., Kane, C., Graham, M.J., & Pincus, H.A. (2011). Evaluating translational research: A process marker model. Clinical and Translational Science, 4(3), 153–162. doi:10.1111/j.1752-8062.2011.00291.x

United Way of America. (1996). Measuring program outcomes: A practical approach. Retrieved from https://digitalcommons.unomaha.edu/slceeval/47

Vinkler, P. (2010a). The evaluation of research by scientometric indicators. Elsevier. doi:10.1533/9781780630250

Vinkler, P. (2010b). Indicators are the essence of scientometrics and bibliometrics. Scientometrics, 85(3), 861–866. doi:10.1007/s11192-010-0159-y

Vinner, S., & Dreyfus, T. (1989). Images and definitions for the concept of function. Journal for Research in Mathematics Education, 356–366. doi:10.2307/749441

VSNU, KNAW, & NWO. (2009). Standard Evaluation Protocol 2009–2015: Protocol for research assessment in The Netherlands. Retrieved from https://www.knaw.nl/en/news/publications/standard-evaluation-protocol-sep-2009-2015

Weiss, A.P. (2007). Measuring the impact of medical research: Moving from outputs to outcomes. American Journal of Psychiatry, 164(2), 206–214. doi:10.1176/ajp.2007.164.2.206

Wellcome Trust. (2009). How we are making a difference: Assessment framework report summary. Retrieved from https://wellcome.ac.uk/sites/default/files/wtx059577_0.pdf

Wiegers, S.E., Houser, S.R., Pearson, H.E., Untalan, A., Cheung, J.Y., Fisher, S.G., . . ., & Feldman, A.M. (2015). A metric-based system for evaluating the productivity of preclinical faculty at an academic medical center in the era of clinical and translational science. Clinical and Translational Science, 8(4), 357–361. doi:10.1111/cts.12269

ARC. (2018). Excellence in Research for Australia (ERA) 2018: Submission Guidelines. Retrieved from http://www.arc.gov.au/sites/default/files/filedepot/Public/ERA/ERA%202018/ERA%202018%20Submission%20Guidelines.pdf

Archambault, E., & Lariviere, V. (2009). History of the journal impact factor: Contingencies and consequences. Scientometrics, 79(3), 635–649. doi:10.1007/s11192-007-2036-x

Aries, N.R., & Sclar, E.D. (1998). The economic impact of biomedical research: A case study of voluntary institutions in the New York metropolitan region. Journal of Health Politics, Policy and Law, 23(1), 175–193. doi:10.1215/03616878-23-1-175

Boell, S.K., & Wilson, C.S. (2010). Journal Impact Factors for evaluating scientific performance: Use of h-like indicators. Scientometrics, 82(3), 613–626. doi:10.1007/s11192-010-0175-y

CAHS. (2009). Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. Ottawa, Canada.

Donaldson, N.E., Rutledge, D.N., & Ashley, J. (2004). Outcomes of adoption: Measuring evidence uptake by individuals and organizations. Worldviews on Evidence-Based Nursing, 1(S1), S41–51. doi:10.1111/j.1524-475X.2004.04048.x

Dougherty, D., & Conway, P.H. (2008). The "3T's" road map to transform US health care: The "how" of high-quality care. JAMA, 299(19), 2319–2321. doi:10.1001/jama.299.19.2319

Egghe, L. (2006). Theory and practise of the g-index. Scientometrics, 69(1), 131–152. doi:10.1007/s11192-006-0144-7

Engel-Cox, J.A., Van Houten, B., Phelps, J., & Rose, S.W. (2008). Conceptual model of comprehensive research metrics for improved human health and environment. Environmental Health Perspectives, 116(5), 583–592. doi:10.1289/ehp.10925

Fu, H.Z., & Ho, Y.S. (2014). Top cited articles in adsorption research using Y-index. Research Evaluation, 23(1), 12–20. doi:10.1093/reseval/rvt018

Garfield, E. (2006). Citation indexes for science: A new dimension in documentation through association of ideas. 1955. International Journal of Epidemiology, 35(5), 1123–1127. doi:10.1093/ije/dyl189

Gordon, M., & Meadows, A. (1981). The dissemination of findings of DHSS funded research. Leicester: University of Leicester, Primary Communications Research Centre.

Grant, J., Cottrell, R., Cluzeau, F., & Fawcett, G. (2000). Evaluating "payback" on biomedical research from papers cited in clinical guidelines: Applied bibliometric study. BMJ, 320(7242), 1107–1111. doi:10.1136/bmj.320.7242.1107

Hanney, S., Buxton, M., Green, C., Coulson, D., & Raftery, J. (2007). An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technology Assessment, 11(53), iii–iv, ix–xi, 1–180. doi:10.3310/hta11530

Hanney, S.R., Grant, J., Wooding, S., & Buxton, M.J. (2004). Proposed methods for reviewing the outcomes of health research: The impact of funding by the UK's Arthritis Research Campaign. Health Research Policy and Systems, 2(1), 4. doi:10.1186/1478-4505-2-4

Harzing, A.-W.K. (2010). The publish or perish book. Melbourne: Tarma Software Research.

Haynes, B., & Haines, A. (1998). Barriers and bridges to evidence based clinical practice. BMJ, 317(7153), 273–276.

Hirsch, J.E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572. doi:10.1073/pnas.0507655102

Hoffmann, C.P., Lutz, C., & Meckel, M. (2016). A relational altmetric? Network centrality on ResearchGate as an indicator of scientific impact. Journal of the Association for Information Science and Technology, 67(4), 765–775. doi:10.1002/asi.23423

Järvelin, K., & Persson, O. (2008). The DCI index: Discounted cumulated impact-based research evaluation. Journal of the American Society for Information Science and Technology, 59(9), 1433–1440. doi:10.1002/asi.20847

Jin, B.H., Liang, L.M., Rousseau, R., & Egghe, L. (2007). The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin, 52(6), 855–863. doi:10.1007/s11434-007-0145-9

Kalucy, E.C., Jackson-Bowers, E., McIntyre, E., & Reed, R. (2009). The feasibility of determining the impact of primary health care research projects using the Payback Framework. Health Research Policy and Systems, 7(1), 11. doi:10.1186/1478-4505-7-11

Kessler, R., & Glasgow, R.E. (2011). A proposal to speed translation of healthcare research into practice: Dramatic change is needed. American Journal of Preventive Medicine, 40(6), 637–644. doi:10.1016/j.amepre.2011.02.023

Largent, M.A., & Lane, J.I. (2012). STAR METRICS and the science of science policy. Review of Policy Research, 29(3), 431–438. doi:10.1111/j.1541-1338.2012.00567.x

Lavis, J., Ross, S., McLeod, C., & Gildiner, A. (2003). Measuring the impact of health research. Journal of Health Services Research & Policy, 8(3), 165–170. doi:10.1258/135581903322029520

Lindsay, J.M. (2016). PlumX from Plum Analytics: Not just altmetrics. Journal of Electronic Resources in Medical Libraries, 13(1), 8–17. doi:10.1080/15424065.2016.1142836

Maliha, G. (2018). Obsolete to Useful to Obsolete Once Again: A History of Section 507 of the Food, Drug, and Cosmetic Act.

Mankoff, S.P., Brander, C., Ferrone, S., & Marincola, F.M. (2004). Lost in Translation: Obstacles to Translational Medicine. J Transl Med, 2(1), 14. doi:10.1186/1479-5876-2-14

Moed, H., De Bruin, R., & Van Leeuwen, T. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33(3), 381–422. doi:10.1007/BF02017338

Mostert, S.P., Ellenbroek, S.P.H., Meijer, I., van Ark, G., & Klasen, E.C. (2010). Societal output and use of research performed by health research groups. Health Research Policy and Systems, 8(1), 30. doi:10.1186/1478-4505-8-30

Nathan, D.G. (2002). Careers in translational clinical research—historical perspectives, future challenges. JAMA, 287(18), 2424–2427. doi:10.1001/jama.287.18.2424

Pang, T., Sadana, R., Hanney, S., Bhutta, Z.A., Hyder, A.A., & Simon, J. (2003). Knowledge for better health: a conceptual framework and foundation for health research systems. Bull World Health Organ, 81(11), 815–820. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/14758408

Penfield, T., Baker, M.J., Scoble, R., & Wykes, M.C. (2014). Assessment, evaluations, and definitions of research impact: A review. Research Evaluation, 23(1), 21–32. doi:10.1093/reseval/rvt021

Pober, J.S., Neuhauser, C.S., & Pober, J.M. (2001). Obstacles facing translational research in academic medical centers. FASEB J, 15(13), 2303–2313. doi:10.1096/fj.01-0540lsf

Raftery, J., Hanney, S., Greenhalgh, T., Glover, M., & Blatch-Jones, A. (2016). Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Health Technol Assess, 20(76), 1–254. doi:10.3310/hta20760

RAND Europe. (2006). Measuring the benefits from research. Cambridge, England. Retrieved from https://www.rand.org/content/dam/rand/pubs/research_briefs/2007/RAND_RB9202.pdf

Sarli, C.C., Dubinsky, E.K., & Holmes, K.L. (2010). Beyond citation analysis: a model for assessment of research impact. J Med Libr Assoc, 98(1), 17–23. doi:10.3163/1536-5050.98.1.008

Sidiropoulos, A., Katsaros, D., & Manolopoulos, Y. (2007). Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics, 72(2), 253–280. doi:10.1007/s11192-007-1722-z

Sung, N.S., Crowley, W.F., Jr., Genel, M., Salber, P., Sandy, L., Sherwood, L.M., . . . Rimoin, D. (2003). Central challenges facing the national clinical research enterprise. JAMA, 289(10), 1278–1287. doi:10.1001/jama.289.10.1278

Torres-Salinas, D., Moreno-Torres, J.G., Delgado-López-Cózar, E., & Herrera, F. (2011). A methodology for Institution-Field ranking based on a bidimensional analysis: the IFQ2A index. Scientometrics, 88(3), 771–786. doi:10.1007/s11192-011-0418-6

Trochim, W., Kane, C., Graham, M.J., & Pincus, H.A. (2011). Evaluating translational research: a process marker model. Clin Transl Sci, 4(3), 153–162. doi:10.1111/j.1752-8062.2011.00291.x

Trochim, W.M., Marcus, S.E., Masse, L.C., Moser, R.P., & Weld, P.C. (2008). The evaluation of large research initiatives—A participatory integrative mixed-methods approach. American Journal of Evaluation, 29(1), 8–28. doi:10.1177/1098214007309280

Vanclay, J.K., & Bornmann, L. (2012). Metrics to evaluate research performance in academic institutions: a critique of ERA 2010 as applied in forestry and the indirect H-2 index as a possible alternative. Scientometrics, 91(3), 751–771. doi:10.1007/s11192-012-0618-8

Vinkler, P. (2014). The use of the Percentage Rank Position index for comparative evaluation of journals. Journal of Informetrics, 8(2), 340–348. doi:10.1016/j.joi.2014.01.001

Wang, L., Wen, H., & Liu, Y. (2016). AHP Based Quantitative Evaluation Index System of Teacher's Research Performance in the University. International Journal of Multimedia and Ubiquitous Engineering, 11(7), 391–402. doi:10.14257/ijmue.2016.11.7.39

Weiss, A.P. (2007). Measuring the impact of medical research: moving from outputs to outcomes. Am J Psychiatry, 164(2), 206–214. doi:10.1176/ajp.2007.164.2.206

Westfall, J.M., Mold, J., & Fagnan, L. (2007). Practice-based research—“Blue Highways” on the NIH roadmap. JAMA, 297(4), 403–406. doi:10.1001/jama.297.4.403

Williams, A.E. (2017). Altmetrics: an overview and evaluation. Online Information Review, 41(3), 311–317. doi:10.1108/Oir-10-2016-0294

Williams, V., Eiseman, E., Landree, E., & Adamson, D. (2009). Demonstrating and Communicating Research Impact: Preparing NIOSH Programs for External Review.

Woolf, S.H. (2008). The meaning of translational research and why it matters. JAMA, 299(2), 211–213. doi:10.1001/jama.2007.26

Zerhouni, E.A. (2007). Translational research: moving discovery to practice. Clin Pharmacol Ther, 81(1), 126–128. doi:10.1038/sj.clpt.6100029

Zhai, L., Yan, X., & Zhu, B. (2013). The Hl-index: improvement of H-index based on quality of citing papers. Scientometrics, 98(2), 1021–1031. doi:10.1007/s11192-013-1039-z
