
Introduction

Research Impact (RI) is a broad topic within scientometrics that supports the progress of science and the monitoring of the influence of efforts made by governments, institutions, societies, programs, and individual researchers. There are several documented and popular RI assessment methods, developed by individuals and organisations, for evaluating the research of a particular programme or for general purposes. This range of intents has created diversity in evaluation methods, frameworks, and scope. Some approaches focus only on impacts related to academic recognition and use, such as bibliometric measures. However, growing technology, educational networking, effective and targeted research strategies, and regular monitoring of RI are reducing the gap between research producers and consumers. As a result, the horizon of RI is expanding to cover other areas of impact, such as the economy, society, and the environment.

Many individuals and organizations have introduced measures and indicators for assessing RI. Nevertheless, due to the diversity in the nature and scale of RI, no single method is considered robust and complete (Vinkler, 2010). Therefore, new measures and indicators are introduced from time to time according to the interests and available resources of their designers (Canadian Academy of Health Sciences, 2009). Additionally, the higher availability of national and international funding for the health sciences critically influences the science of RI assessment (Heller & de Melo-Martín, 2009): there are more indicators, measures, and frameworks for health-related research than for any other area of science. Consequently, there is a considerable gap in the generalizability and transferability of health-related efforts to the rest of science.

At large, this study aims to discover the evidence-based diversity of RI indicators and to develop a method for organizing them. To this end, a nomenclature of RI indicators is developed based on the divide-and-rule principle. Additionally, a taxonomical analysis is presented based on the primary components of the nomenclature. This effort is a step towards developing a robust and inclusive RI assessment method. The concept of this paper was initially presented at the 17th International Conference on Scientometrics and Informetrics (ISSI 2019) in the form of a poster (Arsalan, Mubin, & Al Mahmud, 2019).

Review of literature

In the broader understanding, metrics and indicators are different. This is clear from the semantics: an “indicator” indicates the research impact, whereas metrics are measurements of research impact. According to Vinkler (2010), an indicator should be read as a meaningful representative which indicates the performance of a system as per its design objective. Metrics, on the other hand, provide additional quantitative information about the impact of a system (Lewison, 2003). However, the diversity of effects can make it problematic to measure all aspects quantitatively. Therefore, pragmatically, indicators have the potential to illustrate useful and broader impacts compared to metrics (Vinkler, 2010).

There are multiple reasons for the diversity in research impact indicators. For instance, Bennett et al. (2016) explained a 10-point set of criteria for constructing research impact indicators from a technical and contextual point of view: indicators should be specific, validated, reliable, comparable, substantial, accessible, acceptable, appropriate, useable, and feasible. Consequently, this diversity of criteria requires a range of indicators to fulfil the conditions in diversified contexts. The Canadian Academy of Health Sciences (2009) explained another reason for this diversity, namely the strategy of indicator selection, which describes three basic principles.

The indicator should answer the specific question of evaluation

The indicator should satisfy the level of aggregation

The indicator should be read with other indicators to complement the strength of evaluation

In other words, every indicator explains the impact of research in only a minimal dimension, covers a very specific level of aggregation, and has very limited power to define the research impact (REF, 2012). As a result, we need a bundle of indicators that fulfils the strategic requirements of evaluators.

“Nomenclature” is a combination of two Latin words, “nomen”, meaning “name”, and “calare”, meaning “to call”. It is the scientific process, in any discipline, of assigning names to essential components according to predefined rules and standards (Hayat, 2014). Generally, these rules are outlined in the form of a classification scheme; for nomenclature, the classification system is therefore highly significant. Longabaugh et al. (1983) introduced a problem-focused nomenclature in medical science, which is a coding system with a specific objective. They argued that the problem-focused approach provides better control for organization and problem management. A similar concept can be applied in any branch of science to organize objects with respect to a problem-focused classification system.

The classification and organization of research impact indicators are not new (Vinkler, 2010). However, a nomenclature or taxonomy approach is missing, and standardization is therefore lacking globally. Every research impact assessment effort has organized indicators distinctly according to its technical and contextual requirements. Nonetheless, based on context, the classification schemes of indicators can be arranged into four groups.

Impact Categories and Domains

Impact Time and Pathways

Impact in Specific Dimension

Uncategorised

In many research impact assessment methods, the adopted organization of impact indicators is based on impact categories and domains. These methods are wide in scope and open to selecting indicators in any of their classes (Bernstein et al., 2006). The Payback framework for assessing the impact of health research is a classical method in this group (Buxton & Hanney, 1996). Developed by Buxton and Hanney at the Health Economics Research Group at Brunel University in 1996, it organizes indicators into multi-dimensional categories including knowledge, research benefits, political and administrative benefits, health sector benefits, and broader economic benefits.

The second group, which follows impact time and pathways, is based on the concept of output and outcome. The difference between output and outcome was first explained by United Way of America (1996) in the form of logic modelling. This model explicitly defines inputs, processes, and outputs in the form of resources, activities, and products, respectively, whereas the outcome is a benefit to the population of interest. Weiss (2007) split the outcomes of health research into initial, intermediate, and long-term impacts. This time-bound approach represents a sequence or chain of effects. For instance, awareness of new research in the decision-making community is an initial outcome; that awareness can lead to a change in clinical practice as an intermediate outcome; ultimately, the long-term outcome is the improvement in the health of patients.

The third approach is exclusive: many organizations and individual researchers are keen to know the impact of research in only one area, in depth. One example is the monetary value approach presented by Deloitte Access Economics (2011), in which all indicators and measures relate solely to the economic impacts of research. Finally, some methods are organization-specific, with scoring systems limited in scope and developed in a local context. These cannot be fitted into any of the structures mentioned above; examples include the Wellcome Trust's Assessment Framework (Wellcome Trust, 2009), the Matrix Scoring System (Wiegers et al., 2015), and the Royal Netherlands Academy of Arts and Sciences approach (VSNU, KNAW, & NWO, 2009).

Although the organization of indicators within a research impact framework has been a mandatory part of every evaluation method, there is still a need to organize indicators based on criteria and rules. A classic example of diversity and heterogeneity can be seen in REF (2012), where more than 100 indicators are applied based on subject domains and target areas of socio-economic interest. A mechanism is still needed through which these indicators can be generalized and transformed on taxonomical structures.

Method
Collection of research impact indicators

We systematically explored literature databases, including Scopus, WebMD, ACM DL, IEEE Xplore, Web of Science, and Google Scholar, to collect research articles providing RI assessment indicators and methods. In many cases, organizations published frameworks and guidelines in the form of technical reports; therefore, grey literature was also considered.

Multiple combinations of literature-searching keywords were used along with their synonyms. These include, but are not limited to, “research impact”, “research productivity”, “research quality”, “research impact indicators”, “research impact assessment”, “research impact assessment method”, “research impact assessment framework”, “scientometric indicators”, “bibliometric indicators”, “economic indicators”, “social indicators”, and “environmental indicators”. The purpose of using combinations of these keywords was to identify theoretical or applied studies related to research impact assessment. In theoretical or conceptual studies, we found the constructs and mechanisms of research impact assessment methods; applied studies provided demonstrations of assessment methods in the form of case studies. We also found some review articles, which provided comparisons of different RI assessment approaches. In this study, however, we mainly focused on the preparation of RI indicators.

Because multiple combinations of keywords and databases were used, we found significant repetition of the same studies, which we removed with the help of EndNote software. We extracted indicators from the conceptual studies and used NVivo 12 software for annotation and coding. To decipher the nomenclature, indicators were disintegrated based on their lexical and conceptual structures, as discussed in the Results section. To improve the coding, inter-coder reliability was assessed on 10% of the data, and conflicts were resolved through discussion.
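The inter-coder reliability check described above can be sketched as follows. This is a minimal illustration, not the authors' actual procedure: it assumes two coders assigning function labels to the same 10% sample and computes Cohen's kappa, one common reliability statistic; all names and sample labels are hypothetical.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items.

    po = observed agreement, pe = agreement expected by chance
    from each coder's marginal label distribution.
    """
    n = len(coder_a)
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ca, cb = Counter(coder_a), Counter(coder_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical function labels assigned by two coders to four indicators
a = ["improvement", "creation", "recognition", "improvement"]
b = ["improvement", "creation", "improvement", "improvement"]
kappa = cohens_kappa(a, b)  # ~0.56: moderate agreement, worth a reconciliation discussion
```

A low kappa on the sample would signal that the coding rules need tightening before coding the remaining indicators.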

The cognitive structure of the nomenclature defined in this study rests on the principle that “every indicator is a contextual function explaining the impact”. The primary constructs of an indicator are function and context. Function refers to a “correspondence”, “dependence relation”, “rule”, “operation”, “formula”, or “representation”, as defined by Vinner and Dreyfus (1989). It explains the relationship between the two domains, “research” and “impact”. In other words, impact (y) is a function of research (x), i.e. y = f(x). At large, in scientometric understanding, the functional operation can be “improvement”, “recognition”, “reduction”, “replacement”, etc. (see Table 1 for examples). An indicator is a subjective measure of a system-dependent phenomenon, which is always described in its contextual understanding by a system designer (Vinkler, 2010). Therefore, the indicator's function is always applied in a specific context. For instance, in the indicator “improvement in the patient care system”, the patient care system represents the context of the healthcare system, which is critically important for researchers, funders, institutes, and support organisations related to the health sciences (Trochim et al., 2011).

Structure of Indicator: I = F + C, where F = function and C = context.

In turn, C = t + d, where t = target area and d = impact domain.
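The decomposition above can be modelled directly as a data structure. The sketch below is illustrative only (the class and field names are assumptions, not part of the paper's nomenclature): an indicator is a function F applied in a context C made of a target t and a domain d.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    """I = F + C, where C = t + d."""
    function: str                 # F: e.g. "improvement", "creation", "recognition"
    target: str                   # t: contextual target, e.g. "technology", "citations"
    domain: Optional[str] = None  # d: contextual domain; None marks a variable (generalized) domain

    def __str__(self) -> str:
        return f"{self.function} -> {self.target} [{self.domain or 'variable'}]"

# "Improvement in the patient care system" in the healthcare domain
i = Indicator("improvement", "patient care system", "healthcare")
```

Leaving `domain` unset corresponds to the generalized, domain-variable form of an indicator discussed later in the paper.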

Table 1. Nomenclature of Indicator with Examples.

Functions (F)

Improvement / Addition / Reduction: This function explains the addition or enhancement of an existing phenomenon in quantitative or qualitative form. (Example: improvement in economic gains such as increased employment or health cost cuts (Weiss, 2007))

Creation: This function focuses on creativity in the form of the development of new knowledge, theory, technique, method, technology, approach, opportunity, or workflow. (Example: creation of prevention methods for clinical practice (Trochim et al., 2011))

Recognition: This function explains the recognition of effort as outstanding quality by peers or experts, such as awards, promotions, meritorious selection, and work showcasing. The recognition can be of the research, the researcher, or the research institute. (Example: receiving an award for research (Kuruvilla et al., 2006))

Obsoleting / Replacing: This function covers policies, laws, and regulations that render an existing phenomenon obsolete or disused in order to avoid future negative impacts. (Example: change in law to obsolete the existing method of drug approval (Maliha, 2018))

Context (C)

Target (t): Contextual targets in research impact science include knowledge, services, policies, laws, guidelines, systems, technologies, procedures, methods, frameworks, workflows, publications, patents, products, stakeholders, citations, literature gaps, intellectual challenges, scholarly issues, relationships, collaborations, and networks. These are the key areas but are usually partial in contextual understanding.

Domain (d): The contextual domain is the main area or field of interest of the indicator's designer, such as health, education, economy, environment, academia, medical science, chemistry, history, or multidisciplinary fields. The main body of knowledge and the elaboration of indicators always come from the domain language. The domain is the component that specializes the context and application of the indicator; however, the level of the domain is subject to the interest and perspective of the impact evaluator.
Results and discussion
Search outcome and identification of indicators

The literature search returned more than one thousand studies (1,152) in which research impact was published in the form of theoretical papers, case studies, and review articles. After excluding studies in which research impact was assessed in case studies using a method developed elsewhere, only 36 conceptual studies were left. In these conceptual studies, we identified more than 500 research impact indicators, from which we selected 119 indicators for preparing the nomenclature (see Appendix 1).

Nomenclature

In many cases, an indicator is self-explanatory and well written in a proper construct-based structure, such as “Development of mitigation methods for reducing environmental hazards and losses from natural disasters” (Grant et al., 2010). However, similar to an algebraic expression, the constructs are sometimes obscured yet well understood by users. For instance, in “Number of citations”, the function and the contextual domain are missing but well recognized: the indicator reads as “Increased number of bibliometric citations”, where the function is addition, the contextual target is citations, and the domain is bibliometrics.

This contextual nomenclature of indicators allows focusing on context and function irrespective of the choice of words and the lexical structure of the indicator. Additionally, it strengthens the idea of contextual generalisability, which is very helpful in extending the applications and scope of indicators. For example, consider “use of research in the development of medical technology” (where function = development/creation, contextual target = technology, and contextual domain = healthcare). This indicator can be generalised over a variable domain as “use of research in the development of technology” (where function = development/creation, contextual target = technology, and contextual domain = variable [generalized]).
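The generalisation step described above can be read as template matching: the function and target are fixed, while the domain is left variable. A minimal sketch, with all dictionary names and values assumed for illustration:

```python
def matches(indicator: dict, template: dict) -> bool:
    """A template field set to None acts as a variable (generalized) slot."""
    return all(
        template[k] is None or indicator.get(k) == template[k]
        for k in ("function", "target", "domain")
    )

# Concrete indicators from two domains, plus a domain-variable (generalized) template
medical = {"function": "creation", "target": "technology", "domain": "healthcare"}
edu = {"function": "creation", "target": "technology", "domain": "education"}
generalized = {"function": "creation", "target": "technology", "domain": None}
```

Under this reading, one generalized indicator covers the medical and educational variants alike, which is exactly the transferability the nomenclature is meant to enable.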

Taxonomical analysis

Among the analyzed indicators, most are functionally related to improvements in the current state of affairs (63%), mainly focused on future research, services, and methods (Figure 1). Recognition of research (23%), in the form of bibliometric measures, rewards, and other citations, is also considerably highlighted in the literature-based list of indicators. Creativity and development (14%) also represent a prevailing influence of research, reflected in indicators mentioning the creation of new knowledge, techniques, research teams, drugs, etc. More than half (59%) of the indicators attempt to explore the impact in the academic domain (Figure 2), e.g. where and how is the research recognised? What knowledge, methods, and collaborations are formed? What challenges, issues, and gaps are addressed? Knowledge domains related to social systems and services are second in coverage (26%), primarily focusing on the healthcare, education, and justice systems. Economic policies and services also have a good share (11%) of the literature-based indicators. Although, during the last two decades, the impact of research on improving the environment and sustainability has also emerged in various indicators, its representation is quite low.

Figure 1

Evidence-based taxonomical characteristics of indicators, (A) Scale of indicators, (B) Complexity of indicators, (C) Functions of indicators, (D) Domains of indicators, and (E) Target areas of indicators.

Figure 2

Cross-constructs distribution of indicators characteristics, (A) Functional distribution of target areas in indicators, (B) Domain distribution of target areas in indicators, and (C) Functional distribution of domains in indicators.

Limitations of the study

In this study, 119 indicators were interpreted and coded for nomenclature and taxonomy; the inclusion of more indicators may change the results of the classification. Another aspect which may affect the outcome of the study is consistency in the interpretation and coding of indicators. Although this was improved by applying the inter-coder reliability method to 10% of the indicators, rule-based text mining techniques may further improve the results.

Conclusion and future direction

This study categorized research impact indicators based on their characteristics and scope. Furthermore, a concept of evidence-based nomenclature of research impact indicators has been introduced to generalize and transform the indicators. For building the nomenclature and classification, one hundred and nineteen indicators were selected and coded in NVivo software. The nomenclature was developed based on the principle that “every indicator is a contextual function explaining the impact”. Every indicator was disintegrated into three parts (the essential ingredients of the nomenclature): function, domain, and target area. It is observed in the literature that the primary functions of research impact indicators are improvement, recognition, and creation/development. The focus of research impact indicators in the literature leans towards the academic domain, whereas the environment/sustainability domain is the least considered. As a result, research impact related to research aspects is considered the most. Other target areas include systems and services, methods and procedures, networking, planning, policy development, economic aspects, and commercialisation.

The study provided a novel approach in scientometrics for the generalizability and transferability of research impact indicators. It explored the diversity of indicators and demonstrated generalization based on fundamental constructs, i.e. function, domain, and target area. As a result, a research impact indicator can be modified and applied to multiple research disciplines.

eISSN: 2543-683X
Language: English
Publication schedule: 4 times per year
Journal subjects: Computer Sciences, Information Technology, Project Management, Databases and Data Mining