The “Norwegian Model” attempts to comprehensively cover all the peer-reviewed scholarly literature in all areas of research in one single weighted indicator. Thereby, scientific production is made comparable across departments and faculties, within and between research institutions, and the indicator may serve institutional evaluation and funding. This article describes the motivation for creating the model in Norway and how it was designed, organized and implemented, as well as the effects of and experiences with the model. The article ends with an overview of a new type of bibliometric study based on the kind of comprehensive national publication data that the Norwegian Model provides.
The main goal of this study is to outline and analyze the Danish adoption and translation of the Norwegian Publication Indicator.
Design/methodology/approach
The study takes the form of a policy analysis mainly drawing on document analysis of policy papers, previously published studies and grey literature.
Findings
The study highlights a number of crucial factors relating both to the Danish process and to the final Danish result, underscoring that the Danish Bibliometric Research Indicator (BFI) is indeed a quite different system from its Norwegian counterpart. One consequence of these process and design differences is that the broader legitimacy of the Danish BFI today appears to be quite poor. Reasons for this include: unclear and shifting objectives throughout the process; limited willingness among stakeholders to take ownership of the model; lack of communication throughout the implementation process; and an apparent underestimation of the challenges associated with the use of bibliometric indicators.
Research limitations
The conclusions of the study are based on the authors’ interpretation of a protracted and complex process involving many different stakeholders. The format of this article does not allow for detailed documentation of all elements, but further details can be provided upon request.
Practical implications
The analysis may feed into current policy discussions on the future of the Danish BFI.
Originality/value
Some elements of the present analysis have previously been published in Danish outlets, but this article represents the first publication on this issue targeting a broader international audience.
The purpose of this article is to describe the development, components and properties of the publication indicator that the Ministry of Education and Culture in Finland uses for allocating direct core funding annually to universities. Since 2013, 13% of the core funding has been allocated on the basis of a publication indicator that, like the Norwegian model, is based on comprehensive national-level publication data, currently provided by the VIRTA publication information service. In 2015, the publication indicator was complemented with another component of the Norwegian model, namely quality-weighted publication counts based on the national Publication Forum authority list of publication channels, with ratings established by experts in each field. The funding model allocates around 1.6 billion euros annually to universities, with the publication indicator distributing over 200 million euros of this each year. Besides the funding model, the indicator provides comparable data for monitoring the research performance of Finnish universities, fields and subunits. The indicator may also be used in the universities’ local funding models and research management systems, sometimes even for individual-level evaluation. The positive and negative effects of the indicator have been extensively discussed and speculated upon. Since 2011, the Finnish universities’ productivity appears to have increased in terms of both the quantity and quality of publications.
The BOF-key is the performance-based research funding system used in Flanders, Belgium. In this paper we describe the historical background of the system, its current design and organization, and its effects on the Flemish higher education landscape. The BOF-key in its current form relies on three bibliometric parameters: publications in Web of Science, citations in Web of Science, and publications in a comprehensive regional database for social sciences and humanities (SSH) publications. Taken together, the BOF-key forms a unique variant of the Norwegian model: while the system to a large extent relies on a commercial database, it avoids the problem of inadequate coverage of the SSH. Because the bibliometric parameters of the BOF-key are reused in other funding allocation schemes, their overall importance to the Flemish universities is substantial.
This study aims to present the key systemic changes in the Polish book evaluation model, focusing on the publisher list inspired by the Norwegian Model.
Design/methodology/approach
In this study we reconstruct the framework of the 2010 and 2018 models of book evaluation in Poland within the performance-based research funding system.
Findings
For almost 20 years, the book evaluation system in Poland was based on the verification of various technical criteria (e.g. the length of the book). The new 2018 model is based on the principle of prestige inheritance (a book is worth as much as its publisher is) and is inspired by the publisher list used in the Norwegian Model. In this paper, we argue that this solution may be a more balanced policy instrument than the previous 2010 model, in which neither the quality of the publisher nor the quality of the book played any role in the evaluation.
Research limitations
We work from the framework of the 2018 model of book evaluation specified in the Law on Higher Education and Science of 20 July 2018, as implementing acts are not yet available.
Practical implications
This study may provide a valuable point of reference for how structural reforms of a research evaluation model are implemented at the country level. The results may be of interest to policy makers, stakeholders and researchers focused on science policy.
Originality/value
This is the first study to present the new framework of the Polish research evaluation model and its policy instruments for scholarly book evaluation. We describe what motivated policy makers to change the book evaluation model, and what arguments were explicitly raised in favor of the new solution.
University College Dublin (UCD) has operated the Output-Based Research Support Scheme (OBRSS) since 2016. Adapted from the Norwegian model, the OBRSS awards individual academic staff using a points system based on their numbers of publications and doctoral students. This article describes the design and implementation of the OBRSS, including the creation of the ranked publication list and points system, as well as the infrastructure requirements. Some results of the OBRSS are presented, focusing on the coverage of publications reported in the OBRSS ranked publication list and Scopus, together with information about spending patterns. Challenges such as evaluating the OBRSS in terms of fairness, transparency and effectiveness are also discussed.
The “Norwegian model” has become widely used for assessment and resource allocation purposes. This paper investigates why this model has become so widespread and influential.
Approach
A theoretical background is outlined in which the reduction of “uncertainty” is highlighted as a key feature of performance measurement systems. These theories are then drawn upon when revisiting previous studies of the Norwegian model, its use, and reactions to it, in Sweden.
Findings
The empirical examples, which concern both formal use at the university level and responses from individual researchers, show how particular parts, especially the “publication indicator”, are employed in Swedish academia. The discussion posits that the attractiveness of the Norwegian model can largely be explained by its ability to reduce complexity and uncertainty, even in fields where traditional bibliometric measurement is less applicable.
Research limitations
The findings presented should be regarded as examples that can inform discussion; one should be careful not to interpret them as representative of broader sentiments and trends.
Implications
The sheer popularity of the Norwegian model, leading to its application in contexts for which it was not designed, can be seen as a major challenge for the future.
Originality
This paper offers a novel perspective on the Norwegian model by focusing on its general “appeal”, rather than on its design, use or (mis)use.