
Performance-based Research Funding in Denmark: The Adoption and Translation of the Norwegian Model


Background and motivation

Funding constitutes one of the main channels through which authority is exercised over research. Changes in the design of funding systems can accordingly be expected to have significant effects on the production of scientific knowledge (Whitley, Gläser & Engwall, 2010), and a detailed understanding of the design and effects of national research funding mechanisms is therefore vital (Aagaard, 2017). This is not least the case for performance-based research funding systems (PBRFS), which over recent decades have been introduced in a growing number of countries and which in most cases have been strongly contested (Hicks, 2012).

The Danish system is an interesting case in this respect. For at least four decades, a central issue on the Danish research policy agenda has been how to design a core funding system that does not rely solely on student numbers and historical criteria in the allocation of resources. In line with general international trends, the funding of the Danish universities was, from the post-WW2 years to the late 1970s, almost totally dominated by core funding, which was initially distributed equally between research and teaching tasks (Aagaard, 2017). However, with the ever-growing student intake, the political system became concerned that research priorities were increasingly becoming side effects of policy decisions related to education. This led to a political demand for a more selective distribution of research funding. The first result of these discussions materialized in 1981 with the so-called budget reform, which introduced a clear separation between funding for teaching and funding for research. On the teaching side the reform paved the way for performance-based indicators of educational activities, the so-called Taximeter system, but on the research side it remained unclear how to replace student numbers and historical factors as key distribution criteria (Aagaard, 2011).

Despite continued discussions, further changes to the system were not implemented until the mid-1990s, when a minor corrective to the existing model was introduced after a lengthy and conflictual negotiation process. The new corrective took the form of a quantitative formula (known as the 50-40-10 model), which since 1997 has governed the distribution of a marginal share of the core funding based on student activity, external funding and PhD production (Aagaard, 2011; Schneider & Aagaard, 2012). Until 2010 this 50-40-10 model functioned on an ad hoc basis, with significant year-to-year variations in the amount of money distributed. Hence, the universities did not know in advance how much funding would be allocated or how the individual indicators would be weighted. Surprisingly, given the choice of indicators and the lack of transparency, the 50-40-10 model itself has rarely been debated, although substantial amounts of money have been reallocated through this mechanism over the years (Aagaard, 2011).

The process leading to the adoption of the Norwegian model

The political perception that the existing Danish core funding system was functioning inappropriately became even more pronounced after the turn of the century. It was particularly highlighted as problematic that the distribution of core funding between the universities was based on a historically conditioned distribution key, regardless of whether the quality and efficiency of the individual universities was high or low (Regeringen, 2005). It was therefore a central objective of the government to ensure that the core funding for research would be distributed based on “quality” rather than on historical and quantity-oriented parameters, and that this “quality” would be systematically measured and evaluated (Regeringen, 2005). More precisely, the intention was that from 2007 onwards the universities should be assessed on their teaching, research and knowledge dissemination activities. The assessment should be carried out by an international and independent panel and should be made public (Regeringen, 2005). These ideas were formally launched in the Globalisation Strategy presented in 2006, although the planned introduction of a new model was postponed by one year, from 2007 to 2008 (Regeringen, 2006).

As a result of various consultations and internal discussions among policymakers, administrators and stakeholders, it was, however, relatively soon decided to aim for an indicator-based model rather than the proposed panel-based one. Already at this stage, several key actors argued in favor of the Norwegian model as the one that would have the least adverse effects (Aagaard, 2011; Schneider & Aagaard, 2012). Hence, inspiration from the Norwegian model was included early on in the Danish process, but initially only as a limited element in several proposals for much more complex models (VTU, 2007d; VTU, 2008a-d). The complexity of the models was primarily the result of an ambition to cover all the activities of the universities. This meant that a large number of overall indicators were included, some of which in turn had several sub-indicators. Moreover, a number of the proposed indicators were quite controversial, not least those relating to knowledge dissemination activities, where it was difficult to see how the measured parameters could work in practice without creating unintended consequences. In addition to the problem of this high degree of complexity, there was also basic uncertainty about which problem a new model was supposed to solve, how much money it should redistribute, what activities it should cover, and how these activities should be weighted relative to each other. Finally, the use of indicators in the proposed models seemed, in many cases, mainly to reflect what was available and administratively manageable rather than what the political system initially wished to create incentives for (VTU, 2007a-c; Aagaard, 2011).

Regarding the redistribution of funding, it was first proposed that all core funding should be distributed based on a new performance-based model, but during the process the emphasis on this issue shifted from document to document (Aagaard, 2011). The perception was apparently that questions about the design of the model and questions about the amount of funding to be included were independent of each other.

As a consequence of these problems, a number of key stakeholders grew increasingly sceptical during the initial phases of the process. To avoid the approaching deadlock, it was instead suggested, after almost two years of conflict-ridden negotiations, that the universities themselves should come up with an alternative model proposal (Aagaard, 2011). While this process also turned out to be challenging due to significant conflicts of interest between the research-intensive and the teaching-intensive universities (DTU, KU, & AU, 2008; CBS, AAU, & RUC, 2008), the institutions nevertheless managed to reach a compromise proposal, which was presented in spring 2009 (Danske Universiteter, 2009). This proposal subsequently paved the way for the political decision taken on June 30th, 2009, almost four years after the process was initiated (VTU, 2009). The final political agreement was based almost entirely on the proposal of the Danish universities and took the form of an expanded 50-40-10 model, in which the bibliometric research indicator (BFI), inspired by the Norwegian model, came in as an additional element. Where the previous model had three indicators: education (50%), external research funding (40%) and PhD production (10%), the new model had four: education (45%), external research funding (20%), PhD production (10%) and the BFI (25%). The BFI, like the Norwegian model, was based on differentiated publication activity with two levels determined by a large number of field-specific expert groups. Unlike the Norwegian model, however, the BFI also included patents, doctoral dissertations and PhD dissertations (the PhD dissertations were later removed from the model). Finally, as part of the reform it was decided that the indicator should only have funding consequences for the distribution of “additional” core funding.
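To make the mechanics of this expanded formula concrete, the sketch below allocates a hypothetical performance-based pool using the 45/20/10/25 weights described above. The university names and indicator values are invented, and the assumption that each indicator's share of the pool is split in proportion to the universities' shares of that indicator is a simplification for illustration, not a reproduction of the official Danish calculation.

```python
# A minimal sketch of an indicator-based allocation formula with the 2009 weights
# (education 45%, external research funding 20%, PhD production 10%, BFI 25%).
# University names and indicator values are hypothetical, and the proportional
# split per indicator is an assumption, not the official Danish calculation.

WEIGHTS = {"education": 0.45, "external_funding": 0.20, "phd": 0.10, "bfi": 0.25}

# Hypothetical indicator values per university (arbitrary units).
UNIVERSITIES = {
    "Univ A": {"education": 120, "external_funding": 300, "phd": 40, "bfi": 2500},
    "Univ B": {"education": 200, "external_funding": 150, "phd": 25, "bfi": 1400},
    "Univ C": {"education": 80, "external_funding": 90, "phd": 10, "bfi": 600},
}


def allocate(pool_mio_dkk: float) -> dict:
    """Split a performance-based pool across universities, indicator by indicator."""
    allocation = {univ: 0.0 for univ in UNIVERSITIES}
    for indicator, weight in WEIGHTS.items():
        indicator_pool = pool_mio_dkk * weight
        total = sum(values[indicator] for values in UNIVERSITIES.values())
        for univ, values in UNIVERSITIES.items():
            allocation[univ] += indicator_pool * values[indicator] / total
    return allocation


if __name__ == "__main__":
    for univ, amount in allocate(1000.0).items():  # e.g. a pool of 1,000 mio. DKK
        print(f"{univ}: {amount:.1f} mio. DKK")
```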

Differences between the Norwegian and the Danish model

On the one hand, the Danish BFI, which resulted from this protracted and conflictual process, was unambiguously inspired by the Norwegian model. On the other hand, and this is much less recognized in both the public and the scholarly debate, the final Danish model design differs from its Norwegian counterpart on a number of decisive, but partly hidden, points. The two models, which at first glance look largely identical, are thus in practice different in important respects. The process leading to the BFI meant that a number of solutions were chosen which to some extent violate the logic and transparency of the Norwegian model. In the following, it is outlined how the Danish BFI deviates from the Norwegian model in at least four important respects. These relate to: 1) the lack of clear objectives; 2) uncertainty regarding the redistributional effects of the model; 3) the choice of funding neutrality across the main scientific areas; and 4) the uncertainty related to the establishment of the documentation and data quality assurance system.

Lack of clear objectives

A fundamental problem related to both the process and the final result in the Danish case has been the lack of clear objectives for the introduction of the BFI. While the Norwegian model was designed to address specific Norwegian challenges, the purpose and rationale of the Danish model were contested and constantly shifting right from the beginning of the process. In addition, and contributing to this, the preparation of background material and underlying analyses was an incoherent, underprioritized and messy process. This lack of clear objectives and thorough preparation influenced the subsequent process in several ways. Firstly, it was a significant part of the explanation for the highly controversial process of designing and implementing the BFI, which in turn resulted in a lack of legitimacy for the model as a whole. Secondly, it meant that Denmark ended up with a model that neither the political system nor the research community really wished for, and a model which does not seem to address specific Danish challenges. While the Globalisation Strategy highlighted broader societally oriented factors as the most important ones to reward in a new model, agreement on how to measure such factors could not be reached. That the process ended up with a model with a strong emphasis on traditional academic publishing rather than knowledge exchange, collaboration and societal impact thus reflected not a political wish, but a realization that this was the only feasible solution as the process played out (Aagaard, 2011). It is, however, important to emphasize that the model is intended to work not only as an incentive model, but also as an accountability mechanism. From this perspective, the BFI can be characterized as a model that enhances transparency and broader legitimacy towards the public at large in relation to the distribution of taxpayer money.

Lack of clear incentives

A second difference relates to the incentive structures of the two models. This issue is crucial for the design of a model of this type, since the risk of “unintended effects” is closely linked to the degree of redistribution. Where the Norwegian model was designed from the beginning as a marginal redistribution mechanism, this issue was much less clearly articulated in Denmark. As outlined above, this uncertainty characterized the design process, where very different proposals were launched, ranging from massive to marginal redistribution. It has, however, also characterized the process after implementation, where the universities have had little chance of knowing how much money would be redistributed from year to year, as this amount has depended both on the infusion of new funds and on other mechanisms. Hence, the Danish universities do not know the amount in advance, and it may in addition fluctuate significantly from year to year. The Norwegian bibliometric indicator, on the other hand, has consistently redistributed around 2% of the total funding for the university sector each year. There is thus a relatively predictable and marginal redistribution effect, making it possible for universities to navigate in relation to the model while also maintaining financial space to pursue other important objectives. The actual development of the funding effects in the Danish case is outlined in the section on funding implications below.

Main area funding neutrality

As part of the compromise between the Danish universities, it was also decided that the Danish model, in contrast to the Norwegian one, should be neutral in its redistributional effects across the main scientific areas, meaning that funding should only be reallocated within the main areas and not across them (Danish Universities, 2010). This meant that the previous relative distribution of core funding between the main areas should also be the basis for the allocation of funding from the BFI. This choice, however, contradicts the Norwegian model's intention of comparability across disciplines and thus goes against the rationale for using a universal publication indicator instead of, for example, citation indicators within the areas with high coverage in the bibliometric databases. Hansen (2009, 2011) points to a further unintended effect of the main area neutrality: the value of a publication point differs from main area to main area. There is no direct correspondence between the main areas' shares of core funding and their shares of publication points. This means that main areas with a larger share of publication points than of core funding in fact receive a smaller grant per publication point than main areas with a smaller share of publication points than of core funding (Hansen, 2009). This main area neutrality also means that the BFI becomes conservative and static for the university sector as a whole, as the ability to move funding between disciplines disappears. One could argue, however, that this was never the intention of the Norwegian model, either.
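Hansen's point can be illustrated with a stylized calculation. In the sketch below, each main area's BFI pool is locked to a hypothetical historical funding share, while publication points are counted separately; all numbers are invented and serve only to show how the DKK value of a point ends up differing between areas.

```python
# Stylized illustration of the main-area-neutrality effect described by Hansen (2009):
# each main area's BFI pool is locked to its historical share of core funding, so the
# DKK value of one publication point differs between areas. All numbers are hypothetical.

BFI_POOL_MIO_DKK = 500.0  # hypothetical total BFI pool

# (historical core-funding share, publication points) per main area - hypothetical
MAIN_AREAS = {
    "Humanities": (0.15, 12_000),
    "Social sciences": (0.20, 18_000),
    "Natural sciences": (0.40, 30_000),
    "Health sciences": (0.25, 25_000),
}

for area, (funding_share, points) in MAIN_AREAS.items():
    area_pool_dkk = BFI_POOL_MIO_DKK * funding_share * 1_000_000  # pool locked to funding share
    print(f"{area}: {area_pool_dkk / points:,.0f} DKK per publication point")
```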

Documentation system and quality assurance

Finally, in relation to the Norwegian model, it was not only central to create an indicator that could stimulate increased international publication activity and a model which could be applied to the system as a whole. It was also a crucial condition for the work that a unique research documentation system could be created in the same process, whether or not it was part of a redistribution mechanism. The main objective here was to ensure a high degree of transparency in relation to the sector's academic output. However, the construction of a reliable documentation system and the establishment of quality assurance mechanisms for publication data did not receive the same attention in Denmark. Such objectives were not given any particular weight during the Danish process and were not highlighted as a major reason for the introduction of the model. Hence, while data harvesting and the calculation of points have indeed been implemented, this has not worked without problems, and it has never become a truly well-functioning documentation system with transparency and systematic quality assurance of data (Schneider & Aagaard, 2012).

Funding implications

As outlined above, the amount of funding redistributed through the Danish BFI varies from year to year. Where the model's economic reallocation effects in Norway have been well known and relatively stable over time, as mentioned in the previous section, the situation in Denmark has been quite different. It was characteristic of the BFI in its first years that the actual redistribution effects were very modest, but also that the amount was not known to the universities in advance and that there were fluctuations from year to year which could not be foreseen. The latter two points are obviously far from appropriate for an incentive model. In recent years, however, the trend has moved unambiguously towards more and more redistribution, as Table 1 below illustrates.

Table 1. Development in the allocation of core funding (mio. DKK and %).

                                        2010    2011    2012    2013     2014    2015    2016    2017    2018
Core funding (mio. DKK)                 7,905   8,443   8,504   8,592    8,589   8,593   8,526   8,527   8,527
Performance-based share (mio. DKK)      320     594     680     1,045    1,182   1,326   1,480   1,656   2,090
Performance-based share (% of total)    4       7       8       12       14      15      17      19      25
BFI (mio. DKK)                          80      148.5   170     261.25   295.5   331.5   370     414     522.5
BFI (% of total)                        1       2       2       3        3       4       4       5       6

Source: Aagaard 2016

In 2010, the BFI only redistributed DKK 30 million for the sector as a whole. In 2011, this amount had increased to about DKK 75–80 million. Considering that these funds would otherwise have been distributed among the universities according to the old 50-40-10 model, the actual redistribution on the basis of publication points was almost negligible. This has, however, changed quite drastically in the most recent years.

As presented above, the Danish BFI is part of a broader performance-based mechanism. As shown in the table above, no less than a quarter of all core funding in 2018 is distributed according to this mechanism, and within this the BFI has a weight of 25%. This means that approximately 6% of the total amount of core funding in 2018 will be distributed on the basis of BFI points. By comparison, the corresponding share in the Norwegian system from 2017 is only approximately 1.6%. For the Danish system this figure was 1% in 2010 and 2% in 2011 and 2012, while the 2018 share is thus six times as high as in 2010 and almost four times as high as the current Norwegian level. The increase is driven by the fact that the already allocated basic research funds are reduced annually by 2%, with the reduction then reinvested in the university sector via the performance model.
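The approximately 6% figure follows directly from the numbers reported above; the short check below reproduces it from the 2018 column of Table 1.

```python
# Quick check of the 2018 shares discussed above, using the figures from Table 1.
core_funding_2018 = 8_527.0       # mio. DKK
performance_based_2018 = 2_090.0  # mio. DKK
bfi_weight = 0.25                 # the BFI's weight within the performance-based share

bfi_2018 = performance_based_2018 * bfi_weight  # 522.5 mio. DKK, matching Table 1
print(f"Performance-based share of core funding: {performance_based_2018 / core_funding_2018:.1%}")
print(f"BFI share of core funding: {bfi_2018 / core_funding_2018:.1%}")  # approx. 6%
```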

However, it is characteristic that this increasing redistribution has occurred without any particular public discussion of its possible consequences. Given that our actual knowledge about the real effects of the model is very limited, such discussions seem warranted. This is not least the case at a time of growing focus on the importance of both internal and external incentives for scientific misconduct and the spread of detrimental research practices. Viewed from this perspective, increased weight on PBRFS might amplify unintended dynamics in the science system when combined with a stagnant or declining total research funding level, very low success rates in all research councils and foundations, a high level of competition for positions from the postdoc level and beyond, and a general performance-based assessment and reward culture. All else being equal, we must expect that the greater the proportion of funding allocated through this type of mechanism, the greater the risk that the incentives will have inappropriate behavioral consequences at both the institutional and the individual level (Aagaard, 2016). As we will return to in the next section, it is, however, not justified to single out the BFI as the sole driver of such unintended developments.

Experiences and effects

This article has identified and discussed the adoption and translation of the Norwegian model in a Danish context. In doing so, it has underlined the general observation that the development and use of PBRF systems are complicated and contested affairs. There seem to be no examples of national models that have functioned unproblematically and unchanged over longer periods of time (Aagaard, 2011; Hicks, 2012). In this context, the Norwegian model in fact stands out as one of the most well-designed, stable and least problematic of the known PBRFS. From this perspective, and with the possibility of incorporating Norwegian experiences in national processes, one could imagine that the adoption of the Norwegian model in other countries would be a fairly straightforward exercise. The design and implementation of the Danish BFI is, however, a reminder that it is seldom easy to transfer models from one national policy context to another.

Most importantly, the article has highlighted a number of crucial factors that relate both to the Danish process and to the final Danish result, underscoring that the Danish BFI is indeed a quite different system from its Norwegian counterpart. One consequence of these process and design differences is that the broader legitimacy of the Danish BFI today appears to be quite poor. The reasons for this lack of legitimacy can most likely be found in the following factors: 1) the preparation and the design and implementation process were not handled well by the central authorities; 2) the objectives of introducing such a Danish model have been unclear and shifting throughout the process, and there has been limited willingness among stakeholders to take ownership of the model; and 3) there has been a general lack of communication throughout the implementation process and an apparent underestimation of the challenges associated with the use of bibliometric indicators.

The use of a publication-based indicator such as the Danish BFI may still be defended, but if so, the defence should rest on a number of arguments that have been almost absent from the Danish debate so far. For example, one could point to some of the potential positive effects of the Norwegian model, such as the possibility of creating increased general awareness of publishing behavior at all levels and in all areas, the availability of a significantly improved national publication database, and not least the greater visibility given to the academic production of the humanities and the social sciences.

The future of the Danish BFI

The publish-or-perish phenomenon is by no means new, but there are indications that researchers today perceive a stronger pressure than previously, although such publication pressure may differ depending on career stage, field and so on. Rather than seeing systems such as the Danish BFI as the main cause of this pressure, however, it is probably more reasonable to perceive the model as a symptom of stronger underlying dynamics. It therefore appears both right and wrong when critics express concern that the incentive structure of the BFI alone leads to inappropriate behavioral changes in relation to the values and norms that apply to good scientific work and to the versatile tasks that the universities are generally expected to perform in Danish society. Such general concerns appear justified on the one hand, but on the other hand, the pressures can hardly be attributed to the BFI alone. Thus, the problem will hardly be solved by simply abandoning the indicator model. At the time of writing, however, the future of the BFI is highly uncertain. An expert committee has been commissioned to come up with new proposals, but so far no clear alternatives to the Norwegian model have materialized.
