Fostering the data welfare state: A Nordic perspective on datafication

Introduction

The Covid-19 pandemic has accelerated the drive to large-scale digitalisation, which has been underway for decades. Throughout the global emergency, digital tools for everything from health to education have been rapidly introduced, replacing physical meetings and supporting social distancing measures. As a consequence of this “digitalisation on steroids”, an increasing number of aspects of our identities, practices, and societal structures have been transformed into data. From 2018 to 2021, we led a Nordic exploratory research network on “Datafication, Data Inequalities and Data Justice”, under NordForsk NOS-HS (Joint Committee for Nordic Research Councils for the Humanities and the Social Sciences), which aimed to outline the specifics of datafication in the Nordic countries. In this article, we ask whether we are witnessing the emergence of a data welfare state in the Nordic countries, and if so, how it might be characterised.

Through the analysis of empirical examples in three important areas of the welfare state – namely automated decision-making within employment services, data-driven methods within public service media, and the digitalisation of the corrections sector – we aim to draw attention to some of the problems inherent in increased datafication in the Nordic welfare states, including the blurring of the public and private sectors, lack of transparency, lack of diversity, and bias in data. We conclude with a call to develop specific measures and policies to enable the development of the data welfare state.

Any quotations and excerpts cited in this article not originally in English have been translated by us.

A Nordic version of datafication?

In the digitalised world, vast amounts of data are gathered automatically from our everyday activities, including shopping, travel, media consumption, and engagement with social media. We are living in an age of “infoglut” and “datafication” (Andrejevic, 2013; van Dijck, 2014), in which our feelings, identities, and affiliations are tracked and analysed. Datafication refers to the process of using data – mainly from digital environments – to understand sociality and social behaviour. Our increasing ability to generate and make sense of ever-larger quantities of data has also been described as the “industrial revolution of data” (Milan & van der Velden, 2016).

While the digital refers to numbers, and data that can be communicated via numbers, datafication describes the process whereby numbers are turned into datasets that, collectively, provide information about behaviour. van Dijck (2014: 198) describes datafication as a “transformation of social action into online quantified data”; Andrejevic (2020) frames it as an automation of subjectivity and knowledge; and Couldry and Mejias (2019: 1) call it “a quantification of the social”. While many scholars laud datafication as a new field of social science, authors like van Dijck, Andrejevic, and Couldry and Mejias offer more critical views. Specifically, they underline that datafication is supported by the ideology that data can provide more accurate and nuanced information about human behaviour, and therefore can define and predict behaviours. van Dijck (2014) calls this ideological assumption dataism. Central to dataism is the belief that data is neutral, quantifications are objective, and there is “a self-evident relationship between data and people, subsequently interpreting aggregated data to predict individual behaviour” (van Dijck, 2014: 199). Metcalfe and Dencik (2019) note that this (ideological) understanding of a correlation between data and behaviour stokes the belief that data can effectively predict future activities, consumption, health, and risk, and thereby defines prediction as the primary goal of data collection.

In this context, the Nordic welfare states are characterised by a high degree of public trust in institutions, underscoring citizens’ general acceptance of a very high level of data collection. In the Nordic countries, a large amount of data is already available on all citizens – from newborns to seniors. Specifically, information on health, education, employment, tax, crime, and other matters is linked to individuals via their CPR number (Denmark), personal number (Sweden), and social security number (Finland), registering and documenting their engagement with both public and private sectors (Ustek-Spilda & Alastalo, 2020). While the data are not cross-referenced – as it is (currently) illegal for one institution (e.g., a health authority) to share data with another (e.g., the police) – the vast and growing mass of data holds promise for large-scale digitalisation.

Currently, the Nordic countries are eager to use digitalisation to streamline public administration and the provision of welfare. This involves not only digitising physical files, but also digitally automating decision-making. In the following, we illustrate how the ideology of dataism is appropriated in different datafication initiatives related to the Nordic welfare states. As Alfter (2020) notes, Denmark aims to become a global leader in digitalisation; to achieve this end, it intends to incorporate digitalisation into all new legislation. In 2019, the Danish Agency for Digitalisation [Digitaliseringsstyrelsen], within the Ministry of Finance, launched a national strategy for artificial intelligence (AI) [National strategi for kunstig intelligens] with the following aims: “The public sector will use artificial intelligence to offer world-class service [and] artificial intelligence [will be used] to support a faster and more efficient handling of cases” (The Danish Government, 2019: 10). The Danish strategy, aiming to “[lead] in Europe in the implementation of data and artificial intelligence to improve and target the public service”, is built on optimism and hope (The Danish Government, 2019: 10). The strategy’s stated benefits of AI include: “more personal treatment” of citizens; better support for citizens’ cases; higher quality administration of resources; faster and more accurate diagnosing; more efficient and effective administration; and effective systems to fight tax fraud and social benefits fraud (The Danish Government, 2019: 11). In other words, the strategy aims to improve the quality and efficiency (i.e., cost) of public services.

Similarly, in 2017, Finland launched a national AI strategy, the AuroraAI programme, to improve public services and competitiveness. AuroraAI aims to combine all public organisations under one network, facilitating interaction and data exchange between services and platforms. According to the national AI strategy, Finland is in an excellent position to produce “the world's best services in the age of artificial intelligence” (Ministry of Economic Affairs and Employment, Finland, 2017: 14). The goals of AuroraAI are similar to the ones described in the Danish strategy: the programme will provide “smoothly running daily life” as it automatically interconnects different services, breaks down silos in the service sector, and promotes cost-efficiency. The vision entails that AI operates smoothly and efficiently across the whole public service sector (Ministry of Finance, Finland, 2021).

Like Denmark and Finland, Sweden has set an ambitious agenda to become world-leading in AI development and use (Government Offices of Sweden, 2018). The National Approach to Artificial Intelligence, published in 2018, proclaims in the introduction that “Sweden aims to be the world leader in harnessing the opportunities offered by digital transformation. By international standards, Sweden is in the vanguard” (Government Offices of Sweden, 2018: 4). At the same time, living up to these expectations and maintaining this leading position will require considerable resources and effort. The Swedish approach includes the idea that “there is a great potential in the public sector to develop activities and public services in the citizens’ interest with the help of AI. It is therefore in Sweden's interest to stimulate innovative applications and use of AI in society in various ways” (Government Offices of Sweden, 2018: 8).

For all three countries, these stated aims are quite telling of the ways in which dataism is frequently entangled with the ideals of efficiency and improved decision-making. One possible basis for the agencies’ optimism towards the potential for AI to improve public services is their (naïve) understanding of algorithms as neutral and objective, mirroring dataism's ideology of data as neutral and objective (van Dijck, 2014). As illustratively phrased in the Danish strategy, algorithms are believed to safeguard justice: “The algorithms will secure equal treatment by being objective, unbiased and independent from personal conditions” (The Danish Government, 2019: 7).

However, technological designs, including algorithms, are not neutral. Critical design theory (e.g., Drucker, 2011; Kannabiran & Petersen, 2010; Sun & Hart-Davidson, 2014), critical data studies (e.g., Andreassen, 2020; Eubanks, 2017; Iliadis & Russo, 2016; Noble, 2018; van Dijck, 2014; van Dijck et al., 2018), and data justice scholars (e.g., Andreassen, 2021; Dencik et al., 2018; Metcalfe & Dencik, 2019) argue that design and programming are always intertwined with values and ideologies. Context, norms, and values not only influence designs, but also the affordances and algorithms that go into – and constitute – those designs (see Buolamwini & Gebru, 2018; Costanza-Chock, 2020). In their analysis of design biases, Kofoed-Hansen and Søndergaard (2017) describe how a designer's wish to improve existing conditions is always influenced by the ideological trends of their contemporaries, even when the designer is not conscious of these trends. Having outlined the specific imaginaries connected with datafication in the Nordic countries, we now move to consider the concept of the Nordic data welfare state.

From the media welfare state to the data welfare state?

The extent to which scholars should contextualise processes of digitalisation and datafication (which are often described in universal terms, following the logic of large, global corporations) remains an unanswered question. Nonetheless, in this article, we enquire into the specificities of the Nordic welfare states, highlighting the legal frameworks and historical trajectories of institutional trust that must be considered in any exploration of datafication in the Nordics. In order to do so, we develop the notion of the data welfare state, drawing on media scholars Syvertsen and colleagues’ (2014) conception of the media welfare state. The media welfare state refers to a special model of media systems in the Nordic countries, with four pillars: 1) universal access to information through communication systems such as postal services, telecommunication networks, and printed and audiovisual media; 2) editorial freedom, referring to a range of measures used to safeguard editorial independence from state interference; 3) content diversity, with extensive cultural policies seeking to ensure the provision of alternative (domestic and minority) content, while diminishing the influence of global market forces; and 4) durable and consensual policy solutions. Scholars have debated whether the media welfare state should be considered an ideal or a reality, and whether political and social changes have instead led to the emergence of a neoliberal media welfare state (Jakobsson et al., 2021). Furthermore, in increasingly datafied Nordic societies, it is reasonable to ask whether the pillars of the media welfare state are changing or need to be adapted. Accordingly, instead of the media welfare state, we are interested in how the contours of a data welfare state might look. Importantly, the media welfare state consolidates democracy and trust in the state. If a data welfare state were to do the same, the four pillars outlined by Syvertsen and colleagues (2014) would require adaptation: 1) justice and non-bias in processes of datafication; 2) decommodification, that is, freedom from commercial logic; 3) data diversity, acknowledging the different needs of citizens and residents; and 4) transparency in the datafication process, providing sustainable and meaningful information for citizens and residents (see Table 1).

Table 1 Adaptation of the four pillars of the media welfare state to the data welfare state

Media welfare state                          Data welfare state
universal access to information              justice and non-bias in processes of datafication
editorial freedom                            decommodification
content diversity                            data diversity
durable and consensual policy solutions      transparency and sustainability

Comments: The pillars of the media welfare state were formulated by Syvertsen and colleagues (2014).

Coming from a slightly different viewpoint, with a capabilities approach in mind, Taylor (2017) has introduced three pillars of data justice that partly coincide with the four pillars above. Drawing on theorisations of data justice (Dencik et al., 2018; Heeks & Renken, 2016; Johnson, 2014), Taylor's three pillars address (in)visibility, digital (dis)engagement, and countering data-driven discrimination. Although framed differently, these point to similar issues as our data welfare state framework: visibility refers to access to representation and diversity; engagement indicates autonomy in technological choices, covering issues of decommodification and transparency; and countering discrimination is compatible with seeking non-bias. While Taylor offers her framework in a broad global setting, our approach is grounded specifically in the context of the Nordic welfare state. In what follows, we look more closely at different areas of society in three Nordic countries to discuss how the pillars of the data welfare state are emerging or are being challenged.

Data-driven welfare in the Nordic welfare states: Danish unemployment, Finnish public service media, and Swedish smart corrections

As the Nordic AI strategies above illustrate, the welfare state is considered an important site for datafication and algorithmic automated decision-making (ADM) systems. Below, we discuss three Nordic projects that illustrate how the ideology of dataism can become entangled with welfare provision. Although there are important differences between the three Nordic countries considered here, they are each considered representative of the social-democratic welfare regime, according to Esping-Andersen's (1990) seminal classification. Social-democratic welfare states are characterised by the principles of universalism and decommodified social rights, as well as the promotion of equality and high living standards for all. In the following, we draw on three examples to analyse the status of the Nordic data welfare state. Based mainly on documentary analysis, as well as background interviews, we first discuss ADM in the welfare sector, specifically employment services; second, we engage with the datafication of public service media; and third, we explore the datafication and subsequent automation of the corrections sector.

Automated decision-making in the Danish employment services

A relatively new Danish labour and unemployment law [Beskæftigelsesindsats], passed in 2019, involves an ADM system that uses data profiling to assist state social workers in their efforts to find work for unemployed citizens. Although the system is not explicitly labelled ADM or AI, but rather “a national digital tool for clarification and dialogue” (Retsinformation, 2019: §8.2), the tool nevertheless represents an attempt to digitally assess job-seekers and predict their success on the job market. The tool is based on an ADM pilot project that ran from 2016–2018 (Motzfeldt, 2019). While it is not clear whether the current tool is identical to that of the pilot project, or whether it represents a new and updated version, scholars agree that the pilot project served as the primary source for this new “ADM unemployment system” (Andersen, 2019; Motzfeldt, 2019).

The goal of the pilot project was to identify unemployed citizens at risk of long-term unemployment via a “profile clarification tool” [afklaringsværktøj]. This tool operated in two stages: In the first stage, unemployed citizens filled out a survey indicating their own evaluation of their situation – including their expectations of when they would find employment, as well as descriptions of their personal situation where relevant to their search for work. In the second stage, ADM was used to evaluate each job-seeker based on a number of predefined categories, gathered from different institutions and public platforms (relating to, e.g., education, work experience, age, ethnicity, and welfare benefit history). These categories were defined as “objective” and “observable”, while participants’ own evaluations were considered “subjective” (The Danish Parliament, 2019: 212; Mploy, 2018: 7). The combination of data across both stages generated a score, indicating whether the job-seeker was at risk of long-term unemployment. According to this score, the job-seeker was assigned services from a local job centre.
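
To make the two-stage logic concrete, the following is a minimal, purely illustrative sketch of how such a scoring tool might combine “subjective” survey answers with “objective” register categories. All feature names, weights, and the cut-off are our own assumptions; the actual model and calculation behind the Danish tool have not been made public.

```python
# Hypothetical sketch of a two-stage profiling score; not the actual Danish tool.
from dataclasses import dataclass

@dataclass
class JobSeeker:
    # Stage 1: the citizen's own ("subjective") survey answer
    expects_job_within_3_months: bool
    # Stage 2: register-based ("objective") categories
    age: int
    years_of_education: int
    months_on_benefits_last_5_years: int

def risk_score(seeker: JobSeeker) -> float:
    """Combine survey and register data into a single risk score in [0, 1]."""
    score = 0.0 if seeker.expects_job_within_3_months else 0.2
    score += 0.2 if seeker.age > 55 else 0.0
    score += 0.2 if seeker.years_of_education < 12 else 0.0
    # Benefit history dominates: a group-level proxy, not an individual fact
    score += min(seeker.months_on_benefits_last_5_years / 60, 1.0) * 0.4
    return score

def flag_long_term_risk(seeker: JobSeeker) -> bool:
    # A fixed cut-off turns the continuous score into the binary flag that
    # determines which job-centre services the citizen is assigned
    return risk_score(seeker) >= 0.5
```

Note how every weight in such a design is attached to a category rather than to the person: two job-seekers with identical personal histories but different register categories would receive different scores, which is precisely the dynamic Eubanks (2017) criticises below.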

In her book Automating Inequality, Eubanks (2017) discusses the consequences of grounding social welfare policies on data prediction, on the basis of her analyses of ADM-driven social services in the US. Echoing van Dijck's (2014) conceptualisation of dataism, Eubanks criticises the assumed relationship between large population data and individual behaviour, and she describes how algorithms aimed at predicting individual needs are designed according to social group. As a result, scores used to predict individual needs are not based on individuals and their personal histories, but rather on the data mining of (pre-defined) social groups related to ethnicity, gender, civil status, neighbourhood, and other demographic factors. In other words, individuals are not impacted by their own actions or personal situations, but by the previous actions of “members” of their “categorical belonging”. While Danish law defines social categories (e.g., age, ethnicity, citizenship status, education, etc.) as “objective” (The Danish Parliament, 2019), Eubanks demonstrates how the history of poverty and ideas of marginalised groups as less valuable citizens play into the design of algorithm-based social policies; this is especially clear in the design of the social categories used to predict individual risks or needs, and thus the level of assistance granted by the state. She argues that, rather than facilitating new and neutral social policies, “digital tools are embedded in old systems of power and privileges” (Eubanks, 2017: 178). Such embeddedness might explain the mixed results of the Danish unemployment services pilot project: Of the sixteen municipalities that participated, nine found the ADM screening system to be effective in identifying those at risk of long-term unemployment (Mploy, 2018); the remaining seven municipalities experienced the ADM system as stigmatising, and thus undesirable for facilitating dialogue with unemployed citizens (Mploy, 2018: 32). These ambiguous results indicate a challenge in fulfilling the first pillar (justice and non-bias in processes of datafication) of the data welfare state.

All of the participating municipalities in the pilot project concluded that the ADM tool could not determine the risk of long-term unemployment independently; rather, output from the tool needed to be interpreted alongside social workers’ professional judgements. In other words, in order to maintain a transparent and sustainable data welfare state (the fourth pillar), as well as a just and non-biased data welfare state (the first pillar), the ADM tool needs to be continuously combined with professional judgements provided by “real people”. Importantly, some job centres pointed to situations in which the ADM disagreed with social workers’ evaluations (Mploy, 2018). In these cases, the social workers found it problematic that they could not locate the basis for the discrepancy; that is to say, they could not see the calculation behind the ADM algorithm. This points to the “black box” of ADM programming, whereby users do not understand or see the programming that feeds into a particular score or prediction. While we, as consumers, are used to ADM recommendations that do not fit our desires (e.g., suggestions from streaming services that do not capture our interest) (Motzfeldt, 2019), the same mismatching can have severe consequences when incorporated into social services. As consumers, we can choose not to accept ADM recommendations; but as citizens in a social system, we have fewer options to reject ADM-determined suggestions or solutions. The lack of transparency and sustainability (the fourth pillar) in the data welfare state is therefore much more critical when considering the determination of welfare provisions than it is in the provision of entertainment options.

Another reason why the ADM prediction differed from social workers’ evaluations in some instances might be that, in line with the underlying assumptions of dataism, the ADM tool equated correlations with causality (Cheney-Lippold, 2017). In the pilot project, knowledge about unemployed citizens was reduced to correlations, whereas knowledge held by social workers was based on an observed understanding of causal relationships between various social categories (Antczak & Birkholm, 2019). Accordingly, we argue that the term “artificial intelligence” should be replaced with that of “automated decision-making”. While AI and machine learning have become the assumed hallmarks for a prosperous future, we question the assumed intelligence of datafication and stress that AI is best characterised as a future imaginary.
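
The distinction between correlation and causation is easy to state but worth making concrete. The following self-contained sketch – our own illustration, not drawn from the pilot project – shows how a hidden confounder can make a register variable correlate strongly with an outcome even though intervening on that variable would change nothing:

```python
# Minimal demonstration that strong correlation need not imply causation:
# a confounder drives both the "predictor" and the outcome, so the two
# correlate highly without any causal link between them.
import random

random.seed(1)

rows = []
for _ in range(10_000):
    confounder = random.random()                     # e.g., local labour-market conditions
    predictor = confounder + random.gauss(0, 0.1)    # a register category
    outcome = confounder + random.gauss(0, 0.1)      # long-term unemployment risk
    rows.append((predictor, outcome))

def pearson(pairs):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

# Prints a correlation around 0.9, yet changing the predictor would change
# nothing about the outcome: only the confounder is causal.
print(f"correlation: {pearson(rows):.2f}")
```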

Motzfeldt (2019) warns that the use of ADM tools in the welfare state will most likely be expensive. While such systems are often initiated to save time and replace (salary-demanding) human staff, the pilot project described above underlines that ADM cannot be a complete substitute for human staff, and it is therefore likely to add cost on top of existing salaries. Importantly, experiences with ADM-driven social services point to how professional staff tend to prioritise the ADM score over their own judgement (Antczak & Birkholm, 2019; Eubanks, 2017; see also Oak, 2015); this risk seems heightened in contexts where resources are few, and there is little time to determine the reason behind the discrepancy between an ADM score and one's own evaluation.

The datafication of public service media

Public service media (PSM) represent an important component of the welfare state. PSM are largely financed through public funding, with the remit to serve the public interest. They follow and express the ideals of the welfare state – namely equality and universality – in terms of access to information and representation. Changes in media logics related to datafication are aptly captured by Andrejevic (2020), who describes a shift from mass media to automated media. Automated media operate through ADM, platforms, and utilisation of data, with a variety of consequences for mutual recognition, collective deliberation, and judgement. At their core, automated media infrastructures based on datafication propel and shape media logics and content.

The datafication of media can be understood as a “political economic regime”, whereby the accumulation and analysis of data determines new ways of doing business and governance (Sadowski, 2019). This has several consequences for media operation, including the “platformisation” (Gillespie, 2010; Helmond, 2015) of the media environment, dominated by commercial platforms such as Google, YouTube, Twitter, Facebook, and Instagram. According to van Dijck and colleagues (2018), a platform is fuelled by data, automated and organised by algorithms and interfaces, formalised through ownership, and governed through user agreements. Platformisation is but one example of how communication and sociality have become formatted to enable datafication. As Andrejevic (2020) argues, automated systems promise to augment or displace the human role in communication, information processing, and decision-making. The implication of this is that mental labour, thought processes, evaluation, and judgement must be standardised and formatted, similar to how physical labour in factories was replaced by machines.

As a result of their datafied infrastructure, media now operate in drastically different ways, using detailed audience data to customise services. While data-driven media promise more accurate services and efficient operations, they are also highly problematic in terms of the normalisation and capitalisation of surveillance and the de-skilling of comprehension (Turow, 2011; Zuboff, 2019). Additionally, the datafication of media endangers a fundamental role of media in society: to foster solidarity and civic mindedness (Andrejevic, 2020; Couldry, 2012; Nikunen, 2019). In what follows, we discuss how the datafication of the media environment affects the principles and pillars of the Nordic data welfare state.

Nordic PSM have traditionally been a strong and important part of the so-called media welfare state (Syvertsen et al., 2014). Nordic PSM prided themselves on their early move towards digitalisation; however, they now find themselves surrounded by commercial platforms owned by tech companies. Though sharing PSM content on social media platforms appears necessary to reach audiences – particularly young audiences (Andersen & Sundet, 2019) – Nordic PSM do so reluctantly, as a side effect of this sharing is the involuntary support of commercial platforms, which blurs the boundaries between the public and private sectors and constitutes a threat to the second pillar (decommodification). For this reason, and to maintain ownership and control, some Nordic PSM have adapted to datafication by creating their own platforms, collecting data, and using customised services (Hokka, 2018; Moe, 2013).

Most Nordic PSM employ audience metrics, user profiling, or algorithms to create content, customise distribution channels, and generate recommendation systems. In line with their early adoption of digital technologies, YLE (Finland), DR (Denmark), SVT (Sweden), and NRK (Norway) considered data-driven personalisation services highly relevant to their strategy (Andersson Schwarz, 2016; Van den Bulck & Moe, 2018). But how does data-driven personalisation affect the non-bias of media content – in other words, the universal principle of PSM? PSM justify the use of data-driven customised services as a “new universality”. However, personalised recommendation systems may also erode universalism and the idea of a shared public sphere (the first pillar of the media welfare state) by categorising and profiling audiences. In particular, recommendation algorithms allow audiences to filter out content and viewpoints they are not interested in, thereby creating “filter bubbles” (Pariser, 2011). The resulting customised – and therefore fragmented – media environment erodes the experience of an imagined community (Anderson, 1983), which is central to solidarity, trust, and civic mindedness. In other words, the infrastructures and architectures of digital media emphasise individual tastes and personalised brands over collectiveness and community (the first and third pillars of the media welfare state).

To counter these polarising tendencies, PSM have integrated “public service algorithms” into their recommendation processes (Beckett, 2020; Bennett, 2018; Nikunen & Hokka, 2020). For example, YLE uses algorithms that seek to diversify, rather than polarise, media use (Nikunen & Hokka, 2020).
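
What might such a diversifying algorithm look like in practice? The following sketch is a generic diversity-aware re-ranker of our own construction; the scoring, quota, and function names are assumptions for illustration, not a description of YLE's actual system.

```python
# Illustrative diversity-aware re-ranking: instead of ordering purely by
# predicted engagement, reserve a share of slots for topics the user has
# not engaged with before. All parameters here are invented for the sketch.

def recommend(candidates, user_history_topics, n=10, diversity_quota=0.3):
    """candidates: list of (item_id, topic, predicted_engagement) tuples."""
    familiar, unfamiliar = [], []
    for item in candidates:
        (familiar if item[1] in user_history_topics else unfamiliar).append(item)
    # Rank both pools by predicted engagement, highest first
    familiar.sort(key=lambda item: item[2], reverse=True)
    unfamiliar.sort(key=lambda item: item[2], reverse=True)
    n_diverse = int(n * diversity_quota)  # slots reserved for unfamiliar topics
    return unfamiliar[:n_diverse] + familiar[:n - n_diverse]

items = [("a1", "news", 0.9), ("a2", "news", 0.8), ("a3", "culture", 0.7),
         ("a4", "sports", 0.6), ("a5", "science", 0.5)]
# Half of the four slots go to topics outside the user's "news" bubble
print(recommend(items, user_history_topics={"news"}, n=4, diversity_quota=0.5))
```

The design choice lies in the quota: a purely engagement-ranked list would reinforce the filter bubble, whereas the reserved slots guarantee exposure to content the user's profile alone would never surface.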

There are some indications that data-driven personalisation can also be used to better foster audience diversity. Through personalisation, PSM have been able to recognise new audiences (Hokka, 2018) instead of addressing the abstract average citizen, who is unlikely to belong to a minority; however, data-driven personalisation may also create new forms of marginalisation and discrimination (Mann & Matzner, 2019). Commercial platforms such as Netflix have found a global niche in audiences who are interested in content produced by or representing ethnic and racial minority groups, as well as gendered and sexual minorities. Nationally, these audiences may be small; considered globally, they are substantial. While catering to these audiences is important, the fact that it is driven by commercial interests, rather than by public values, tends to put emphasis on sensational and trendy elements that further marginalise those already in the margins (Saha, 2018). These challenges exemplify the entangled nature of the first and the third pillars of the data welfare state in the context of media, where the question remains of how to ensure non-bias in data-driven systems while taking into account the different needs and aspects of a diverse society.

Andrejevic (2020: 49) suggests that, instead of focusing on how algorithms affect content discovery on media platforms, we should attend to the ways in which the “combination of platform logics and communicative practices with broader social policies undermines the conditions for democratic deliberation” (see also Baum & Groeling, 2008; Campbell, 2018; Pariser, 2011). For many, such a consideration culminates in a vision of the public sphere guided by commercial profit rather than public values. Indeed, datafication has intensified the entanglement of public and private media infrastructures, which takes us to the data welfare state's second pillar of decommodification: The domination of global platforms has given rise to a media ecosystem in which virtually all media platforms are dependent on the infrastructural services of global tech giants (van Dijck et al., 2018). As this applies to PSM, the entanglement with data-driven media platforms risks undermining the public's trust. This entanglement also constitutes a risk to sustainability (the fourth pillar) – as management of and responsibility for the platform lie outside the control of individual countries – and to data diversity (the third pillar), as the data (and control over data) are gathered on very few platforms. While many European PSM allow for some private funding and advertising on their websites, Nordic PSM have tried to maintain independence (Sørensen & Van den Bulck, 2020). As discussed, some have developed their own platforms (YLE Areena in Finland, SVT Play in Sweden, NRK TV in Norway, and DR TV in Denmark); however, to be discovered by potential users, they must also appear on set-top box interfaces and third-party software (e.g., Apple TV, PlayStation, ElisaViihde [Finland], Strim [Norway], Tv Hub [Sweden], and YouSee [Denmark]). Whenever PSM content and platforms become embedded in these interfaces, they must submit to their hosts’ datafied logics, marketing, and commercialisation; this challenges the second pillar (decommodification) of the data welfare state.

As very little is known about the data traffic in these contexts, transparency (the fourth pillar) is seriously challenged. Moreover, some Finnish, Swedish, and Danish PSM are connected to third-party servers that collect user data for advertising or data management purposes (Sørensen & Van den Bulck, 2020). While there is no evidence that audience data is gathered by these servers, it is nonetheless clear that PSM are significantly integrated into digital business networks, and it is highly problematic for public trust if PSM user data is tracked and sold to third parties. In addition, data gathering on PSM remains modest compared with that of the tech giants; thus, PSM must seek new collaborations to access more data. These collaborations with commercial data-driven platforms further blur the line between public and private media, directly affecting the second pillar (decommodification) of the data welfare state.

In Sweden, public service radio has integrated the commercial music streaming service Spotify – a phenomenon referred to as the “Spotification” of public service media (Burkart & Leijonhufvud, 2019). Furthermore, some PSM (e.g., NRK) co-produce content with Netflix (Sundet, 2017). While datafication is embraced in PSM news services as an efficient way to reach and serve audiences, it is simultaneously seen as highly problematic for the PSM principles of universalism and equality (the first pillar). Growing dependence on and entanglement with commercial digital platforms generates uncertainty over data practices and trust. In recent years, Nordic PSM have paid more attention to the transparency of their data practices and the independence of their platforms. This reflects their strong investment in data-driven technologies and the challenges that these systems entail for PSM. As illustrated by the ADM unemployment services case, algorithmic operations represent trade secrets, and data collection practices are often hidden from the public. To serve public values, PSM must maintain transparency (the fourth pillar) in their operations and ethics with regard to their data practices.

PSM thus experience various challenges in upholding the four pillars of the data welfare state. The second pillar, decommodification, appears particularly difficult to foster in the current platformed media ecosystem. The effects of blurring the boundaries between the public and private sectors are illustrated by the ways in which PSM have adopted the logics of commercial platforms, with practices that challenge the non-bias and diversity principles (the first and third pillars), and undermine sustainability and transparency (the fourth pillar) through increasing dependency on powerful platforms.

The smart prison: Datafication of corrections

In the Nordic setting, corrections are part of the public sector – aiming to rehabilitate and resocialise individuals – and are often viewed as an expression of Scandinavian exceptionalism (Pratt, 2008; Pratt & Eriksson, 2012). Similar to other areas of the welfare state, the corrections sector is increasingly implementing digital technology in order to become more efficient, particularly with respect to decision-making. In the Swedish context, the aim to “smartify” corrections – in part, through datafication – has materialised in the Krim:Tech initiative, which the Swedish Prison and Probation Service [Kriminalvården] launched in 2018. The main aim of Krim:Tech is to gather and recruit technology developers to renew and digitalise work with incarcerated individuals (Kaun & Stiernstedt, 2020). The Swedish Prison and Probation Service describes this in the following terms:

Krim:Tech is the new digitalisation initiative by The Swedish Prison and Probation Service. With the help of the latest technology and research, the initiative will support the development of new and improved digital solutions within the authority. Krim:Tech is an inventor's workshop and test bed for digital technology. Does an ankle monitor actually have to be an ankle monitor or could it be something else instead? How can we use IT to keep our security class 1 facilities calm? How can we prevent children and families who are visiting their father or mother in the prison from becoming afraid? Can we do security scans with a toy instead of metal detectors and full body scanners?

(The Swedish Prison and Probation Service, 2018)

This description of Krim:Tech reinforces the idea of renewing the entire organisation – including incarcerated individuals – with the help of smart data-based technology, while simultaneously considering the prison context as a test bed for new technologies (Kaun & Stiernstedt, 2021). This idea of renewal and even reinvention is emphasised in an unpublished policy document, shared with the authors, which carves out a digital agenda for The Swedish Prison and Probation Service. According to this agenda, there are five ambitions for smart technology within the sector: leveraging digital resources to overcome social isolation, preparing prisoners for life in the digital society, increasing efficiency through digital resources, preempting recidivism through digital resources, and striking an appropriate balance between security and the use of digital resources.

Besides the larger visions expressed by Krim:Tech and the digital agenda, there are also specific projects and implementations of digital technology in The Swedish Prison and Probation Service that illustrate particular aspects of datafication. One example, which is used to support probation services, is an application called Utsikt [View, Prospect, or Outlook] (Kaun & Stiernstedt, 2020). This tool, the first of its kind, was developed by Krim:Tech in 2015, with support from the state agency for innovation (Vinnova) (The Swedish Prison and Probation Services, 2015). During a trial period in early 2017, a group of 19 individuals on probation tested the application and provided feedback for improved functionality (The Swedish Prison and Probation Services, 2017). According to the project leader, Lena Lundholm, the application was designed to improve attendance at probation meetings and provide clients with preemptive exercises derived from cognitive training for challenging situations. Accordingly, the application includes breathing exercises for stressful encounters and links to hotlines for support during critical episodes. Furthermore, the tool enables users to track their moods and provides them with different scenarios for problem-solving.

Descriptions of the application emphasise that it is merely meant to prevent recidivism and not to control or supervise clients. Use of the application, which is available as a free download, is voluntary, but it is intended to complement the work of probation officers. The application requires an iPhone 5 or Android version 5 (or a later operating system), as well as an App Store or Google Play account. These preconditions are potentially challenging for some clients, especially those who have recently been released from long prison sentences (Jewkes & Reisdorf, 2016); this lack of access potentially challenges the first pillar of justice and non-bias in the processes of datafication.

Together with a probation officer, users input data on their rehabilitation process and support, as well as control measures, into the application. All data stored in the application are inaccessible to Prison and Probation Service officers. In fact, the only link between The Swedish Prison and Probation Service and the application is the automatic synchronisation of meetings via the calendar function. Beyond this, no data are saved, and it is the responsibility of the users themselves to back up their personal information. However, it is conceivable that, in the future, data stored in the application could be used to predict users’ recidivism or critical moments. This could threaten transparency and sustainability (the fourth pillar) if users do not know how the information they insert into the application is used to predict future risks of recidivism.
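
The data separation described above can be read as a deliberate architectural choice. The sketch below is our own reconstruction of that design under stated assumptions – local-only storage of sensitive entries and a single, narrow synchronisation channel; the actual implementation of Utsikt is not public, and the API names here are hypothetical.

```python
# Hypothetical reconstruction of the data separation described above:
# mood entries never leave the device, while only meeting times flow
# between the app and the authority's calendar. Not Utsikt's actual code.
import json
from datetime import datetime
from pathlib import Path

LOCAL_STORE = Path("mood_log.json")  # device-local file; backup is the user's responsibility

def log_mood(mood: str) -> None:
    """Append a mood entry to local storage; nothing is transmitted."""
    entries = json.loads(LOCAL_STORE.read_text()) if LOCAL_STORE.exists() else []
    entries.append({"time": datetime.now().isoformat(), "mood": mood})
    LOCAL_STORE.write_text(json.dumps(entries))

def sync_meetings(calendar_api) -> list:
    """The single data link to the service: read-only meeting synchronisation.
    calendar_api stands in for whatever interface the authority exposes."""
    return calendar_api.get_meetings()  # hypothetical endpoint
```

Under such a design, the future scenario raised in the paragraph above – using stored entries for recidivism prediction – would require changing the architecture itself, which is exactly why transparency about where data lives matters.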

Concern for the secure treatment of data is made very explicit in the application's promotional material. However, datafication concerns not only the treatment and usage of data, but also the transformation of complex processes, such as rehabilitation, through datafication; in the case of this application, this transformation involves the differentiation of rehabilitation into distinct periods, tasks, and risks to be mitigated, leaving little room for the user or probation officer to navigate. This highlights a similarity with automated decision-making in the employment services and the use of data-driven methods in PSM discussed above, in which human-centred approaches were also sidelined.

Following the principles of a data welfare state, datafication in the corrections sector needs to follow the four pillars of non-bias, decommodification, diversity, and transparency. More specifically, if data-based prison technologies are to avoid compromising the principles of the data welfare state, they would need to strive for nondiscrimination by avoiding specific biases (the first pillar) – for example, by not predicting recidivism based on risk-scoring and historical data, instead allowing for the rehabilitation and social mobility of incarcerated individuals. Additionally, datafied corrections within the welfare framework should strive for decommodification (the second pillar) by reducing the presence and influence of commercial actors in the corrections sector, including moving beyond the problematic discourse of technological backwardness that currently dominates and justifies datafication projects in public–private partnerships. In order to uphold the third pillar (data diversity), the corrections sector should strive to acknowledge diversity and work against algorithmic standardisation that disregards individual needs in, for example, rehabilitation. Lastly, the corrections sector should strive for transparency (the fourth pillar) in terms of meaningful information for incarcerated individuals on how decisions impacting their everyday life – for example, about their placement, work assignments, and programme activities – are made, and in which ways algorithmic systems influence these decisions.

Concluding remarks: Towards a data welfare state

While the three cases of datafication examined here – automated decision-making within employment services, data-driven methods within public service media, and the digitalisation of the corrections sector – might seem very different, they all represent central domains of the welfare state in which citizens must place a certain amount of trust. The Nordic countries have a long tradition of trust in social and political institutions, including a strong public belief that these institutions support and underpin general social equality. What further unites the three cases is that the automated and data-driven methods employed in each build on the ideology of dataism (van Dijck, 2014). As stated by the Danish Agency for Digitalisation, there is a general belief that ADM can deliver “higher quality” welfare state services (The Danish Government, 2019: 11), and that “algorithms will secure equal treatment by being objective, unbiased and independent from personal conditions” (The Danish Government, 2019: 7). As illustrated in the analyses above, the three cases highlight various risks that come with the embrace of datafication.

Furthermore, it is common for all three cases – as it is for datafication in the Nordic welfare states in general – that they must be understood in the wider context of financial pressure. Indeed, the fantasy that datafication will lead to faster and more efficient handling of public services goes hand in hand with the desire to reduce costs.

We have pointed out that justice and non-bias in processes of datafication would be one of the pillars of the data welfare state. Our analysis has shown that automated decision-making comes with risks of injustice and bias. Another pillar discussed was decommodification. Our analysis of data-driven methods in the PSM domain shows that datafication processes often follow a commercial logic and hence, instead of contributing to decommodification, enhance commercialisation. Furthermore, datafication approaches within the welfare sector rarely explicitly enhance data diversity – meaning approaches that nuance difference – and instead reinforce the standardisation and flattening of identities. Lastly, the datafication of welfare provision is often connected with issues of black-boxing and opacity regarding how, for example, automated decisions were reached. Moreover, many automation and datafication projects do not explicitly take sustainability into consideration.

While the three examples reveal specificities of current datafication processes in the Nordic countries, they also illustrate how the digital imperative is intertwined with the ideology of dataism in the Nordic setting. While all the Nordic countries have explicit AI strategies and aim to be digital societies, the ideal of the digital and data-based welfare state has not yet been reached. We end this article by outlining principles, based on the four pillars, which must be met in order to create a data welfare state that corresponds with democratic welfare state ideals.

First, such principles must include the nondiscrimination of citizens affected by digital welfare technologies; this would imply the prevention of biases and discrimination encoded in digital infrastructures. Second, noncommercial forms of data capturing and the development of nonproprietary systems, guaranteeing fair use of citizen data, would be essential. Third, clear legal frameworks should be enacted to regulate datafication and data usage in emerging technologies, such as ADM and machine learning. Hitherto, Nordic governments have emphasised ethical guidelines and recommendations and have only just begun work on legal frameworks. Furthermore, when engaging with ethics, Nordic countries have focused on securing privacy, rather than engaging with broader issues of social justice and nondiscrimination. In addition, transparency is crucial for the ethical use and evaluation of data, and thus essential for any welfare state institution or operation. Fourth, policies to support and regulate datafication in the welfare state should be durable and consensual, as proposed for the media welfare state. As our case studies have demonstrated, there is a clear need for more sustainable, human-centred approaches.

The four pillars of non-bias, decommodification, diversity, and transparency should be seriously considered in any new digitalisation or automation project within the Nordic welfare states. Furthermore, any long-term datafication process for the public good should involve the active contribution of media and communication scholars, who are uniquely positioned to provide concrete suggestions based on critical, empirical research integrating the perspectives of citizens and vulnerable groups.
