Public AI imaginaries: How the debate on artificial intelligence was covered in Danish newspapers and magazines 1956–2021

Introduction

In recent years, the digital agenda in Denmark and elsewhere has been marked by the launch of a number of public strategies focusing on developing ethical guidelines regarding the implementation of artificial intelligence (AI) in society (Robinson, 2020). In these guidelines, AI – an umbrella term covering a variety of technological solutions and machine learning methods – is characterised by increasing investment and optimism, but at the same time is linked to a number of cases where its use in society has been regarded as problematic (Kerr et al., 2020). Such aspects play a central role in public meaning-making and people's encounters with AI. Public media venues, including so-called legacy media, have played a central role in covering the topic and have thereby fuelled considerable debate on AI in Denmark and globally.

From a historical perspective, printed newspapers and technology magazines have been particularly prominent in this regard. Active for a much longer period than digital media outlets, they have served to set a public agenda both before and after AI was coined as a term in 1956 (Markoff, 2015). Newspapers and magazines have thus played a central role in shaping collective perceptions and public imaginaries of AI. Combined with political strategies putting AI on the agenda, this has made AI a much-discussed topic in public. However, if we are to understand the importance of AI in society and use it in responsible, human-centric ways, we must understand the background against which the AI debate has taken place.

Being one of the world's most digitised countries (e.g., European Commission, 2020), Denmark presents an interesting case for studying public imaginaries of AI. As a small, innovative, and internationally oriented country, Denmark is known for prioritising new technology and quickly adapting to new technological opportunities (Digital Growth Panel, 2017; Ministry of Industry, Business and Financial Affairs, 2018). With official Danish political strategies putting the digital transformation of the welfare state high on the agenda, Denmark can be regarded as a visionary country that is perhaps ahead of others in relation to public debate in this area. The Danish context thus makes a particularly relevant case for studying how current public AI imaginaries about human–machine relationships are constructed, what they entail, and how they come to matter in society.

This article offers a historical analysis of the Danish print media's coverage of AI, which has been crucial in forming the background for a collective discussion of AI – reflected in what I refer to as the construction of public AI imaginaries about human–machine relationships. By way of a critical discourse analysis, supported by a quantitative content analysis of articles on the subject of AI, I investigate how such imaginaries have been constructed in Danish newspapers and magazines. I further identify how they have been significant in shaping the current strategic plans for AI in Denmark.

AI agendas, discourses, and framings – a global overview

Following the launch of several political AI strategies at national (Dutton, 2018) and transnational levels (European Commission, 2018), an emerging body of academic literature is focusing on the intersection of AI imaginaries, discourses, and framings. Some of this literature deals with AI strategies in national settings, such as Australia (James & Whelan, 2021), Germany (Köstler & Ossewaarde, 2022), the UK (Brennen et al., 2020), and China (Zeng, 2021), whereas other studies take a global or transnational approach (e.g., Bareis & Katzenbach, 2021; de Sousa et al., 2019; Jobin et al., 2019; Natale & Ballatore, 2020; Paltieli, 2021; Radu, 2021; Roberge et al., 2020; Sinanan & McNamara, 2021). For instance, Bareis and Katzenbach (2021) conducted an empirical study of how AI imaginaries are being presented in the national AI strategies of China, the US, France, and Germany. Their analysis reveals remarkable differences in the construction of AI imaginaries, although they identify a common rhetorical trait running through the strategies of framing the imagined AI future as an uncertain but inevitable path (Bareis & Katzenbach, 2021). These findings correspond with the ambiguity identified by Roberge and colleagues (2020) in their study on public AI discourse in three cases of public debates on AI imaginaries. As they argue, this ambiguity “comes with an uncertain and contingent future” and goes hand in hand with the “multifaceted hype” surrounding AI imaginaries (Roberge et al., 2020: 4).

However, previous findings also suggest that although AI imaginaries often operate on a global or transnational scale, they come to prominence when they are performed and articulated within national and local societal institutions and contribute to actual social practices (e.g., Schiølin, 2020). This includes national news media institutions, as shown in, for instance, Brennen and colleagues' (2020) study on AI imaginaries in British news media and Vicente and Dias-Trindade's (2021) study on how AI is framed in the Portuguese national press. Both studies point to a co-constructive link between national public news media institutions and national governmental strategies in the debate and digital agenda surrounding AI. Likewise, they identify a “persistent ‘hubris’” articulated through rhetorical and discursive patterns in news media that is based on the myth that machines will soon be able to outperform and replace human intelligence (Brennen et al., 2020: 33; see also Natale & Ballatore, 2020).

Notably, despite being a digital global frontrunner, Denmark is rarely studied as an individual case in the literature on the construction of AI imaginaries. Schiølin (2020), though, examines the discourses and imaginaries of the fourth industrial revolution (4IR) related to the development of AI in a specifically Danish context. He uses the concept of “future essentialism” – the “discourses, narratives, or visions that, through different means and practices […] produce and promote an imaginary of a fixed and scripted, indeed inevitable, future” – to analyse the performative role of these imaginaries, in which they ultimately define and govern specific future scenarios (Schiølin, 2020: 543). One example Schiølin cites is how Danish politicians in the Siri Commission (a Danish commission set up for developing recommendations regarding AI and digitisation) reproduce internationally held 4IR imaginaries by simply asking “how to harness and steer this predetermined future” for Denmark, instead of “questioning the premises of 4IR and its desirability, or envisioning other possible destinations for the future” (Schiølin, 2020: 555). Hockenhull and Cohn (2021) approach imaginaries related to Denmark's AI agenda in an ethnographic study of private and public Danish tech events. As found in some of the above-mentioned studies on national AI strategies, Hockenhull and Cohn (2021: 304, 317) emphasise the rhetorical nature of constructing AI imaginaries that are “taken up in practices” as they are socially performed by corporate agents towards a wider public at Danish local tech events and conferences, through “different articulations of hype”. This article complements and further expands such research by casting a wider net across AI as it has been articulated in public debate over time and exploring how this has influenced current AI strategies.

The general discussion of contemporary scholarship on the digital agenda and AI strategies indicates that the national setting serves as an important venue for studying how specific AI imaginaries are constructed, what they entail, and how they contribute to formulating current national AI strategies, as well as how national news media play a role in shaping the public construction of AI imaginaries. In this article, I investigate Danish news media, technology magazines, and official strategies as important places to look for ways in which the concepts of AI, human, and machine are collectively imagined.

AI imaginaries

Jasanoff (2015a: 11) argues that people's articulations of imaginaries about technological phenomena are sociotechnical by nature; they are constructed through social practices and “characterised by a number of specific collective visions of what is in the interest of society”, including individual and collective perceptions about technology's future impact on humans. More specifically, Jasanoff refers to them as “sociotechnical imaginaries”, defined as,

collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology.

(Jasanoff, 2015a: 4)

Central to this definition is that imaginaries can be articulated collectively within smaller groups in society. In addition, several different and conflicting imaginaries can coexist in a way where societal power institutions, such as the media, have the ability to prioritise and legitimise certain imaginaries above others (Jasanoff, 2015a, 2015b).

In line with Jasanoff's (2015a, 2015b) definition of sociotechnical imaginaries, scholars have studied how (big) data imaginaries (e.g., Beer, 2019; Lehtiniemi & Ruckenstein, 2019; Olbrich & Witjes, 2016; Ruppert, 2018; Williamson, 2017) or algorithmic imaginaries (e.g., Bucher, 2017; Lomborg & Kapsch, 2019; Mager, 2015; Seaver, 2017) feed into social change. For instance, Beer (2019) highlights the sociotechnical nature of imaginaries in an analysis of how visions of (big) data futures instigate social practice when they are deployed as “actual data solutions” within the data analytics industry. In a similar vein, Bucher (2017) examines the “productiveness” of public text-based reflections on algorithms and on the ways in which public spheres (i.e., spaces where people meet and discuss algorithms) shape not only the way people think and talk about algorithms, but also the practical and technological aspects of the algorithm itself. This correlates with Jasanoff's (2015b) description of how such articulations are part of an ongoing, dynamic, and socially constructed process, where specific perceptions of technology in future scenarios emerge and become embedded within society; over time, some of these perceptions will stabilise and become more prominent than others.

Furthermore, Bucher (2017) points out how people tend to construct their own imaginaries in situations where technology is considered non-transparent or too complex to comprehend. As such, the concept of imaginaries and their sociotechnical nature can be used as a framework for studying advanced technological phenomena and their social impact, even when they are difficult to gain access to (Bucher, 2017). This resonates with Lomborg and Kapsch's (2019) way of approaching the same phenomena. Rather than trying to “open the black box itself”, they use the concept of decoding to understand how Danish media users imagine and respond to algorithms in their daily lives (Lomborg & Kapsch, 2019: 2). Thus, literature on sociotechnical imaginaries invites us to study how they are publicly articulated, as this is essential for understanding “technology in the broad sense of shaping our society” (Lomborg & Kapsch, 2019: 4).

Because of the rapid and diversified nature of how advanced technology is currently being developed, we cannot study sociotechnical imaginaries only in isolated fragments of today's advanced technological systems, such as big data and algorithms. Instead, we must direct our attention towards the broader range of technological phenomena and solutions encompassed by “AI” in order to better understand how imaginaries about today's systems are constructed and how they might feed into actual social practice and, ultimately, lead to change. This includes taking into account the different technological approaches to AI that have emerged as different sets of collectively held visions and imaginaries of AI.

IA – a human-centred approach to intelligent systems

The notion of intelligence amplification (IA) plays an important role when studying how AI imaginaries come about, since IA represents an alternative approach to the implementation of intelligent systems. The notion of IA emerged in 1956 in parallel with AI (Markoff, 2015), yielding two distinct visions for developing intelligent systems. AI is based on a vision of automation and aims at developing machines able to achieve a level of intelligence in line with humans, so that they can take over human tasks (Jarrahi, 2018). In contrast, IA is based on the idea that by combining the strengths of machines and humans symbiotically, technology can amplify human intelligence (Guszcza et al., 2017; Jarrahi, 2018; Licklider, 1960). In this scenario, humans are considered the entity leading the collaboration with machines, whereas the ultimate goal of AI is to develop machines that are able to make decisions on behalf of humans. Thus, in contrast to AI, IA presents a human-centric approach, targeting some of the concerns that have been subject to discussion regarding AI and the human–machine relationship, such as the fear that machines will eventually supersede humanity (Guszcza et al., 2017).

Although IA as a term has not been used as extensively as other terms for similar technological solutions – such as decision support systems (DSS), recommender systems, and automated decision-making systems (ADM) – the fact that it developed in parallel with AI reflects much of the focus on ethics and human involvement that can be seen in today's AI debate, both in news media and AI strategies. In this way, IA represents an alternative vision of intelligent systems that, at its core, takes humans as the starting point by focusing on aiding and augmenting, rather than replicating, human intelligence. It is this aspect of IA that seems to have led to the current focus in AI strategies on developing ethical systems and solutions to benefit society.

Thus, the differentiation between AI and IA makes it possible to point out some of the fundamental differences in the construction of visions for developing AI that are seen to this day. These differences are central to the following analysis of how AI imaginaries are constructed over the entire history of AI being featured in media and strategies. I therefore take AI and IA to represent different sociotechnical imaginaries that are reflected in a set of fundamentally conflicting discursive positions in the current AI debate. Furthermore, I posit that understanding the construction of AI imaginaries requires a discursive study of how these specific imaginaries are framed and communicated in linguistic, discursive, and social practices.

Linguistic, discursive, and social practices

Fairclough (2013: 418) defines discursive practices as “ways of representing the issues, the potential benefits, the risks and dangers” of a given subject. He argues that imaginaries can be seen as “representations of how things might or could or should be [...] – projections of states of affairs, ‘possible worlds’” (Fairclough, 2013: 266). Such instances of discourse are thus “positioned representations […] – positioned in the sense that different positions in the social relations of social practices tend to give rise to different representations” (Fairclough, 2013: 443). This indicates that as different types of discourse represent reality in different ways, they can be regarded as representing different “positionings” of the same dynamic context that both shape and are shaped by such discursive positionings (Fairclough, 2013; Halkier, 2011). Specific elements of such dynamics might be, for example, media texts (discursive events), including instances of language and discourse expressed through texts (discursive practices) in which choices are made which emphasise some perspectives rather than others. This is closely related to Fairclough's (2013) notion of hegemony, which refers to the dominance and valorisation of a specific set of beliefs and values over others, articulated and performed by an influential group or power institution. At the same time, it is part of a bigger social context which can be seen as constituting “ways of acting and interacting with other people in speech or writing [emphasis original]” whereby social relations are being produced and reproduced (Fairclough, 2013: 418). I operationalise the discursive analysis of sociotechnical imaginaries of AI (and IA) by addressing linguistic, discursive, and social practices in turn.

Methods

As part of the definition of sociotechnical imaginaries, Jasanoff (2015a) describes how societal power institutions, such as media institutions, influence the positioning of particular imaginaries over others and thereby influence which future visions of technology come to the public's attention. As I regard media as a central empirical source for studying imaginaries, for this study I collected a range of articles from Danish newspapers and technology magazines as well as official Danish strategies on developing and implementing AI in society. The data corpus covers the persistence and temporal development of media coverage about AI and the human–machine relationship, which are crucial to establishing sociotechnical imaginaries (Jasanoff, 2015b). I take Danish newspaper and magazine articles to represent general societal communication, influencing the way in which humans perceive and understand themselves as part of the world they live in and the way in which they perceive technology (Jensen, 2013). Furthermore, official reports published by the Danish government give insight into the particular AI imaginaries that are being stabilised and legitimised in Denmark and elsewhere (Jobin et al., 2019).

To build the corpus, I carried out a systematic selection of all newspaper and magazine articles on humans, machines, and AI from January 1956 to January 2021. I searched for and extracted these articles from the Mediestream and Infomedia archives (Mediestream, n.d.; Infomedia, 2021), which offer digital access to news published in Denmark. Mediestream was used to search for articles from 1956 to 1990, and Infomedia for articles from 1990 to 2021. While it is possible that the sampling was affected by Mediestream's and Infomedia's different archiving structures, I cross-checked an overlap of ten articles to verify that both archives returned the same results from 1990 onwards.

To consider historical changes in the discourse associated with the debate on AI and the human–machine relationship, the archives were searched for longer articles (700+ words) written in a Danish context. More specifically, the archives were searched for articles containing the Danish words for the following: “human”, AND “electro brain”, “robot”, “machine”, “data machine”, or “electric calculator”, AND “artificial intelligence” (including Danish variants “kunstig intelligens” and “artificiel intelligens”), “simulated intelligence”, “machine learning” (including Danish variants “maskinlæring” and “maskinel læring”), “expert system”, or “expert support system”. The first group of keywords in the search string included several words representing the historical changes in how a computer has been referred to in Danish. In a similar vein, the second group of keywords represented the historical changes in the Danish language for what is today typically known and referred to as artificial intelligence. The keywords were identified via an initial chain search in literature on the field, including that written by the pioneer in Danish computer science research, Peter Naur (1954).
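
To make the retrieval logic concrete, the following is a minimal sketch of how such a Boolean query could be assembled programmatically. It is illustrative only: the Danish keyword spellings (including “menneske” for “human”) are my back-translations of the terms listed above, and the AND/OR syntax is hypothetical, as Mediestream and Infomedia each use their own query interfaces.

```python
# Illustrative sketch of the Boolean search logic described above.
# Keyword spellings are back-translations; the query syntax is hypothetical.

COMPUTER_TERMS = [  # historical Danish terms for "computer"
    "elektronhjerne",        # "electro brain"
    "robot",
    "maskine",               # "machine"
    "datamaskine",           # "data machine"
    "elektronregnemaskine",  # "electric calculator"
]

AI_TERMS = [  # historical Danish terms for "artificial intelligence"
    "kunstig intelligens",
    "artificiel intelligens",
    "simuleret intelligens",  # "simulated intelligence"
    "maskinlæring",
    "maskinel læring",
    "ekspertsystem",          # "expert system"
    "ekspertstøttesystem",    # "expert support system"
]

def build_query(human_term: str = "menneske") -> str:
    """Combine the three keyword groups into a single Boolean query string."""
    computers = " OR ".join(f'"{t}"' for t in COMPUTER_TERMS)
    ai = " OR ".join(f'"{t}"' for t in AI_TERMS)
    return f'"{human_term}" AND ({computers}) AND ({ai})'

print(build_query())
```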

The sampling of official strategies was carried out by searching for publications on the website of each Danish ministry. Due to the narrow search options on these websites compared with the two media archives, the websites were searched for strategies containing one of the Danish words for AI, as listed above, within the same time period as the gathered articles. The sampled strategies from public agencies address current societal matters directly, as they concern the contemporary implementation of AI in Danish society.

The search resulted in a total of 253 articles (217 newspaper articles and 36 magazine articles) and four public strategies. General patterns in the articles were identified using quantitative content analysis, which then provided the background for a deeper investigation into the qualitative nuances of public discussions about AI, humans, and machines in the articles and strategies. An initial pilot test (coding 4% of the articles found) was conducted in order to create a coding manual for the categorisation of the patterns in the remaining articles. This resulted in eight consistent categories: media outlet, date, sentiment, perspective, context, actor, human, and technology (since these categories form the background for the qualitative analysis, the overall findings from the quantitative content analysis are not reported further). I then coded the full corpus of newspaper and magazine articles according to these categories. The results showed that the coverage of AI and the human–machine relationship increased significantly over the period analysed across several Danish media outlets, although this is far more pronounced in the daily and weekly newspapers than in the technology magazines included in the search (see Table 1).

Number of AI articles in sampled media, by outlet

Medium       Outlet                 Number of articles     %
Newspapers   Politiken                            46      21
             Berlingske Tidende                   41      19
             Jyllands-Posten                      40      18
             Information                          36      17
             Weekendavisen                        23      11
             Kristeligt Dagblad                   11       5
             Børsen                                8       4
             Ekstrabladet                          4       2
             Avisen Danmark                        3       1
             B.T.                                  3       1
             Arbejderen                            2       1
             Total                               217     100

Magazines    Ingeniøren                           12      33
             Alt om Data                           7      19
             Prosabladet                           6      17
             Computerworld                         5      14
             Medicoteknik                          3       8
             Automatik & Proces                    1       3
             Teknisk Nyt                           1       3
             Teknovation                           1       3
             Total                                36     100
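
As a rough illustration of the coding step described above, the sketch below models the eight categories from the coding manual as a simple record type and tallies coded articles per outlet, which is the kind of count reported in Table 1. The field names and example values are hypothetical placeholders; the study's actual coding manual is not reproduced here.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CodedArticle:
    # The eight categories from the coding manual; the value sets shown in
    # comments are hypothetical placeholders, not the study's actual codebook.
    media_outlet: str  # e.g., "Politiken"
    date: str          # publication date, e.g., "1993-05-02"
    sentiment: str     # e.g., "positive" | "negative" | "neutral"
    perspective: str
    context: str
    actor: str         # e.g., "robot", "scientist"
    human: str         # how humans are portrayed
    technology: str    # how the technology is portrayed

def tally_by_outlet(articles: list[CodedArticle]) -> Counter:
    """Count coded articles per outlet (the kind of tally behind Table 1)."""
    return Counter(a.media_outlet for a in articles)

# Toy usage with a single hypothetical record:
articles = [CodedArticle("Politiken", "1993-05-02", "neutral",
                         "societal", "work", "robot", "worker", "robot")]
print(tally_by_outlet(articles))
```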

The articles spanned from the 1980s to the 2020s. This reveals a time lag between the emergence of AI and IA, with their focus on the human–machine relationship, in the academic field of the 1950s and the arrival of the topic in the public arena of Danish print media in the 1980s (see Table 2).

Number of AI articles in sampled media per decade

Years        Number of articles     %
1956–1959                     0    n.a.
1960–1969                     0    n.a.
1970–1979                     0    n.a.
1980–1989                     9       4
1990–1999                    39      15
2000–2009                    46      18
2010–2019                   147      58
2020–2021                    12       5
Total                       253     100

Comments: As the data collection spanned 1956–2021, the first and last time ranges are not full decades.
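
The per-decade distribution in Table 2 amounts to bucketing each article's publication year into a decade and converting the counts into shares of the 253-article corpus. The following is a minimal sketch of that tally, run on toy data rather than the actual corpus.

```python
from collections import Counter

def decade_label(year: int) -> str:
    """Bucket a publication year into its decade, e.g., 1984 -> '1980–1989'."""
    start = (year // 10) * 10
    return f"{start}–{start + 9}"

def decade_shares(years: list[int]) -> dict[str, tuple[int, int]]:
    """Return (count, rounded percentage) per decade, as in Table 2."""
    counts = Counter(decade_label(y) for y in years)
    total = len(years)
    return {d: (n, round(100 * n / total)) for d, n in sorted(counts.items())}

# Toy data only: nine articles from the 1980s and 39 from the 1990s.
print(decade_shares([1984] * 9 + [1995] * 39))
```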

From the 1980s onwards, the overall number of articles increases each decade (except for the 2020s, where only two years were included, resulting in 12 articles), indicating a growing interest in AI that follows the general increase in research into and public attention to AI, as seen, for example, in recent publications of official strategies for developing AI in Denmark and elsewhere (Jobin et al., 2019; Robinson, 2020). Based on the general patterns across the articles sampled for the qualitative analysis, Fairclough's (2013, 2015) critical discourse analysis approach was used to address the discursive aspects of the AI imaginaries articulated in the articles. Therefore, using Fairclough's (2013) three-dimensional framework, the critical discourse analysis addressed 1) the text dimension, 2) the discursive practice dimension of the articles, and 3) the social practice dimension of the strategies. This made it possible to examine the chains of equivalence connected to AI as a nodal point for analysing linguistic and discursive patterns, and later to gain an understanding of the significance of such AI imaginaries for larger societal structures. Any headlines and quotations from newspaper and magazine articles used in the analysis and in the following discussion are my own English translations of the original Danish versions.

Using the text dimension as the first lens of analysis made it possible to address the articles at word level and identify how particular ways of communicating about AI and the human–machine relationship are initiated. For instance, in the headline “Increasingly clever computers on the way to conquering the world” (Alt om Data, 2017a), the chain of equivalence “increasingly clever” that is connected to the computer as a nodal point was interpreted as a personification in the way the computer is being associated with a certain type of knowledge and reason usually used to describe human qualities.

The second analytical lens, the discursive practice dimension, allowed the examination of the composition of words, their conveyance of attitudes, and the use of intertextuality. Using this lens, the sentence above was interpreted as an expression of how the computer is being imagined as a powerful and particularly independent entity that will be able to supersede humans. The description of its path towards “conquering the world” was additionally interpreted as a way of painting a dystopian and uncertain picture of how technology might gain power over humans, which was supported by other statements from the article on how robots “are ready to make our decisions, take our jobs, and maybe soon our entire livelihood” (Alt om Data, 2017a).

Lastly, the social practice dimension made it possible to unmask the ways in which the use of language becomes a social practice as it is linked to a specific societal and historical context and used by the Danish authorities to prioritise certain AI imaginaries above others. For instance, the same type of uncertainty and futuristic image painted of AI and its ability to overrule humans, identified in the above example, was likewise identified in the Danish AI strategies: “It is impossible to say for certain when – or whether at all – we will reach a point where computer technology can reason at a human level” (Danish Government, 2019: 6). However, it was followed by the idea that AI should not replace, but assist humans: “Artificial intelligence should help us analyse, understand and make better decisions. […] The technology cannot, and should not, replace people or make decisions for us” (Danish Government, 2019: 7). Using this third analytical lens, it was thereby possible to examine how the latter idea, which resonates with IA, was the one being legitimised by the Danish authorities. Thus, the following analysis focuses not only on the relatively vague notions of what AI is, but also on how the human–machine relationship itself is framed from the two parallel overarching sociotechnical visions for the practical development of advanced technology: AI and IA.

Constructions of AI imaginaries about the human–machine relationship
Text dimension: Personifications and use of active language

Based on the above contextual insights, the starting point of the critical discourse analysis was examining word choices at the text level. Here, one recurring feature was the personification of AI, the nodal point, mainly expressed through chains of equivalence in the headlines ascribing human qualities to the “machine”, “computer”, or “robot”: “The wars of the future will be fought by autonomous robots” (Alt om Data, 2017b); “Employed by a machine” (Avisen Danmark, 2018); and “This is Shimon. He is a robot and plays the marimba” (Politiken, 2019). The common thread is not only ascribing human attributes, but also describing the machine, computer, or robot as an autonomous entity, able to perform tasks like – or in some cases, even better than – human beings. Clearly, these are descriptions that correlate with the traditional AI vision of developing machines able to take over human tasks (Jarrahi, 2018). Furthermore, this pattern of AI personification appears to be consistent irrespective of the article's angle. For instance, the three headlines above were extracted from articles from the 2010s that were negative, neutral, and positive, respectively, about AI (i.e., pessimistic, neutral, and optimistic towards the social ramifications of AI). In addition, they represent a general pattern revealing the coexistence of several competing AI imaginaries. This seems to be the case throughout the period covered, as demonstrated in the following examples of headlines extracted from positively, neutrally, and negatively angled articles from the 1980s, 1990s, and 2000s, respectively: “Commodore versus Sinclair in chess” (Jyllands-Posten, 1984); “Cold-hearted robots” (Information, 1999); and “Robots are stupid” (Information, 2000).

Closely linked with personifying and rendering AI as autonomous entities is the use of verbs underlining the machine's ability to act: for example, “artificial intelligence can be governed, but we must not be naïve” (Kristeligt Dagblad, 2019); “robots are threatening the middle classes” (Politiken, 2016); and “the robots are coming” (Politiken, 1993; Berlingske Tidende, 2000). This is seen especially in articles coded as negative or neutral towards AI, but can also be found in some positively angled articles, where the robot appears as the central actor. In these cases, the use of language has the function of linking the presence of AI, in the form of, for example, robots, with a situation that is described as posing a threat to humans. This would seem to point to an imagined world which is changed as a result of developments in AI, where the choice of words, together with the accompanying photos and illustrations (see Figures 1–3), expresses some kind of action and initiates a malleable narrative that seeks to appeal to the reader's imagination.

Figure 1

Facsimiles of two newspaper articles, 2000s

Source: Weekendavisen, 2000 (left); Berlingske Tidende, 2005 (right)

Figure 2

Facsimiles of two newspaper articles, 2010s

Source: Kristeligt Dagblad, 2017a (left), illustration by Rasmus Juul; Kristeligt Dagblad, 2019 (right), illustration by Rasmus Juul

Figure 3

Facsimile of a newspaper article, 2010s

Source: Weekendavisen, 2019

Discursive practice dimension: Intertextuality, positionings, and linguistic conflicts

The above textual features illustrate how the articles appear to draw on intertextuality to a large extent. Notably, references both to science fiction as a genre and to the computer's chess victory over a human opponent in 1997 appeared frequently in the articles. In one example, intertextuality is used as a linguistic tool in the construction of a malleable AI imaginary involving particular perceptions of the human–machine relationship as part of the future development of AI technologies in general: “Two years later, Garry Kasparov announced that he would withdraw from international chess with immediate effect. This also put a tentative end to the battle between the creative, intuitive human being and the raw, calculating computer” (Jyllands-Posten, 2005). The construction of the AI imaginary emerging from the article is based on prior discourses used to shape what is said about the future, thereby conveying a preferred reading of AI and humans. More specifically, the example underlines the use of intertextuality as a discursive resource that is present throughout the articles, emphasising two parallel perspectives on the human–machine relationship.

The first AI perspective adheres to the notion that eventually machines will supersede humans (see Guszcza et al., 2017): The chess game against the machine underlines the reality of this scenario, while the science fiction references fuel the construction of an imaginary where AI is associated with advanced technology and futuristic elements such as robots. Additionally, the science fiction genre in itself, characterised by the attempt to imagine and comprehend an uncertain future in terms of technological development, is used in an attempt to grasp and explain the future, real-world development of AI.

In contrast, the IA perspective regards the different abilities of humans and machines as complementary (see Licklider, 1960), as seen in the second half of the example: “the creative, intuitive human being and the raw, calculating computer” (Jyllands-Posten, 2005).

In addition, the idea of uncertainty – circling around the unknown and imagined future development of AI in society – was clearly evident in most of the articles, but especially in those that discursively expressed a neutral sentiment towards AI. In taking a neutral stance, these articles were not only characterised by expressing uncertainty about the future of AI and the human–machine relationship, but also by the underlying notion of passivity. I suggest that the neutral language used indicates an underlying acceptance that the development of AI is taking place without any human involvement. Similarly, the technological aspect – including the advanced forms of AI that do not yet exist – was partly reinforced by the use of intertextuality and references to the science fiction genre, and partly described as something simply “coming”, thus indirectly writing any human influence out of the equation. Together, the neutral articles constituted a discursive position where the development of AI was portrayed not as positive or negative, but as uncertain.

At the same time, the negatively angled articles clearly painted a gloomy picture of AI in the future: “This past week, killer robots have been on the agenda at a UN meeting […]. The intention is to establish a set of principles that will regulate the development of killer robots on the battlefields of the future” (Jyllands-Posten, 2018). The negative articles thus constitute a different discursive position, characterised by the construction of a dystopian AI imaginary.

This dystopian notion differed markedly from the positively angled articles. For instance, one article stated that “the film A.I. […] is not pure science fiction. Robots can both learn to feel and think, says British scientist Professor Kevin Warwick. Humans are too primitive. The whole species needs to be upgraded” (Jyllands-Posten, 2001). Here, AI is intertextually connected with notions of how technology makes a positive contribution to humans, and how robots can be used in various contexts for the benefit of human development, in line with the vision of IA.

These three patterns of sentiment reflect a deeper complexity across the sampled articles, consisting of different, competing discursive positions in the construction of AI imaginaries over time. Rather than a collective consensus constructing just one AI imaginary, the ways in which the articles position themselves discursively imply the coexistence of several AI imaginaries – including positive, negative, and neutral perceptions of AI and its long-term consequences for humans.

Despite the existence of differently positioned AI imaginaries, the articles’ use of intertextuality shows how they all draw on a common AI discourse, characterised by a traditional, futuristic, and science fiction-inspired conception of what AI is and will become. This type of discourse recurs in articles from each decade and increases from the 1980s to the 2010s, particularly in articles where robots or science fiction characters constitute the most prominent actor. This indicates that it is the same futuristic discourse that is the driving force in influencing public perceptions of AI and the human–machine relationship today as in past decades.

Although the articles all seemed to refer to futuristic conceptions of AI, irrespective of their differences in discursive positions and competing AI imaginaries, more than half of them referred to cooperation between humans and machines, rather than machines simply replacing humans. This points to a linguistic conflict in the articles analysed: in practice, they referred to a relationship based on typical IA principles rather than typical AI principles. In contrast to the linguistic insights above, a common characteristic of the discursive dimension identified in the majority of the articles was that they tended not to distinguish between the futuristic notion of machines replacing humans (AI) and the more human-centred idea of humans and machines collaborating (IA). Of the articles referring to IA in practice – albeit still adopting the futuristic notion of AI – most can be regarded as taking either a positive or neutral angle, while most of the articles referring to typical AI principles – both linguistically and in their description of the relationship in practice – take a negative or neutral angle. This seems to show that the image of technology as benefitting humans typically appears in positively angled articles, while images of technology as a replacement for humans typically appear in negatively angled articles.

Two main scenarios emerged from the articles, referring either to IA or to AI principles. The first scenario refers to IA principles. In one example, the machine is portrayed as an amplification of human ability: “The robot is able to review thousands of journals and documents in a split second and, on that basis, propose the best form of treatment” (Kristeligt Dagblad, 2018). Here, the article refers to the differing abilities of humans and machines, which enable them to complement and collaborate with each other. The description of the relationship in practice is thereby similar to the vision of IA, although the article refers to the machine as a robot and therefore does not distinguish between what comes across as typical IA principles in practice and the traditional science fiction-inspired perception of such technology (such as robots).

The second scenario refers to principles that could be regarded as typically AI. One example refers to the use of technology as assisting humans, which resonates with IA, but at the same time, the fear of what technology might come to mean in the future is then linked to the perception that technology can replace humans (resonating with AI):

Robots are being developed with the precise aim of assisting us in our most intimate activities. Firstly, there will be toy robots for the country's children under the Christmas tree this year. Secondly, sex robots are growing in popularity. Soon we will not have to worry at all about finding suitable sex partners […]. A third area where robot research is in rapid development is the care sector. The competitive state has a hard time taking care of all the people who may need looking after. Don’t worry: the robots are on their way.

(Kristeligt Dagblad, 2017b)

Although this example primarily refers to an IA principle, the negative angle taken by the article is linked to the AI goal of developing technology that can take over tasks which typically require humans to perform them. Again, this is indicative of the common trait running through the articles that a futuristic image of advanced technology is still shaping public perceptions and constructions of AI imaginaries.

Despite AI in practice being embedded into various everyday aspects of public life, where it is conceived of as supporting rather than replacing humans, as seen in the newspaper articles referring to IA principles, the public perception of the topic still draws on futuristic images and chains of equivalence related to the nodal point of AI and the human–machine relationship. As shown above, this pattern seems to be consistent across all the decades covered, dating back to the first stirrings of public debate on AI in Danish print media in the 1980s. This has resulted not only in the multivocal public AI imaginaries that compete in terms of having different sentiments towards AI in society, but also the construction of ambiguous AI imaginaries across different kinds of sentiments, where no distinction is made between the futuristic descriptions of the human–machine relationship of AI and the human-centred principles of IA.

Social practice dimension: Language use and practical visions for AI in Denmark

As in most of the articles, the official Danish AI strategies express underlying principles related to a vision of IA rather than AI. One strategy described how technology should “contribute to”, “support”, and “help the individual citizen” (Danish Government, 2019: 6). Other strategies imagined collaboration between human and machine rooted in descriptions of the unique capabilities of the machine: “The potential lies in the combination between structured data, competencies in the companies and availability of relevant technologies […] such as machine learning, artificial intelligence and computers with massive processing power for the enormous amounts of data” (Danish Government, 2018b: 43). AI technologies “are able to make decisions by identifying patterns in vast amounts of data [and] create prediction models for risk assessment or early detection of diseases” (Danish Government, 2018c: 43). In addition, however, the strategies also mentioned the human side of the equation, as in, “the human ability to learn new things and reflect” on them (Danish Government, 2019: 6). The ability to relate to issues in a specific context and situation were also mentioned (Danish Government, 2018a).

Also central to the strategies was their human-centred approach to technology, both as illustrated on the front page of each of the strategy documents investigated (see Figure 4) and as evidenced in the Danish authorities' communication to the public and public services, as in the following example: “Denmark should have a common ethical and human-centred basis for artificial intelligence” (Danish Government, 2019: 8). Such notions fit well with Griffith and Greitzer's (2007: 49) description of the vision of IA as “a new vision of symbiosis […] with the human in a leadership position”. As in the following quote, the human–machine relationship is thus described in terms of collaboration rather than substitution: “artificial intelligence should not replace people, but instead be used by people to improve conditions for individuals and for society” (Danish Government, 2019: 43). More specifically, “artificial intelligence supports physicians in making diagnoses, thus making it possible to trace the development of diseases early on and provide preventive intervention” (Danish Government, 2018c: 48). In the same way, technology can help citizens “write, read and hear (intelligent hearing aids, reading aloud, etc.)” (Danish Government, 2019: 11). Hence, the vision of social change in Denmark brought about by AI is described in ways similar to the vision of IA. The strategies thereby establish an AI imaginary and position it so as to have an impact on the actual development of AI.

Figure 4

Cover pages of four official Danish AI strategies

Sources: Danish Government, 2018a (top left); Danish Government, 2018c (top right); Danish Government, 2018b (bottom left); Danish Government, 2019 (bottom right)

What is striking, however, is that the official strategies all continue to use the term AI to describe the vision of applying advanced technology in a responsible, human-centred way, even though this is in fact the vision of IA. For instance, one strategy described how “in the long term, artificial intelligence that resembles human intelligence could be developed” (Danish Government, 2019: 6) – here drawing on the same neutrally positioned discourse on uncertainty related to the development of AI seen in most of the articles, with their notion of how replacing humans with machines may become a reality. Despite this linguistic conflict, which can be found in both the articles and the official strategies, the actual vision that is stabilised and legitimised through the official strategies focuses on human-centred solutions and principles similar to those of IA. Thus, the Danish strategies seem to be moving away from the traditional, futuristic notions seen in the competing AI imaginaries of the articles, and towards a situation where, in contrast, the dominant imaginary is an IA imaginary.

Discussion

The analysis indicates how language use over time establishes competing notions of the human–machine relationship in the context of AI development. These notions can be seen as sociotechnical imaginaries – as AI imaginaries about the human–machine relationship. At the same time, these imaginaries have left the term AI itself relatively vague even after decades of public debate. Thus, the analysis paints a picture of a public debate on AI in Denmark where it is still relatively unclear what is meant by AI. There seem to be two aspects to this. Firstly, it seems that the fictional universe, drawing on traditional notions of AI, can help us understand advanced and abstract technological developments and applications. At the same time, it has helped to generate urgent public debate about the human component in AI development and use. Secondly, however, the vagueness of what is meant by AI and IA in the debate could almost be seen as an unconstructive attempt to cover up a loaded term like “AI” by connecting it through language use with human-centred chains of equivalence that popularly go under the name “ethical AI”.

Findings from previous literature on the “persistent ‘hubris’” of AI (Brennen et al., 2020: 33) are not uniformly confirmed by the analysis, which, in contrast, reveals a more multivocal public debate on AI. The ambiguity of AI, however, is confirmed in the analysis, in line with the previous study by Roberge and colleagues (2020) on public AI discourse, which looked at three cases of how the public perceived AI imaginaries. The different positionings expressed indicate a malleable AI imaginary, reflecting the nuances brought out by collective perceptions of AI, which in turn feed into the public strategies formulated as a result of the AI debate. In addition, they show how a particular imaginary, rooted in the public debate and based on IA, is officially legitimised in a Danish context, where it impacts the future public planning projects and social practices presented in the strategy documents. Although the analysis cannot anticipate how the discursive frameworks in the articles themselves will lead to change, it does reveal underlying discourses in the public debate on AI that compete to set the direction for the further development of AI in Denmark. While the strategy documents tend not to refer to IA explicitly, the original ideas associated with the IA vision seem to feature consistently in the strategies. As shown through the specific language used, these ideas have contributed to setting a “human-centred direction” for AI focusing on combining the strengths of humans with those of machines in a “symbiotic partnership” (Licklider, 1960).

Although newspapers and magazines are not the only co-creators of future systems based on AI, they do fuel public debate on the issue, thereby informing the process by which people can form opinions about AI in society. In addition, as in this case, such articles can highlight the human element in AI's development, including ethical aspects of responsibility and fairness. The analysis of linguistic and discursive patterns in the articles and strategies thus illustrates how language can be a powerful tool, particularly in the context of societal institutions, such as the media or political institutions. It can thereby become embedded within social practices, ultimately resulting in social change.

The nuances brought out in the construction of AI imaginaries illustrate how the ways in which we talk about technology – and the ways in which AI is portrayed through specific institutional use of language – shape how future strategic visions of AI are created. However, this is not only the case for the construction of AI imaginaries; in the construction of, for example, algorithmic imaginaries, Lomborg and Kapsch (2019) note how opinions are formed, both individually and collectively, and how algorithmic systems are decoded to shape future visions of such systems. This would seem to indicate that the way in which we communicate about AI, humans, and machines influences how we envision ourselves as humans in relation to AI, and how we use language in planning our future societal practices in relation to AI. The discursive ambiguity identified in the analysis, which is in line with previous literature (e.g., Roberge et al., 2020), suggests that different AI imaginaries are thriving and stimulating public debate on the topic. Yet, the continuous reference to fictive examples might arguably also prove unconstructive in furthering public understanding and debate, in that it risks causing uncertainty rather than describing accurately how human-centred AI might develop and operate in empirical reality.

From the analysis, we can conclude that as the strategies have so far stabilised a human-centred imaginary around developing and using AI in practice in a Danish context, they appear to reflect typical principles of IA rather than those of AI. Thus, with its emphasis on collaboration and optimisation, the notion of IA would seem to both reflect more accurately the visions legitimised in the strategies and highlight discussions about ethical principles more than the mechanically oriented AI perspective does, even though AI is the term used in the articles and strategies. The notion of AI is, therefore, often misleading; indeed, as Guszcza and colleagues (2017) note, the systems we have today – and want to develop in the future – are a long way from any system based on the original vision of AI. This is reflected in the analysis in the present study through the linguistic conflict arising from the use of language, where “AI” is used in references to the human-centric principles of IA while simultaneously being associated with futuristic perceptions of traditional AI principles. The actual imaginary resulting from this can be rendered as an IA imaginary – an ethical direction for AI.

As two different sociotechnical imaginaries, the notions of AI and IA can in themselves inform public debate. Understanding the link between language use and social change related to AI is essential if we are to understand “technology in the broad sense of shaping our society” (Lomborg & Kapsch, 2019: 4). Here, IA might offer a specific term that enables what Kerr and colleagues (2020: 10) refer to as an “ethics discourse”, that is, another way of imagining technology. In employing this kind of discourse, the strategies analysed in this study not only provide “assurance” about the development of ethical AI in society (Kerr et al., 2020: 10), but also function as an effective tool in imagining and planning the actual development of future human-centred solutions. As Jarrahi (2018: 579) claims, “what is lacking in this old discourse, as well as with the recent attention paid to AI, is a discussion of how the unique strengths of humans and AI can act synergistically”. Instead of the preoccupation with machines replacing all facets of human intelligence, imaginaries of the human–machine relationship based on IA can serve to set a more ethical direction for the future (Jarrahi, 2018).

Conclusion

The overall quantitative finding that AI has become an increasingly discussed topic in Danish printed news media served as a point of departure for the qualitative, critical discourse analysis in this article. The coverage of AI has been crucial in forming the background for a collective discussion of AI, which is reflected in public AI imaginaries about the human–machine relationship. The analysis has provided insights into how several coexisting and conflicting discursive positionings are being constructed through a use of language that expresses positive, negative, and neutral AI imaginaries about the human–machine relationship. Whereas the positive imaginary focused on how AI can be used in various contexts for the benefit of human development, the negative imaginary was characterised by a dystopian view of the social ramifications of AI. The neutral imaginary portrayed AI development as uncertain and marked by an underlying passivity towards the lack of human involvement.

Additionally, the analysis demonstrated that a futuristic discourse is still shaping public perceptions and constructions of AI imaginaries, although the vision for social change brought about by AI is mostly described in ways similar to the vision of IA. Although the strategies continue to use the term AI, the analysis further demonstrates that it is the principles of IA that are being reflected and stabilised in the Danish strategies for AI in practice. Yet even though these IA principles have provisionally been legitimised in the official strategies building on the public debate in the articles, the analysis identified a lack of clarity about what is meant by AI, in both the articles and the official strategies.

Ultimately, although AI in the public debate in Denmark is not characterised by persistent hubris, it is rather ambiguous in terms of what we mean when we talk about AI, how it may be used in the future, and its possible effect on society. This provides insight into how language can be a powerful tool in the construction of imaginaries that, in the long run, have the ability to shape practical solutions. It also illustrates the unconstructive nature of linguistic ambiguity in generating persistent uncertainty around AI, rather than committing to the construction of an AI imaginary that, at its core, serves human interests. Here, it seems that the notion of IA might be a more appropriate term for the human-centric visions expressed and legitimised in the strategies, serving as a linguistic tool to implement an ethics discourse to ensure fair and responsible human–machine configurations in society.

However, applying an alternative notion to AI carries a limitation: we risk nudging future AI strategies and related social practices to focus only on what is considered to be IA, instead of focusing on making the advanced AI applications that have already been developed more human-centric and true to the original visions of IA. Thus, to pave the way for new and more constructive and realistic ways of imagining human–machine relationships, we need more concrete examples of how AI is used in everyday contexts in partnership with humans. Future research looking at empirical cases of human–machine collaboration can help demonstrate and describe how this partnership works in practice and how it is applied to serve human interests.
