This panel is based on two observations: (i) that digital media are designed to track and record people's interactions, behaviour and preferences – they are by design surveillance machines – and (ii) that different perspectives can be taken when observing people's interactions with digital media: from above (Big Brother) or from the side, by peers (Little Sisters).
In this commentary, I will advance the idea that as we think about and research the ethical implications of digital media by focusing on their inherent surveillance capacities, we need to be conscious of the conceptual perspective we take. The title of this panel suggests particular perspectives – Big Brother watching from above, Little Sisters watching from the side – and each of these carries its own presumptions about what surveillance is and how it affects privacy.
Privacy has historically been conceptualised as freedom from intrusion, protection of the private sphere, the right to be left alone, and other similar notions (Solove 2008). Agre (1994) argued that the typical model for privacy has been the “surveillance model” (Agre 1994: 105), which focuses on the collection of data and the use of that data. In this commentary I propose that a different model of privacy is needed when it comes to big data and digital media: a datafication model of informational privacy.
There are currently two major conceptualisations of the privacy of (personal) information: one that regards privacy as the ability to “limit or restrict others from information about” oneself (Tavani 2008: 141) and another that views privacy as the “control of personal information” (Solove 2008: 24). Both concepts operate on the assumption that information is something that can be controlled or to which access can be restricted. Data and information are typically regarded as objective entities, and it is assumed, though often left unarticulated, that there is a direct and true correspondence between the data or information and the actual state of affairs in the world – hence the notion of information as a faithful representation of reality.
In the age of big data, personal information has become a commodity that is traded on the market of information empires and between data brokers. Personal information holds monetary value: “personal information can be viewed as a kind of property that a person can own and negotiate within the economic or commercial sphere” (Tavani 2008: 134). The digital information society has brought about an information utopia in which computers and network technologies track all the activities humans engage in, though Winner (1986) observed as early as the mid-1980s that “as a badge of civic pride a citizen may announce, ‘I’m not involved in anything a computer would find the least bit interesting’” (Winner 1986: 115).
Personal information, however, is not only valuable as singular pieces of data. Information about my age, marital status, profession, income, mortgage, address, credit score, health record, hobbies, employer and so on has some value in particular situations, but such data become really valuable only when they are assembled into a big dataset on which predictive analytics can be run. In other words, when I control or restrict access to information about my recent purchases at the local petrol station, I may enjoy privacy at that moment. I may pay in cash, decline the offer of a discount card, shield my face and licence plate from the CCTV cameras, and so on, to protect my privacy and personal information. However, it may seem a relatively small thing to give the petrol station information about my purchases there, in return for a decent discount on already very expensive fuel. I may therefore decide to give away that small and seemingly insignificant piece of personal information: I get a discount, and the petrol station gets to know my fuel purchase pattern. Who cares how many litres of petrol I purchase anyway? However, once that information enters the pile of big data about me and my consumer segment, it becomes possible to gain insights about me that I may never have provided to anyone. The really interesting part is not what I purchased at the petrol station, but how that information, combined with the other individual pieces of personal information I have sold on the information market, can reveal new information and insights about me. While I may control or restrict access to information about my fuel purchases, how would I control the new information and insights that can be computed about me from the pile of big data?
The traditional approach to restricting, limiting and controlling access to personal information “has remained largely unchanged since the 1970s” (Solove 2013: 1880). It has been to ask people to consent to the collection and use of their personal information, on the basic assumption that people are able to “make conscious, rational and autonomous choices about the processing of their personal data” (Schermer, Custers & van der Hof 2014: 171). This approach clearly fails today, when people are asked to enter several consent agreements a day as they navigate the digital information environment and use digital media. People often consent without reading the agreements in full, and frequently they do not understand the details of the agreements they enter. In other words, the traditional approach to privacy needs to be reconsidered and reconceptualised.
The important question, however, is not whether big data and digital media increase the risk to privacy – the right to privacy is clearly at risk in the digital information society. The real question is whether big data and digital media fundamentally change the character of that risk. If the risk to privacy is merely greater in the digital information society, then the laws and rules that currently protect privacy may still work in the new information age; all we need to do is redouble our existing efforts. However, there are clear indications that the problem has changed in character. The traditional approach to privacy protection – consent combined with the ability to restrict, limit and control personal information – falls short given new information and communication technologies. In other words, we need new solutions and new conceptual approaches to understand privacy in the digital information society.
While there have been a number of proposals for new and improved understandings and definitions of informational privacy in the digital information society, it is my sense that we need to change the metaphors we use to discuss privacy. I will here follow Agre's (1994) programmatic paper, in which he argues that the notion of privacy ought to be re-conceptualised from a “surveillance model” (Agre 1994: 101) to a “capture model” (Agre 1994: 101). I build on Agre's work and extend it with a “datafication model” of privacy – I have discussed these models in more depth in a recent paper (Mai 2016a).
The objective behind the shift in focus from the surveillance model to the capture and datafication models is to move attention away from the collection and use of data and towards the codification of activities and the creation of new information about people.
I will use Agre's (1994) original, rather loose definition of a model: not a precise theory, but a set of metaphors – a way of conceptualising and talking about a phenomenon.
The focus of the surveillance model of privacy – the panopticon model – is therefore on the tensions between the watchers and the watched, between public and private spheres, and on inherent power relations.
In the capture model of privacy the focus is on the codification of activities, the sociotechnical nature of computer technology, and the unclear purposes of data collection.
In the datafication model of privacy the focus is on the anonymous creation of new personal information, the reinterpretation and statistical analysis of data, and the commodified nature of personal information.
The three models of privacy presented here are not competing views or approaches to informational privacy; they present three different views of the same problem sphere. The three models highlight different aspects and different perspectives of the privacy situation, and as such allow us to research and focus on different aspects of the consequences of big data and digital media in the contemporary digital information society.
The purposes of introducing these three models of privacy to this panel on Big Brother and Little Sisters are (i) to allow us to question the presumptions and understandings about privacy and surveillance that are inherent in notions such as Big Brother and Little Sisters, and (ii) to present a conceptual framework of privacy that allows us to handle the privacy challenges created by the production of new knowledge that big data analysis and digital media usage generate. These three models of privacy – and perhaps especially the datafication model of privacy – could form the ethical basis for new digital media research and practice.