Our relationship with technology is deeply paradoxical. On the one hand, we buy and constantly use more devices and apps, leaving ever more traces in digital space. On the other hand, we increasingly fear the dark sides of technology dependence and data abuse. Inadequate knowledge and errors make it difficult to predict unintended consequences, and problems often emerge from deliberate choices to pursue some interests while ignoring others. Hot topics include data privacy, potentially biased or discriminatory algorithms, the tension between free choice and manipulation, and the optimization of questionable outputs while ignoring broader effects.
Fighting unintended consequences means getting to the roots of the problems. As for personal data, users should gain more control over what they share. Greater transparency, particularly about how algorithms use data, can also help avoid dystopian outcomes. The high concentration of power among a few global players should be watched closely, and societies need to be critical of their actions and objectives. Even seemingly noble motives come at a price, and this price needs to be negotiable.
Automated and personalized interactions may increase the relevance of marketing offers, but they also have less positive economic and psychological consequences for consumers. Machine-learning-based prediction algorithms can approximate individuals’ preferences and willingness to pay with ever greater precision, and companies can use this knowledge to charge higher individual prices. Typically, consumers freely hand over all the information needed to reveal their preferences, and they seem to underestimate the value of their personal data. There is another discomforting aspect of giving away personal data: it means giving up privacy and, as a result, losing autonomy.
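How such pricing might work is easy to sketch. The following Python snippet is a minimal illustration of the mechanism, not any company’s actual system: it trains a standard regression model on synthetic behavioral traces (the feature names are invented) to predict willingness to pay, then prices each consumer just below that prediction.

```python
# Minimal sketch of ML-based personalized pricing. All data are
# synthetic and the features are hypothetical, chosen only to
# illustrate the mechanism described in the text.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Behavioral traces a platform might observe (illustrative).
X = np.column_stack([
    rng.integers(1, 50, n),   # past purchases
    rng.uniform(0, 120, n),   # minutes browsed per week
    rng.integers(0, 2, n),    # premium-brand affinity flag
])
# Latent willingness to pay, loosely tied to behavior, plus noise.
wtp = 20 + 0.8 * X[:, 0] + 0.15 * X[:, 1] + 12 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, wtp, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Price each consumer just below their predicted willingness to pay,
# capturing surplus that a uniform price would leave on the table.
personalized_price = 0.95 * model.predict(X_test)
print(f"Uniform price:             {y_train.mean():.2f}")
print(f"Mean personalized price:   {personalized_price.mean():.2f}")
print(f"Top-decile personal price: {np.percentile(personalized_price, 90):.2f}")
```

The point of the sketch is that nothing exotic is required: ordinary behavioral data and an off-the-shelf model are enough to approximate individual willingness to pay.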
Preventing negative outcomes is typically a task for regulators, but finding solutions can be difficult. Therefore, companies need to address consumer concerns in their own policies as well. To avoid dystopia, managers need to take consumer psychology into account and resist the temptation to maximize short-term profits at the expense of consumers. Avoiding marketing dystopia is in the best interest of all market participants – at least from a longer-term perspective.
Some algorithms may display discriminatory tendencies similar to those of humans. The study presented here investigates gender bias in social media advertising in the context of STEM careers. Results suggest that advertising algorithms are not gender-biased as such, but that economic forces in the background can lead to unintended, uneven outcomes. Spillover effects across industries make some consumer segments more likely to be reached than others. Because women are more likely to react to advertising, other advertisers bid more for their attention, which makes female impressions more expensive; a gender-neutral campaign that maximizes reach per dollar therefore ends up shown mostly to men. One potential solution could be running separate campaigns for men and women to reach both groups equally. However, anti-discrimination legislation in many countries does not allow companies to target employment ads to only one gender. Ironically, laws designed to prevent discrimination thus rule out a fairly simple way to correct the bias in online targeting on Facebook and other platforms, illustrating the need for further policy guidance in this area.
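The underlying economics can be illustrated with a deliberately simple calculation. All numbers below are invented and real ad auctions are far more complex, but the skew survives in any reach-per-dollar optimization:

```python
# Toy illustration of the mechanism described above. CPMs and the
# budget are invented; real auction dynamics are far more complex.
BUDGET = 10_000.0                # campaign budget (hypothetical)
CPM_MEN, CPM_WOMEN = 4.0, 7.0    # assumed cost per 1,000 impressions;
                                 # women cost more because competing
                                 # advertisers bid heavily for them

# (a) Gender-neutral, reach-maximizing delivery: every dollar flows
# to the cheaper impressions, so the audience skews male.
neutral_men = BUDGET / CPM_MEN * 1_000
neutral_women = 0.0

# (b) Equal delivery requires unequal, gender-aware budgets, i.e. the
# separate campaigns that employment-ad rules in many countries forbid.
per_group = BUDGET / (CPM_MEN + CPM_WOMEN) * 1_000   # impressions each
cost_men = per_group / 1_000 * CPM_MEN
cost_women = per_group / 1_000 * CPM_WOMEN

print(f"(a) neutral: {neutral_men:,.0f} male vs. {neutral_women:,.0f} female impressions")
print(f"(b) equal reach of {per_group:,.0f} each costs {cost_men:,.2f} vs. {cost_women:,.2f}")
```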
In the 2016 U.S. presidential election, the vast majority of available polls showed a comfortable lead for Hillary Clinton throughout the race, but in the end, she lost. Campaign managers could have known better if they had taken a closer look at other data sources and variables that – like polls – reflect voter engagement and preferences. In the political arena, donations, media coverage, social media followership, engagement and sentiment can similarly indicate how well a candidate is doing, and most of these variables are available for free.
Validating the bigger picture with alternative data sources is not limited to politics. The latest marketing research shows that online consumer-behavior metrics can enrich, and sometimes replace, traditional funnel metrics. Trusting a single ‘silver bullet’ metric does not just lead to surprises; it can also mislead managerial decision-making. Econometric models can help disentangle a complex web of dynamic interactions and separate the immediate from the lagged effects of marketing or political events.
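As a concrete illustration, even a simple distributed-lag regression separates immediate from delayed effects. The snippet below simulates its own data, so the coefficients are assumptions for demonstration rather than findings from the research discussed here:

```python
# Sketch of a distributed-lag model: the outcome responds to an event
# variable (ad spend, media coverage, ...) both immediately and with a
# one-period delay. Data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 200
spend = rng.gamma(2.0, 50.0, T)    # weekly event variable, e.g., ad spend

# True process: immediate lift of 0.30 plus a 0.12 one-week carryover.
sales = 100 + 0.30 * spend + 0.12 * np.roll(spend, 1) + rng.normal(0, 5, T)

df = pd.DataFrame({"sales": sales, "spend": spend})
df["spend_lag1"] = df["spend"].shift(1)   # last week's spend
df = df.dropna()

X = sm.add_constant(df[["spend", "spend_lag1"]])
fit = sm.OLS(df["sales"], X).fit()
print(fit.params)   # recovers roughly 0.30 (immediate) and 0.12 (lagged)
```

Real applications replace the simulated series with observed metrics and typically add more lags, controls and interaction terms, but the logic of disentangling effects over time is the same.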
Even the dark web has its bright sides: it can serve as an unregulated testbed for technologies that eventually appear on the surface. It is also a useful place to study consumer privacy and to preview what the surface web might look like under an extreme level of consumer data protection. In such a world, even our best customers might look like never-before-seen individuals until they decide to reveal themselves. If there is trust and a worthwhile value exchange, consumers might be willing to share their data rather than enact all of the hyper-privacy available to them. To seize the opportunities, companies should take stock of their customer relationships, hone their data needs, and learn which information is critical, advantageous or irrelevant in their context. They should implement initiatives that drive choice carefully, within a trusting relationship.
The progress of artificial intelligence and new technologies has triggered heated debates about the future of human life. While fans of the singularity claim that artificial intelligence will become smarter than human beings and should take over the world, for others such a vision is a sheer nightmare. Douglas Rushkoff is clearly part of the second group and takes a passionate pro-human stance. He explains why ceding too much ground to technologies is a mistake and why humans deserve a place in the digital future. Already today, technologies shape our lives far more strongly than most of us would believe. For him, being human is a team sport, and he calls for a more conscious use of technologies that keeps us in rapport with other people. To safeguard humanness in a tech world, he advises carefully selecting the values we embed in our algorithms. Rather than serving perpetual growth, technologies ought to help people reconnect with each other and with their physical surroundings.
In our augmented world, many decision situations are designed by smart technologies. Artificial intelligence helps reduce information overload, filter relevant information and limit an otherwise overwhelming abundance of choices. While such algorithms make our lives more convenient, they also serve organizational objectives that users may not be aware of and that may not be in their best interest. We do not know whether algorithms truly optimize the benefits of their users or rather a company’s return on investment. They are designed not only for convenience but also to be addictive, and this opens the door to manipulation. Augmented decision making can therefore undermine freedom of choice. To limit these threats and enable humans to be critical of the outcomes of artificial intelligence–driven recommendations, everybody should develop “algorithmic literacy.” It involves a basic understanding of artificial intelligence and of how algorithms work in the background. Algorithmic literacy also requires that users understand the role and value of the personal data they sacrifice in exchange for decision augmentation.
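What such an organizational objective can look like inside a recommender is easy to make concrete. The following sketch is entirely hypothetical (the weights and items are invented), but it shows how blending user relevance with engagement and margin can push the item that is best for the user off the top spot:

```python
# Hypothetical ranking function blending user- and company-facing
# objectives. Weights and catalog are invented for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    relevance: float    # fit with the user's stated need
    engagement: float   # predicted "stickiness"
    margin: float       # company profit per recommendation

def rank(items, w_rel=0.3, w_eng=0.4, w_mar=0.3):
    """Score = weighted blend of the three objectives. With
    w_rel < w_eng + w_mar, the best item for the user can lose."""
    score = lambda i: w_rel * i.relevance + w_eng * i.engagement + w_mar * i.margin
    return sorted(items, key=score, reverse=True)

catalog = [
    Item("best match for the user", relevance=0.9, engagement=0.3, margin=0.2),
    Item("addictive, high-margin",  relevance=0.4, engagement=0.9, margin=0.8),
]
for item in rank(catalog):
    print(item.name)    # the addictive, high-margin item ranks first
```

Nothing in the user interface reveals these weights, which is precisely why algorithmic literacy, an awareness that such trade-offs exist at all, matters.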
In a recent survey, about 900 “Leaders of Tomorrow” from more than 90 countries shared their opinions on “the impact of new technologies on human freedom of choice.” They take a very clear stance against unlimited freedom of speech on the Internet. The majority thinks that platforms which have so far often taken a “hands-off” approach, rejecting content filtering by claiming they are “just the messenger,” should be obliged to prevent and censor hate speech and fake news on the Internet. Platforms are expected to work hand in hand with state institutions to better prevent online manipulation and abuse and to protect personal data. The Leaders of Tomorrow also advocate that personal data be controlled by their owners whenever online platforms use them. Applications that lack transparency and cannot be influenced by the customer meet with the strongest objections.