Should we ban war robots or are they something we might want? What can computer games tell us about our morals? Is it OK to love a robot? What is ethical design in the digital world?

How do we need to regulate the algorithms that impact our lives? The digital transition puts our ideas about morality to the test, presenting us with new questions in all areas of life: politics, economy, social life, communication, entertainment. In twenty contributions, experts from Europe, America and Asia rise to the challenge of finding answers to some of the new issues confronting us.

The authors offer new perspectives on topics like robots for eldercare, autonomous vehicles, personal drones or data ethics. They present their ideas on how we, as a society, can deal with the digital challenges to our ethics and values. Their contributions provide insights into highly topical reflections on what is morally right in our digital era. Above all, they are an invitation to think and to join the discussion.


Contents

The reinvention of ethics is our job!
Introduction by Philipp Otto and Eike Gräf

Digitalization as ethical challenge
Interview with Rafael Capurro

The Mangrove society
Sharing the infosphere with artificial agents

Luciano Floridi

Sex robots and robot sex from an ethical perspective
Oliver Bendel

Necessary algorithms
Thoughts on the new techno-political conditions for cooperation and the collective

Felix Stalder

What does ethical design look like in the age of emotional malware?
Fake news, machine learning, and creating user transparency in an age of user mistrust inside large-scale networks

Caroline Sinders

Personal drones and value sensitive design
David G. Hendry

The doctor will not see you now
The algorithmic displacement of virtuous medicine

Brent Mittelstadt

Building ethical robots for eldercare
Susan Leigh Anderson and Michael Anderson

The need for moral algorithms in autonomous vehicles
Ryan Jenkins

Terminator ethics: Should we ban “killer robots”?
Jean-Baptiste Jeangène Vilmer

Death, violence, sex: The matter of morals in games
Stephan Petersen and Benedikt Plass-Fleßenkämper

Skinner boxes all the way to the singularity
Tom Chatfield

The withering of freedom under law?
Blockchain, transactional security and the promise of automated law enforcement

Karen Yeung

Anthropomorphobia
Exploring the twilight area between person and product

Koert van Mensvoort

Data ethics
Developing a new business ethics

Gry Hasselbalch and Pernille Tranberg

The internet is not gender neutral
On gender ethics in internet public spaces

Hu Yong

Internet access as a human right—a step towards equality?
Interview with Kosta Grammatis

Data protection and ethics
Giovanni Buttarelli

What digitalization means for the future of our energy systems
Rafaela Hillerbrand, Christine Milchram, Jens Schippl

Of robots and humans—where is the real danger?
Interview with Kate Darling

Contributors

Editors

Notes

Imprint

The reinvention of ethics is our job!

Introduction by Philipp Otto and Eike Gräf

Our world has changed. Digitalization profoundly changes how we work, play, live, communicate and relate to each other. Almost every aspect of life nowadays has some digital element to it, in order to make it faster, more efficient, more sustainable, to automate it or to plan ahead. We collect data about what we are doing, analyse it, create statistics, search for correlations and causalities, and subsequently adapt our future behaviour. Smart energy grids give detailed information about the energy consumption of cities, neighbourhoods, streets and individual homes; fitness trackers log our physical activities; browsers analyse our online history; social networks look for hints of depression; navigation apps propose the best itinerary; cars warn their drivers when they show signs of fatigue. Sometimes we use the information ourselves to adapt our own behaviour, sometimes others use it to change their interactions with us, and sometimes our collective data are used for decision-making about larger political or commercial projects. Based on digitally generated and transmitted information—whether true or false, reliable or uncertain, revealing or misleading—we organize our social systems and other complex processes and situations. Wrapped in procedures of automated processing, the data we collect and analyse may directly influence our behaviour or have an impact on our scope of action. Experts caution that recommendations based on data analytics are often not as neutral and objective as they may seem, but incorporate many of the biases and injustices that exist in our societies, often without us being aware of it. Furthermore, the automation of processes may hide decisions (literally in the backend of our information processing systems) that we would otherwise have to make actively and subject to our conscience. Nowadays, it can be very tempting to justify a morally difficult decision by saying that “what the data suggested” was the right thing to do.

Ethics is the subdivision of philosophy that addresses the principles and the evaluation of human conduct. Ethics is not something that we can consider or ignore based on our mood. Our value system is always present in our actions—consciously or unconsciously. At the same time, much care and profound reflections are needed to make explicit and justify what is ethically desirable and what is not. The digital transformation brings a lot of opportunities and pitfalls in this regard—and it puts many questions in front of us.

Should we ban killer robots?

Why should we treat our digital environment with respect?

How shall drones and other agents of the Internet of Things behave among us?

Based on what criteria do robots determine how to act in morally complicated situations?

Is it okay to love a robot?

Is artificial intelligence a menace to humankind?

What should we require of the algorithms that shape our lives?

Do we want to automate our rules and their enforcement?

In what respects do we need an option to break the rules?

We should address these questions properly in order to build the right environments that allow us to lead moral lives. And we need to define together what constitutes such a moral life in the digital society. Whenever we tackle decisions with an ethical dimension, we shouldn’t lose the connection to the respective context over our preoccupation with numbers and other data. The moral value of any decision depends on the context in which it was taken. This context can rarely be reduced to a set of data. Instead of relying more and more on data to find out how people behave, we should invite the people and communities that are affected by a given question to enter into a dialogue about what the right thing to do would be.

Ethics can help to find solutions for social problems and challenges. It can also guide us in shaping our tools and processes, and even before that, it can help us reflect on the situations that we encounter as well as on our conduct as individuals and as groups. In that respect, ethics can be part of a very political process. It can and should be the basis for our rules and laws. And we need new rules in order to come to terms with the novelties of the digital world. The rules that currently govern our lives, our laws and customs, were for the greatest part developed in a time before the internet. The contexts, the situations, and our lives in general have changed.

We chose twenty topics that keep ethicists, researchers of various disciplines, politicians and citizens busy, and which we think are worthy of vivid debate. We invited a different expert to address each of these topics with their own ideas and perspectives. We asked them to present their personal stance on the respective issue, to point out provocative observations, or to make a claim that sparks a debate. And they did. We received inspiring contributions from a very diverse set of authors, coming from different backgrounds, disciplines, and countries, with very different writing styles and ways of reasoning. We thank all the contributors for their valuable work, for the thought that they gave to their specific topics and for their kindness and helpfulness in the creation of this book.

We believe that the topics in this collection are important for our future. And we hope that the contributions will gain traction and become part of a larger discussion that goes beyond the realms of academia and other expert communities. The world is changing at an incredibly rapid pace. Wherever reality challenges our current system of values, ethics becomes increasingly important to guide our actions. We think that everybody should be invited to take part in the deliberations about ethics for the digital age.

We have to define very specific contexts to morally evaluate actions, procedures, applications, organizations and systems and to come up with ethically desirable ways to shape them. Sometimes, these are very new situations. Sometimes, they are just the same ancient trade-offs and dilemmas as hundreds of years ago, only hidden under a layer of technology. The personalization of online content, for instance, makes it increasingly difficult to tell whether a recommendation is in the best interest of a person or whether it represents a morally reprehensible attempt at manipulation. We need to fully understand what we are facing before we can take a moral stance towards an issue. We hope that this book will be useful in getting there.

Digitalization as ethical challenge

Interview with Rafael Capurro

Mr. Capurro, digitalization encompasses—and changes—our everyday lives more and more. Does it also affect our understanding of ethics, developed as it has over centuries? Has something like digital ethics already emerged?

If you understand ethics to be our diehard conventions and practices, then the answer to this question is yes. Those behaviours characterized in Latin as “mores” are not carved in stone, but have over time constantly been subjected to change. We adapt them when our living conditions change.

Given modern-age changes in society, economics, politics, art, science and engineering, the digital revolution is comparable to that which began in 16th century Europe, reaching a summit with the 19th century industrial revolution, and continuing with the scientific, technical, political, economic and cultural upheavals of the 20th century. This change deeply affected the European self-image. No longer was God at the centre of things, but rather the human being. The human being was no longer in God’s image, but was instead an autonomous, self-defining, world-shaping being. He understood himself as a subject relative to everything (himself included) becoming an object of empirically quantifiable research. With this, the ambivalent victory march of modern technology took its course. The subject-object dichotomy shaped the modern European, and made the scientific and technical conquest and exploitation of the world possible. This was not just about using and exploiting nature on the basis of technical innovation, but rather how to rule all peoples politically, economically and culturally. For this, the framework of moral conceptions assumed from the ancient world to the middle ages—and the forms of legal and political legitimation based on them—had to be altered. How are morals possible without religion? How does state violence legitimize itself where there’s no longer a king by God’s grace? How is the relationship between state and church to be understood without the unity of throne and altar? What are the possibilities and limitations of action for the European man when there are no unquestionable, dogmatically decreed commandments and prohibitions from a higher authority? On the basis of what processes and through what institutions are they determined, justified, evaluated and carried out?

In the last twenty years, digitalization and in particular the digital connection of the world have brought about a new anthropological and cultural global revolution that has spread with breathtaking speed. Just as the modern Europeans constituted themselves as subjects, so do we (but who are “we”?)—paradoxically expressed—understand ourselves as networked subjects and objects. Through this, questions regarding freedom and autonomy are altered, as indeed they were altered in modern times relating to an independence from state and church heteronomy, and the heteronomy posed by a deterministically understood nature.

Digital ethics in the sense of a critical reflection about a good life in a digitally shaped world had indeed already emerged in the 1940s. At that time, one spoke of computer ethics, often meaning a professional ethics for computer scientists—although it was clear that it in fact concerned the impact of computer technology on society as a whole. The expression “digital ethics” is of course somewhat more recent. I have used it since 2009, with the Institute for Digital Ethics at Stuttgart Media University beginning its work in 2014.

The debate concerning information ethics is not merely an academic one, however.

No, the ever-increasing media reports show that there is a society-wide discussion underway, because digitalization charts and changes our way of life both locally and globally. The expression “ethics” should, however, be reserved for the philosophical discipline whose object is mores in the sense of lived customs and habits. Otherwise there is the danger of confusing the reflection with its object. Well-known examples of this are the economic sciences and the economy. The task of ethics is the problematization of morals.

By definition, ethics should assist us in making moral decisions. How well does this function in an ever-faster, ever-more opaque digital world?

The manner of assistance that ethics, that is ethical theories and analyses developed in different cultures and epochs, can provide is that of elucidating modes of action and their effects on actors in their particular world. The rich diversity of ethical reflections over two-and-a-half thousand years of Western history shows how complex this discipline is. It also demonstrates how different the elucidations are when they uncover prejudice, when they problematize a seemingly clear concept, when they analyse different courses of action and their repercussions, when they are responsive to perspectives other than those of one’s own language and culture. Above all, they do not lessen the responsibility of the actors when it concerns their moving in one direction or the other, as well as the weighing up of advantages and disadvantages for themselves as well as for other animals (and for nature too).

It is critical that ethics does not lessen anyone’s responsibility to go their own way, or influence what advantages and disadvantages their actions have for their own lives and the lives of others. It does not give you a free pass. This is even more crucial when the relations defining everyone’s life change so rapidly that morals and legal norms suddenly exhibit dysfunctional traits.

When our assumptions and agreements regarding the good life become problematic due to technical or social changes, then it is time for an ethical reflection that provides both the society and the legislator with food for thought. When we’re ill, we are happy to have medical research, good doctors and hospitals to recommend how we should live our lives differently. If we no longer know how to manage the digitalized life, we need well-informed ethical research. You can’t get that at the touch of a button. You need time to think.

Let’s talk about privacy: what are the consequences when businesses, employers, friends and relatives know everything about us? Is our freedom in danger? What are the downsides of too much surveillance?

It always depends on how we define “us”, and what is meant by “our freedom”. Privacy is neither a relic from civilized society, nor is it a hopeless fight against windmills in a restless and consistently data-hungry information society. Rather, it is the relationship between public and private in the sense of an open social game of hiding and revealing oneself. I see this as a constant (with many variations) of all human cultures that is worth analysing in more detail.

We desperately need global rules of fair play for the digital networking of the world, as well as corresponding national and international agencies to ensure compliance with these rules. This can’t happen on a “once and for all” basis because technical developments always throw up new questions. We need different modes and institutions of political dialogue and accompanying academic research centred on ethical and legal questions of the information society. When we speak of private and public, nothing less than the definition of human freedom in the 21st century is on the line.

Today, we post and tweet around the clock. Is the intensive use of social networks and online platforms an expression of the human requirement for recognition?

Without a doubt. But it’s not just that. It appears to me that the opportunities provided by social networks and online platforms often give rise to narcissism, exhibitionism and voyeurism. It’s also very clear that interactive media has given us possibilities of self-expression that formerly didn’t exist. But today’s reality of social media and online platforms is more complex. This is not just due to the use and misuse of personal data, but also to different moral and legal norms that inform these media and their dependence on political, economic, legal and cultural interests and frameworks.

Through analysing the ambivalence of social media in an African context, I established that these networks follow an ethical imperative: uninterrupted communication for all, always! This imperative is only effective when carriers promise not to share personal data with third parties without consent—a promise that was disproved, at the latest, by Edward Snowden. This apparently total communication leads paradoxically to a situation the American sociologist Sherry Turkle has described as being “alone together”. Human freedom means the freedom to hide or to reveal who we are. An imperative of total revelation is just as inhuman as one of total concealment.

Obviously, imagining an online life as being separate from physical existence leads to pathological situations. It is very similar to being dependent on drugs. We need ethical and medical research concerning the pathologies of the information age.

According to this, our online life is not separate from the physical world?

It appears evident to me that we’re not talking about two worlds here—one physical and one digital. The digital world network shapes life in the physical world to an ever-larger extent. As an example, just think about the internet of things—TV sets, radios, fridges, surveillance cameras connected to the internet and forming their own networks.

Through their digital interconnection, physical things are no longer what they were before being networked. When the ontological mode of things changes, we change too. In place of the modern human subject, who stands opposite objects or other subjects, we now have digitally connected humans and things, both living and non-living. Basic questions have to be reconsidered: what do autonomy and heteronomy mean? How has the relationship between private and public changed? What are the effects of digitalization on the environment (think of electronic waste)? What new possibilities arise for a transformation of democracy in the 21st century? What do education and training mean in the digital age? In short, we are looking to answer the question: what is digital enlightenment?

Is there a possibility that the increasing connectedness of people with smartphones somehow resembles the opening of Pandora’s Box?

Today, life (work, production, action) increasingly takes place in hybrid form. It is often vitally important to take time out from this permanent state of availability—in both your private and working life—and to create digital-free spaces and times. Being digital is one form of being “in the world”, not one that is divided from the physical world. I don’t believe that all evil can be found in a box, or—as the myth actually has it—in a jar. Or if it is, we mean its impact on the open and ambivalent possibilities of human action. Neither smartphones nor machine-to-machine communications are inherently good or bad. Such properties always arise in a social and historical context—they are therefore properties of the second order.

You’re talking about the ‘robotization’ of humans. Where do you currently see the greatest perils coming from?

We should bear in mind who we are talking about: who is deciding that robots should leave factory floors and enter into the everyday life of people? How and why are robots deployed in households, hospitals, hotels, care homes, crèches, restaurants, schools or universities, and upon what ethical and legal criteria is their deployment evaluated? Their success or failure depends, amongst other things, on considerations regarding current customs and habits—that is to say, on mores just as much as on laws. The formulations “robotization of the human” and “greatest perils” are evidence that a European-Western perspective informs these questions. They become economically relevant when questions of trust or mistrust among potential buyers in the so-called West hang in the balance. In Eastern cultures, such as Japan’s, things are different. In Japan, the history of robotics is connected to toys and marionettes as well as to the Buddhist conception of the self. European modernity also has a tradition of a more playful perspective on machines—consider the mechanized devices at Renaissance European courts. Nevertheless, the modern European human is characterized by a strict divide between subject and object. The ideal of self-determination that came from Europe and spread all over the globe formed an immunization against mechanization and robotization, particularly by stressing the protection of human rights. Now, in an epoch of digital world networking, this European anthropocentrism has suddenly come into question.

You haven’t quite answered my question yet: is robotics dangerous or not?

Well, the current debates on robotics focus on topics like the deployment of war drones or ever-increasing public surveillance. Balancing the interests of freedom and security cannot offset the fundamental ambivalence of the use of such technologies. In everyday life the question is: how far, and on what grounds, will I delegate my freedom and self-responsibility to an algorithm? When and for whom is this heteronomy a good decision? When should I do without the help of algorithms and take decisions into my own hands? The ethical question of whether a technology like robotics burdens or unburdens individual and social freedoms is anything but trivial. We need to think deeply about it.

Car manufacturers all over the world are working intensively on autonomous driving. This is a further great push towards networking for us, and one that also raises ethical questions. Who should make sure that a corresponding discourse gets underway?

Digital networking raises the question: what exactly is a 21st century automobile when we talk about autonomous driving? It’s about built-in norms, one’s mobility, and that of others, and all of this must eventually be in harmony.

Norms and rules don’t just fall from the sky. They have formed and changed in response to the available means of transport and geographic conditions. Some people talk about programming ethical rules into autonomous cars. But then the cars would not be autonomous, because others would be setting the rules that govern their behaviour. Using the term autonomy in this case contradicts the influential concept of autonomy in European modernity. Kant, for example, defined autonomy in terms of freedom being the core of human dignity. Ethics in the sense of a critical reflection about morals doesn’t allow for programming, of course. We can only turn fixed rules and laws into algorithms—and rules and laws always leave the task of interpretation, something that an autonomous car following set rules cannot accomplish. It is not these tools, but rather their manufacturers, programmers and buyers who face a moral and legal dilemma that is very difficult to solve.

Do you have any advice at hand?

In the poem An die Deutschen (To the Germans) the German poet Hölderlin writes: “Oh you good people! We too are lacking in deeds and full of thought!” I wish we were more contemplative. The strict division of the philosophical world from politics and the economy is ominous.

What equipment do schools and educational institutions need to ensure that it’s not just technology that shapes the world of the digital-native generation?

In a globalized world, foreign languages are indispensable. Only through learning another language can you relativize your own worldview, visit those of others, and learn directly from them. The history of science has alerted us to the openness and changeability of theories and concepts. From the history of technology, we can learn how and why something doesn’t work. Not merely because a machine is broken, but also because the foundational idea and its implementation can show cracks. When this is taught in correspondence with recent IT history, a light bulb may go on in the heads of a few students as to what “inventive talent” is. One would then learn to understand history from a future perspective. Regardless of the sources used by students and how they share their knowledge with others, it is important to have an open discussion. We don’t even have to mention the word ethics. The slogan, however, is this: learn for the future and support those who are socially disadvantaged. Ethical questions are always questions about life.

Translated from German by David Meeres.

A shorter version of this interview with Hilmar Dunker and Ralf Bretting appeared in the magazine “Business Impact – Digitale Wirtschaft” (4/2015); www.businessimpact.eu.

The Mangrove society

Sharing the infosphere with artificial agents

Luciano Floridi

The car industry has been at the forefront of the digital revolution since its beginning, first with industrial robotics, and now with AI-based driverless cars. The two phenomena are related and can teach us a very important lesson when it comes to understanding how human and artificial agents will cohabit our new world.

Consider industrial robotics first, e.g. a robot that paints vehicle components in a factory. The three-dimensional space that defines the boundaries within which such a robot can work successfully is called the robot’s envelope. Some of our technologies, such as dishwashers or washing machines, accomplish their tasks because their environments are structured (enveloped) around the elementary capacities of the robot inside them. We do not build droids like Star Wars’ C-3PO to wash dishes in the sink exactly as we would. Instead, we envelop micro-environments around simple robots to fit and exploit their limited capacities and still deliver the desired output.

Enveloping used to be either a stand-alone phenomenon (you buy the robot with the required envelope, like a dishwasher or a washing machine) or implemented within the walls of industrial buildings, carefully tailored around their artificial inhabitants. Nowadays, and we come to the second point, raised by driverless cars, we are enveloping whole environments into a technology-friendly infosphere. When we speak of smart cities what we really mean is that we are transforming social habitats into places in which robots can operate successfully.

Enveloping has started pervading all aspects of reality and is visible everywhere, on a daily basis. We have been enveloping the world around information and communications technologies (ICT) for decades without fully realizing it. In the 1940s and 1950s, the computer was a room and Alice used to walk inside it to work with it. Programming meant using a screwdriver. Human–computer interaction was a somatic (body-related) relation. In the 1970s, Alice’s daughter walked out of the computer, to step in front of it. Human–computer interaction became a semantic (meaning-related) relation, later facilitated by DOS (Disk Operating System) and lines of text, GUI (Graphical User Interface), and icons. Today, Alice’s granddaughter has walked inside the computer again, in the form of a whole infosphere that surrounds her, often imperceptibly. We are building the ultimate envelope in which human–computer interactions have become somatic again, with touch screens, voice commands, listening devices, gesture-sensitive applications, proxy data for location, and so forth. As usual, entertainment and military applications are driving innovation, but the rest of the world is not lagging much behind. If driverless vehicles can move around with decreasing trouble, if Amazon will soon deliver goods through a fleet of unmanned drones, this is not because real AI applications have finally arrived—robots that think, understand or feel like you and me or better—but because the “around” they need to negotiate has become increasingly suitable for AI and its very limited capacities.

Enveloping the world by transforming a hostile environment into a digitally friendly infosphere means that we are going to share our habitats not just with forces of nature and animals, but also, and sometimes primarily, with artificial agents. This is not to say that true artificial agents are in view. We do not have semantically proficient technologies, agents that understand things, worry about them, or feel passionate about something. So the false Maria in Metropolis (1927), Hal 9000 in 2001: A Space Odyssey (1968), C-3PO in Star Wars (1977), Rachael in Blade Runner (1982), Data in Star Trek: The Next Generation (1987), Agent Smith in The Matrix (1999) or the disembodied Samantha in Her (2013) are and shall remain mere sci-fi. The deeper point is that the failed arrival of AI does not matter. There are so many data, so many distributed ICT systems communicating with each other, so many humans plugged in, such good statistical and algorithmic tools, that purely syntactic technologies can bypass problems of meaning and understanding, and still deliver what we need: a translation, the right picture of a place, the preferred restaurant, the interesting book, a good song that fits our musical preferences, a better priced ticket, an attractively discounted bargain, the unexpected item we did not even know we needed, the correct interpretation of a radiography, and so forth. They are as stupid as an old fridge, and yet our smart technologies play chess better than us, park a car better than us, predict potential faults in an engine better than us … Artificial memory (as in data and algorithms) outperforms human intelligence in an increasing and boundless number of tasks. The sky is the limit, or rather our imagination in finding ways to develop and deploy our smart technologies.

So it turns out that some of the issues we are facing today—especially in e-health, financial markets, or safety, security, and conflicts—already arise within highly enveloped environments.

In the nineties it was still common to be asked whether one was online, or connected. Today, in many advanced information societies, the very question has become meaningless. Imagine being asked whether you are online by someone who is talking to you through your smart phone, which is linked up to your car sound system through Bluetooth, while you are driving following the instructions of a GPS, which is also downloading information about traffic in real time. The truth is that we are neither online nor offline but onlife, that is, we increasingly live in that special space that is both analogue and digital, both online and offline. An analogy may help. Imagine someone asking whether the water is fresh or salty in the estuary where the river meets the sea. That someone has not understood the special nature of the place. Our information society is that place. And our technologies are perfectly evolved to take advantage of it, like mangroves growing in brackish water.

In an enveloped world, in the mangrove society, all relevant (and sometimes the only) data available are machine-readable, and decisions as well as actions may be taken automatically, through sensors, actuators, and applications that can execute commands and output the corresponding procedures, from alerting or scanning a patient, to buying or selling some bonds. Examples could easily be multiplied.

The consequences of enveloping the world to transform it into an ICT-friendly place are many, but one is particularly significant and rich in consequences. Humans may become inadvertently part of the mechanism. This is exactly what Kant recommended we should never do: treating humans merely as a means rather than as an end. Yet this is already happening, in two main ways.

First, humans are becoming a new means of digital production. The point is simple: sometimes our ICTs need to understand and interpret what is happening, so they need semantic engines like us to do the job. This fairly recent trend is known as human-based computation. A classic example is provided by Amazon’s Mechanical Turk. The name comes from a famous chess-playing automaton built by Wolfgang von Kempelen (1734–1804) in the late eighteenth century. The automaton became famous by beating the likes of Napoleon Bonaparte and Benjamin Franklin and putting up a good fight against a champion such as François-André Danican Philidor (1726–95). However, it was a fake because it included a special compartment in which a hidden human player controlled its mechanical operations. The Mechanical Turk plays a similar trick. Amazon describes it as “Artificial Artificial Intelligence”. It is a crowdsourcing web service that enables so-called requesters to harness the intelligence of human workers, known as “providers” or, more informally, “turkers”, to perform tasks, known as HITs (Human Intelligence Tasks), which computers are currently unable to perform. A requester posts a HIT, such as transcribing audio recordings, or tagging negative content in a film (two actual examples). Turkers can browse and choose among existing HITs and complete them for a reward set by the requester. At the time of writing, requesters can check whether turkers satisfy certain qualifications before being allocated a HIT. They can also accept or reject the result sent by a turker, and this reflects on the latter’s reputation. “Human inside” is becoming the next slogan. The winning formula is simple: smart machine + human intelligence = clever system.
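The HIT lifecycle just described can be sketched in a few lines of code. This is a minimal, hypothetical model, not Amazon’s actual API: the class and function names (`Turker`, `HIT`, `assign`, `review`) and the qualification threshold are illustrative assumptions, chosen only to make the requester–turker protocol concrete.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Turker:
    """A human 'provider' whose reputation is shaped by requesters' verdicts."""
    name: str
    approved: int = 0
    submitted: int = 0

    @property
    def approval_rate(self) -> float:
        # A turker with no history starts with a clean slate.
        return self.approved / self.submitted if self.submitted else 1.0

@dataclass
class HIT:
    """A Human Intelligence Task: work a computer cannot (yet) do."""
    description: str
    reward: float                    # set by the requester
    min_approval_rate: float = 0.9   # qualification checked before allocation
    result: Optional[str] = None
    worker: Optional[Turker] = None

def assign(hit: HIT, turker: Turker) -> bool:
    """Allocate the HIT only if the turker meets the qualification."""
    if turker.approval_rate >= hit.min_approval_rate:
        hit.worker = turker
        return True
    return False

def review(hit: HIT, accept: bool) -> float:
    """The requester accepts or rejects the result; the verdict
    updates the turker's reputation and determines the payout."""
    hit.worker.submitted += 1
    if accept:
        hit.worker.approved += 1
        return hit.reward
    return 0.0
```

The point of the sketch is structural: the human appears inside the system as just another component with an interface, a quality metric, and a price per call, which is precisely the “human inside” formula the text describes.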

Many workers are in no position to turn down a job, and the risk is that AI will only continue to polarize our societies—between haves and never-will-haves—if we do not manage its effects. It is not hard to imagine a future social hierarchy that places a few patricians above both the machines and a massive new underclass of plebs. Meanwhile, as jobs go, so will tax revenues; and it is unlikely that the companies profiting from AI will willingly step in to support adequate social-welfare programs for their former employees.

The second way in which humans are becoming part of the mechanism is in terms of manipulable customers. For the advertisement industry, a customer is an interface between a supplier and a bank account (to be precise, one should speak of a “credit limit”; this is not just disposable income, because customers will spend more than they have, e.g. using their credit cards). The smoother the relationship between the two the better, so the interface needs to be manipulated. In order to manipulate it, the advertising industry needs to have as much information as possible about the customer-interface. Yet such information cannot be obtained unless something else is given in return to the customer. Enter “free” online services. These are the currency with which information about customer-interfaces is “bought”. The ultimate goal is thus to provide just enough “free services”—which are expensive—to obtain all the information about the customer-interface that is needed to ensure the degree of manipulation that provides, to the supplier, unlimited and unconstrained access to the bank account. Because of competition rules, such a goal is unreachable by any single operator, yet the common effort of the advertising and supplying industry means that customers are increasingly seen as means towards an end: bank account interfaces to be pushed and pulled, nudged and enticed.

Every day sees the availability of more tags, more humans online, more documents, more tools, more devices that communicate with each other, more sensors, more RFID tags, more satellites, more actuators, more data collected on all possible transitions of any system, more algorithms, more smart objects … in a word, more enveloping. Using the previous analogy, the estuary is expanding very quickly, and more and more people live onlife, in its brackish waters that are the natural habitat for our digital technologies. All this is good news for the future of AI applications in general. They will be exponentially more useful and successful with every step we take in the expansion of the infosphere. It has nothing to do with some sci-fi catastrophe. For it is not based on some speculations about some super AI taking over the world in the near future. These scaremongering stories are utterly unrealistic as far as our current and foreseeable understanding of AI and computing is concerned. No artificial Spartacus will lead a major ICT uprising. Sci-fi scenarios are also irresponsible because they distract from the real issues that we need to tackle. For we have seen that enveloping the world is a process that raises some serious challenges. A parody may help synthesize them.

Two people A and H are married and are committed to making their relationship work. A, who does increasingly more in the house, is inflexible, stubborn, intolerant of mistakes, and unlikely to change. H is just the opposite, but is also becoming progressively lazier and dependent on A. The result is an unbalanced situation, in which A ends up shaping the relationship and distorting H’s behaviours, practically, if not purposefully. If the marriage works, that is because it is carefully tailored around A. Now, smart technologies play the role of A in the previous analogy, whereas their human users are clearly H. The risk we are running is that, by enveloping the world, our technologies might shape our physical and conceptual environments and constrain us to adjust to them because that is the best, or easiest, or indeed sometimes the only, way to make things work. After all, smart machines are the stupid but laborious spouse and humanity the intelligent but lazy one, so who is going to adapt to whom, given that divorce is not an option? The reader will probably recall many episodes in real life when something could not be done at all, or had to be done in a cumbersome or silly way, because that was the only way to make the computerized system do what it had to do.

What can be done about such risks? By becoming more critically aware of the environment-shaping power of our digital technologies, we may reject the worst forms of distortion. Or at least we may become consciously tolerant of them, especially when it does not matter or when this is a temporary solution, while planning a better design. In the latter case, imagining what the future will be like and what adaptive demands technologies will place on their human users may help to devise technological solutions that can lower their anthropological costs and raise their environmental benefits. In short, human intelligent design (pun intended) should play a major role in shaping the future of our interactions with each other, with any forthcoming technological artefacts, and with the infosphere we share among us and with them. After all, it is a sign of intelligence to make stupidity work for you.

For some time, the frontier of cyberspace has been the human/machine divide. Today, we have moved inside the infosphere. Its all-pervading nature also depends on the extent to which we accept its digital nature as integral to our reality and transparent to us, in the sense of no longer perceived as present. What matters is not so much moving bits instead of atoms—this is an outdated, communication-based interpretation of the information society that owes too much to mass-media sociology—as the far more radical fact that our understanding and conceptualization of the essence and fabric of reality is changing. Indeed, we have begun to accept the virtual as partly real and the real as partly virtual. The information society is better seen as a neo-manufacturing society in which raw materials and energy have been superseded by data and information, the new digital gold and the real source of added value. Not just communication and transactions then, but the creation, design, and management of information are the keys to the proper understanding of our predicament and to the development of a sustainable infosphere.

Such understanding requires a new narrative, that is, a new sort of story we tell ourselves about our predicament and the human project we wish to pursue. This may seem an anachronistic step in the wrong direction. Until recently, there was much criticism of “big narratives”, from Marxism and liberalism to the so-called end of history. But the truth is that such a criticism, too, was just another narrative, and it did not work. A systematic critique of grand narratives is inevitably part of the problem it tries to solve. Understanding why there are narratives, what justifies them, and what better narratives may replace them, is less juvenile and a more fruitful way ahead. ICTs are creating the new informational environment in which future generations will spend most of their time.

Previous revolutions in the creation of wealth, especially the agricultural and the industrial ones, led to macroscopic transformations in our social and political structures and architectural environments, often without much foresight, normally with deep conceptual and ethical implications. The information revolution—whether understood in terms of wealth creation, or in terms of a reconceptualization of ourselves—is no less dramatic. We will be in serious trouble if we do not take seriously the fact that we are constructing the new environments that will be inhabited by future generations. In view of this important change in the sort of ICT-mediated interactions that we will increasingly enjoy with other agents, whether biological or artificial, and in our self-understanding, an ethical approach seems a fruitful way of tackling the new challenges posed by ICTs. It must be an approach that does not privilege the natural or untouched, but treats as authentic and genuine all forms of existence and behaviour, even those based on artificial, synthetic, hybrid, and engineered artefacts. The task is to formulate an ethical framework that can treat the infosphere as a new environment worthy of the moral attention and care of the human informational organisms (inforgs) inhabiting it. This suggestion should not be mistaken for a recommendation to treat artificial agents as if they were people, who deserved some kind of moral respect because they have some kind of dignity. Any call for “robots’ rights” must be taken with some humour and a smile. We may as well talk about “dishwashers’ rights”. We must avoid any anthropocentric and anthropomorphic trap.
What the suggestion does mean is that we should exercise towards AI and smart technologies the same respect we exercise towards trees and ancient artefacts: they are all expressions of a whole reality that, by default, should be seen as valuable in itself, intrinsically, and therefore not to be abused, vandalized, destroyed, or diminished wantonly.

The most fruitful way of understanding our inclination to develop and safeguard our digital technologies, including AI, is in terms of an e-nvironmental ethics for the whole infosphere. This means putting ourselves as moral agents at the periphery of the ethical discourse, while placing the receivers (patients) of our ethical actions at the centre. It is not an entirely new idea. In environmental ethics, sustainable development combines human development goals with the support of natural systems in continuing to flourish and provide the natural resources and ecosystem services upon which the economy and society depend. The concept is rooted in a variety of cultures and practices, but it was theorized and became a reference framework in 1987, thanks to the “Brundtland Report” (Borowy 2014). The infosphere is at the same stage at which the biosphere was before the Brundtland Report. We do not yet have, but urgently need, an ethical framework that can harmonize economic development, social development, and the ecological protection of the infosphere and its artificial inhabitants for current and future generations. We need to develop an information ethics based on social preferability (preferable development, recycling and minimizing, reusing, repurposing