Why should machines make art?
Or: how art can help humanity embrace technology without losing trust in its human condition.
Looking at the evolution of humanity over history, we have moved from the belief in a world given to humans, whether by gods or by chance, to a world designed by humans through technology. In the last century, with the development of computers and subsequently the Internet, the speed of this project increased, raising concerns within society about the power relationship between humans and machines. Will technology make us better humans, or will it convert us into a data-driven mass of zombies?
We owe so much to machines that we often start to see them as post-religious deities. Not gods that punish our moral behavior, but oracles that provide advice in most aspects of our life, searching for the most logical and convenient solutions to the problems we face. And we take this advice very seriously. We entrust them with information we wouldn’t share with our closest friends, because we believe that machines can give us insightful advice that will save us time, hard thinking, or the burden of difficult decisions.
In the past, technology was predominantly a subject of speculation for science fiction writers. Now philosophers and policymakers think about it, too. If there’s one word to tag the subject, it’s Dataism.
Dataism
Dataism, a term made popular by historian Yuval Noah Harari, is the belief in the preeminence of information and algorithms over human instinct and decision-making. Why bother finding a good book to read or some nice music to listen to if software can make recommendations based on statistically driven calculations? Why think of a very special present for a friend if an online shop has the tools and data to ascertain what would be the best surprise for him or her? In fact, why even decide who my closest and most loyal friend is, if a well-nourished social media platform can make a well-founded estimate? And what about the person I want to settle down with? Or the career I want to build? Or the shares I want to buy, keep or sell on the stock exchange?
I am not a romantic, nor a ferocious defender of good old humanism who believes that humans should remain the center of the universe. Nonetheless, I do believe that dataism brings along a silent but outrageous threat: human self-distrust.
The individuality paradox
One of the most quoted philosophers of the 20th century is Friedrich Nietzsche. He is famous for statements such as “God is dead”, and for his repeated comparisons of humans to lambs and eagles, slaves and masters — the former being the masses who follow a given morality, the latter the soul searchers, those willing to face the solitude of their own existence in order to act according to their own ethics. Nietzsche dreamed of a future in which a new kind of human would be born: a human capable of connecting with his inner will and remaining authentic to it, a human without self-distrust. He called him the Übermensch, which translates into English as the “overman”, or superhuman.
This idea caused revolts in times when religion was still a powerful means to build community. One common argument to drop Nietzsche’s words into the dustbin of sacrileges was the subconscious fear that a lack of self-distrust weakens societies. How can a nation ever grow together if everyone is so heavily invested in the pursuit of their own interests and self-cultivation?
Contrary to such belief, over the twentieth century we have seen that valuing individuality does bring progress to society. Such progress is far from perfect, but it has helped us to reduce violence and improve quality of life in general. Paradoxically, our search for individuality hasn’t followed the path of Nietzsche’s superhuman. It has rather made us more similar to ants, endlessly meeting society’s needs while trying to define our true selves using a small set of overly optimistic buzzwords and mottos, often created by marketers. Social Media is one of the ultimate tokens of this paradox: in order to excel as an individual, thou shalt find a unique way to embody the values dictated by the latest trend.
In a society where individuality is to a great extent defined by social pressures, and where individuals lack the time, resources or education to find out their true wills, dataism is an appealing way to settle aesthetic and ethical matters. Which song to play during dinner? What’s the best beach outfit? Which is the lesser evil: endangering the life of an old lady living with chronic pain, or killing yourself in a car accident? While aesthetic matters may seem trivial, ethical issues may raise more than an eyebrow. And while dataism doesn’t yet have the final answer to these questions, it does offer persuasive arguments. Over time, as more data accumulates and algorithms become better at predicting hard facts, it will become difficult not to follow them.
Erewhon: a world without machines
Researching past perspectives on the relationship between humans and human-made technologies, I discovered that the concern about self-distrust is not unique to our time. Erewhon, a novel written in 1872 by Samuel Butler, expresses a very similar, if not the same, problem. It recounts the visit of a British traveler to Erewhon, an imaginary land whose inhabitants, afraid of being overthrown by their own creations, had long ago decided to destroy and ban all their industrial machinery.
In order to make their decision endure for centuries, lawmakers of the time wrote a book called The Book of Machines. In the novel, we read extensive sections of this book through the eyes of the protagonist. The following fragment summarises its main argument against the proliferation of machines:
We treat our domestic animals with much kindness. We give them whatever we believe to be the best for them […]. In like manner there is reason to hope that the machines will use us kindly, for their existence will be in a great measure dependent upon ours; they will rule us with a rod of iron, but they will not eat us; they will not only require our services in the reproduction and education of their young, but also in waiting upon them as servants; in gathering food for them, and feeding them; in restoring them to health when they are sick; and in either burying their dead or working up their deceased members into new forms of mechanical existence.
Although it is true that the people in Erewhon were afraid of the possibility of being exterminated by their machines, The Book of Machines expresses a more philosophical concern. Unsurprisingly, it has to do with human self-distrust.
Life would be easier, or at the very least less demanding, if we were surrounded by machines enabling our world, keeping us like hamsters running inside a wheel, losing weight only to be fed again. But what happens to that thing humans used to sacrifice their lives for, known as free will?
My guess is that people in Erewhon were as concerned about their freedom as they were about the perpetuation of their nation. For them, the two things couldn’t be separated.
The Machine Stops: a dystopian take on machine intelligence
In 1909, E.M. Forster published a novella titled The Machine Stops. Unlike Erewhon, it describes a society where humans let themselves be ruled by one centralized, interconnected machine. In this world, people live in isolated bubbles where all the requisites for a comfortable life are provided. As a result, no one ever leaves their home.
In this perfectly functional universe, the Machine (that’s the name of the entity that rules this fictional world) also takes into consideration the social and individual needs of its human subjects. Every house comes with one or more teleconference devices installed, so individuals can always reach out to each other. The characters in the novel, most of them well-recognized intellectuals, keep themselves busy organizing meetings, lectures and discussions, all of them oriented toward the only theme allowed to be discussed: the grandeur of the Machine.
In fact, the occupation of all intellectuals in this society seems bound to affirm the Machine. Such voluntary cultification emphasizes the mediocrity, dormancy and conformism of the Machine’s subjects, whose freedom had been long ago self-amputated in order to adjust their existence to the whims and designs imposed by their creation.
The machine described by Forster presents a dystopian vision of what today can clearly be associated with a bottom-up approach to artificial intelligence, or rather machine intelligence, as I prefer to call it. Take machine learning as an example. Programmers develop an algorithm that learns from data provided by millions of users and uses this data to redefine its output. Think of a piece of image recognition software. Based on the opinion of millions of users, the program chooses to associate the word “hat” with the image of, let’s say, a cowboy hat. While there’s nothing wrong with such an association, if the program makes a wrong judgment and its outcome feeds a greater body of software that makes more complex decisions, there is a very real chance that things go the wrong way. If a machine is too complex, composed of many organs, it can be very difficult to open it up and isolate the fault that needs to be repaired. In fact, it can be difficult to even realize that it is not working properly.
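The bottom-up dynamic described above can be sketched in a few lines. The toy example below is purely illustrative (the function names and votes are invented, not taken from any real system): a crowd of users votes on a label, a simple majority decides, and a downstream component trusts that decision without questioning it.

```python
from collections import Counter

def crowd_label(votes):
    """Pick the label most users agreed on (simple majority vote)."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical votes from users describing the same image.
votes = ["hat", "hat", "cowboy hat", "hat", "cap"]
label = crowd_label(votes)  # "hat"

def downstream_decision(label):
    # A larger system trusts the consensus label. If the consensus is
    # wrong, the error propagates here, and there is no single organ
    # to open up and repair.
    return f"filing image under '{label}'"

print(downstream_decision(label))
```

The point of the sketch is not the voting itself but the last step: once the consensus feeds another component, the place where things went wrong is no longer visible from the outside.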
In The Machine Stops, things go the wrong way. The Machine starts to glitch and, instead of trying to fix it, people interpret these errors as omens to be deciphered. The sad truth behind this behavior is that no one knew how the Machine really functioned, and yet it had been, for many years, the organizing engine of their society:
Year by year [the Machine] was served with increased efficiency and decreased intelligence. The better a man knew his own duties upon it, the less he understood the duties of his neighbour, and in all the world there was not one who understood the monster as a whole. Those master brains had perished. They had left full directions, it is true, and their successors had each of them mastered a portion of those directions. But Humanity, in its desire for comfort, had overreached itself. It had exploited the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean the progress of the Machine.
In The Machine Stops, Forster pictured the doomsday of dataism before the term had been coined. He is very emphatic about the fact that it wasn’t algorithms that turned the human dream into a nightmare. It was humans themselves who, too far on their path to oblivion and self-distrust, were incapable of retaking the power they had once held over their future. At the end of the novella, humans die not because the machine kills them. They die because they forgot the basics of mammalian survival: how to shelter from the cold, how to hunt and gather food, and how to self-organize, without instructions. Knowing neither their past nor their will, there was no future to envision.
Ethics follows aesthetics
Just as in Erewhon and The Machine Stops, the questions we are posing around technology today are not about technology itself, but about the future we envision for our society. To what extent do we want to be ruled by machines?
Common answers to this question tend to focus on utopian or dystopian scenarios that reflect the ethical consequences of using machines one way or another. However, it is the seemingly harmless incorporation of new technologies in our daily life, like apps, small gadgets and novel technological services (from chatbots to self-driving cars) that swiftly opens space for their uncritical implementation in the realms of governance and politics. In other words, we first embrace the aesthetics of a new technology, and then, often without earnest questioning, we accept its ethics.
Think of the evolution of digital maps. In 2005, Google launched the first version of Google Maps, which had the same features as a paper map, with only a few advantages: it was zoomable and had the potential of indexing locations and coordinates in a digital database. Over time, Google Maps started to incorporate new features, such as a direction planner, street panoramics, location sharing and real-time location tracking. These upgrades reflect a natural course of implementation if we think of the aesthetic principles of digital media and Internet culture, where beauty is directly connected to the multiple forms and uses that can be derived from data. A beautiful app is one that shapes information into services that simply and effortlessly make human life easier. From this perspective, the evolution of Google Maps seems like a swift journey towards beauty. That said, the implementation of each of these new services brings with it new ethical questions about the authority we are ceding to machines, or — as in this case — to the small group of people who operate them.
While the incorporation of panoramic street pictures into the digital map seems reasonable and aesthetically enhancing, it raises ethical questions about privacy, copyright and geopolitics. Which places deserve to be photographed and which don’t? Maybe a totalitarian regime doesn’t want to show how its people live. Or maybe a distant rural town in a poor country doesn’t seem profitable enough to warrant sending a Street View unit to capture its images.
If these questions seem like small ethical conundrums, we can think one level higher. The aesthetics of the digital map don’t end with the digital map. We can see the same principles of effortless ubiquity being applied to self-driving cars. While the digital map shows you where to go, the ideal self-driving car takes you where you want to go, and it does so without you having to know anything about how to get there — whether there’s a traffic jam, whether the route is safe, where to fill your tank, or whether the car needs a checkup. That said, the aesthetic experience of effortless transportation comes with no less striking ethical concerns. And as we occasionally see in the press, they need to be addressed.
Why should machines make art?
Digital maps are not going to disappear, and self-driving cars, at a slower or faster pace, will only increase in number. Unlike in Erewhon, it seems highly improbable that a group of revolutionary policymakers will end up banning machines from society. After all, who really wants that? As dataist ideals gain more and more acceptance, what can we do to remain aware of the authority we are handing to machines?
As an experimental filmmaker, I find a plausible answer in spending more time and effort exploring the potential uselessness of currently available technologies. Including this kind of interaction in our list of experiences with machines would certainly help us become more familiar with them, and gain a better understanding of their true nature.
Generally speaking, the field of knowledge where the uselessness of things is taken seriously is the arts. Art objects tend to be highly specialized, for they can only be used for one thing: their aesthetic appreciation. Therefore, tinkering with machines to make art yields objects that can legitimately be approached through such a perspective. That’s why machines should make art.
Final considerations
There are two things that I have learned and want to share from my experience in researching and designing art-making machines. The first is a reminder of an often denied fact: technologies are not neutral. The second is a conclusion drawn as an onlooker of this kind of work: art-making machines help us experience what it means to give authority to something we really don’t understand.
As for the first point, technologies are not neutral, at least not in the real world. Take a car for example. We could think of it as a neutral artifact insofar as every car is designed to transport any kind of person: thin and fat, big and small, black and white. Yet while cars are indifferent to the characteristics of their passengers, they are still designed and branded for the needs of specific consumers, like families or couples, people living in cities or the countryside, for the wealthy or for the aspiring middle class.
Now think of Facebook. The technology behind this platform allows individuals to communicate and share media with other users. But is it designed only for that, or also to attract advertisers? And if so, what are the ethical implications? Could we then say that Facebook is neutral?
From my experience designing Jan Bot, the research and algorithmic filmmaking software of which this publication is part, there are many secondary goals that need to be defined in order to put a machine to work. For example, Jan Bot collects trending news from the Internet as a source of inspiration to generate short films. So even before thinking of how to instruct the machine to make films, it becomes important to establish what counts as trending news, and to what extent it is possible to stretch that definition to make sure the program will have enough variety of content to play around with. All these questions come with aesthetic and ethical dimensions that — if we are taking our machine seriously — need to be addressed.
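To make the point concrete, the editorial question "what counts as trending?" eventually has to become code. The sketch below is hypothetical, not Jan Bot's actual implementation: every threshold in it encodes a decision a programmer must fix before the machine can run, and each one carries the aesthetic and ethical weight discussed above.

```python
# Illustrative sketch only; thresholds and field names are invented.
MIN_MENTIONS = 1000   # how popular must a story be to count as "trending"?
MAX_AGE_HOURS = 24    # how fresh must it be?
MIN_STORIES = 5       # how much variety does the film generator need?

def select_trending(stories):
    """Filter a list of {'title', 'mentions', 'age_hours'} dicts."""
    picked = [s for s in stories
              if s["mentions"] >= MIN_MENTIONS
              and s["age_hours"] <= MAX_AGE_HOURS]
    if len(picked) < MIN_STORIES:
        # Not enough material: loosen the definition of "trending"
        # rather than leave the machine without content to play with.
        picked = sorted(stories, key=lambda s: -s["mentions"])[:MIN_STORIES]
    return picked
```

Note the fallback branch: when the strict definition yields too little material, the definition itself is stretched. That stretching is exactly the kind of quiet design decision that deserves to be taken seriously.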
The second and last point that I want to share comes from my perspective as a spectator of art-making machines: contemplating their uselessness invites us to safely experience what it feels like to give authority to something we really don’t understand. This can be insightful. Taking the time to appreciate machines making art can put us in an uncomfortable position, as we may realize that we are not always able to grasp their inner logics. A machine that is not designed to serve humans can easily become a mystery to the mind. Contemplating its inner workings may give us hints as to how it thinks and proceeds when left unsupervised. Or it may not.
If that is the case, what kind of authority over our free will do we want to give to a technological artifact like that? Such a question, and not utopian or dystopian scenarios, should be the starting point in imagining what the future yields for us and our machines.