The Robots Have Eyes, Vitra Design Museum Exhibit Warns

Today’s robots come in unexpected forms, says curator Amelie Klein. Designers must prepare themselves for the ramifications.

The first section of the exhibition, “Science and Fiction,” examines how we have imagined robots through the ages. These tin toys were manufactured in Japan between 1956 and 1980.

Courtesy Andreas Sütterlin, 2016


The appearance of the robot in our everyday lives is unavoidable—its visible appearance, that is, for robots have in fact been lurking inside washing machines, automobiles, and cash machines for decades. Of course, such creatures will not take the form that most of us have come to expect. Karel Čapek’s 1920 play R.U.R., which popularized the term “robot,” described a mechanical working class—in other words, a class that has been dehumanized and hence robbed of its dignity—that rises up against its masters before revealing itself to be the morally and ethically superior species. Čapek, a staunch anti-fascist, was engaging in a piece of social criticism that has been expressed time and again: the robot that serves us—and the robot that destroys us.

Thus, popular culture has influenced our expectations regarding robots for almost a hundred years. They should be humanoid in form—i.e., look like us—and they should think, communicate, and move as we do. Our fascination with these machines has reached the world’s robotics laboratories, where researchers are eagerly working on creating humanoid robots.

What we often forget, however, is that robots don’t actually need their own enclosed bodies. They need only three things, says Carlo Ratti, director of MIT’s Senseable City Lab: sensors, intelligence, and actuators. In other words, they require measuring instruments; software that is capable of making sense of and using the information these instruments gather; and devices that trigger a measurable physical reaction, through light, sound, or heat. Viewed this way, any house and any environment can be a robot. A robot can observe us via numerous cameras simultaneously and, for example, regulate a city’s traffic lights or adjust the lights in our living room according to what it sees. We could also describe the smartphone as a kind of mini-robot—one that, paired with us, forms a (partially) robotic system.
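
Ratti’s triad can be made concrete with a toy example. The sketch below wires a stubbed light sensor, a bit of decision software, and a lamp actuator into the living-room “robot” the paragraph describes; every name and value here is an invented illustration, not code from the Senseable City Lab.

```python
# A minimal sketch of Ratti's triad: a "robot" is anything that senses,
# decides, and acts. All names and values here are hypothetical.

def read_ambient_light() -> float:
    """Sensor: a measuring instrument (a stubbed light meter, in lux)."""
    return 340.0  # stand-in for a real camera or photodiode reading

def choose_brightness(lux: float) -> float:
    """Intelligence: software that makes sense of the measurement."""
    # Dim the lamp as the room gets brighter; clamp the level to [0, 1].
    return max(0.0, min(1.0, 1.0 - lux / 500.0))

def set_lamp(level: float) -> None:
    """Actuator: a device that triggers a physical reaction (light)."""
    print(f"lamp -> {level:.0%}")

# Wired together, an ordinary living room behaves like a robot.
set_lamp(choose_brightness(read_ambient_light()))
```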

Photographer Yves Gellie documented MIT Media Lab’s Nexi robot for his 2009 exhibition Human Version 2.0.

Courtesy Yves Gellie, Galerie du jour agnès b, Galerie Baudoin Lebon


Ratti’s definition of a robot is certainly very broad, and it seems to leave out certain things that we think of as typical characteristics of robots. For example, they are supposed to learn and steer themselves, they should make autonomous decisions, and they should be at least partially physical in nature. But this is not true of every robot. Classical industrial robots can perform only the movements they have been programmed to perform; they do not make decisions on their own, nor do they learn. Surgical robots are remote-controlled—mercifully—and the same is true of most drones. And the internet is teeming with softbots, self-learning software that can chat with users or provide shopping tips but has no physical form. It appears that there is no universally accepted definition of the robot. Only one thing seems clear: Yes, two-legged humanoid robots such as Boston Dynamics’ Atlas, which over 19 million viewers have watched stumble through the snow on YouTube, do indeed exist. But robots are much more than that. They make our physical world intelligent. They transform objects into “smart objects.” They can give rise to a scenario in which all of the things we know from the internet step out of the screen and permeate three-dimensional space.

We tend to think that such technology generally will act in our best interests. What we are seeing, however, is a kind of well-intentioned paternalism. David Rose, researcher at the MIT Media Lab, entrepreneur, and expert in human-computer interaction, has developed a smart screw cap for pill bottles that has enjoyed brisk sales for several years. This intelligent device, named GlowCap, reminds users to take their medicine. If they neglect to do so, the screw cap starts to blink—a perfectly sensible reminder, for it is certainly important that patients take their medicines on schedule. In 2010, it won the Medical Design Excellence Award in the United States. But GlowCap goes one step further: If patients fail to take their medicine after the reminder, the smart cap sends a message to their loved ones. And another one to the doctor. And another to the health insurance company, since insurers are the main distributors of GlowCap.
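
Read as a system, the cap’s behavior is a simple escalation ladder. Here is a minimal sketch of that logic; the tiers, thresholds, and function names are hypothetical illustrations drawn from the article’s description, not GlowCap’s actual firmware or API.

```python
# Illustrative escalation ladder; tiers and thresholds are hypothetical.
ESCALATION = [
    (1, "blink cap"),          # first missed dose: local reminder
    (2, "notify loved ones"),  # still no dose: message the family
    (3, "notify doctor"),
    (4, "notify insurer"),     # insurers are GlowCap's main distributors
]

def escalate(missed_doses: int) -> list[str]:
    """Return every notification triggered by this many missed doses."""
    return [action for threshold, action in ESCALATION
            if missed_doses >= threshold]

print(escalate(1))  # ['blink cap']
print(escalate(4))  # all four tiers: reminder, family, doctor, insurer
```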

The boundaries between well-intentioned concern, surveillance, and outright espionage are blurry. Even if big data has yet to evolve into smart data—that is, if the data collectors have not yet learned to properly classify all the information they gather—it would still be naive to believe that a health insurance company would allow a patient who neglects to take his medication to go unpunished.

The internet unremittingly collects data about our behavior. And with robotics, the arrival of the internet in three-dimensional space, this is set to grow exponentially. The Internet of Things and the smart city are projects for major corporations—not only those that make these infrastructures available, but also those that are keen to evaluate the data we generate or sell it to third parties like the advertising industry. “An Internet of Things,” writes science fiction author Bruce Sterling, “is not a consumer society. It’s a materialized network society. It’s like a Google or Facebook writ large in the landscape. Google and Facebook don’t have ‘users’ or ‘customers.’ Instead, they have participants under machine surveillance, whose activities are algorithmically combined within big data silos.”

Thomas Vašek, editor in chief of the philosophy magazine Hohe Luft, brings machines into the equation with this observation: “All of us—humans as well as robots, smartphones, and artificial intelligences of every kind—are slaves of digital capitalism. We all produce data that is economically exploitable for Google and the like, we all leave data trails in the infosphere, and we are all digitally predictable—and therefore we can be easily controlled by a digital mega-superintelligence. We call it the capitalist system.” Before filthy lucre, we are all the same.

Unfortunately, design is all too willing to do the bidding of this superintelligence. But this need not be the case. Indeed, it shouldn’t be. “Rather than thinking outside the box—which was almost always a money box, quite frankly—we surely need a better understanding of boxes,” Sterling writes. In other words, we have to change the parameters and redefine the context. “Instead of pursuing projects, defining goals, and thus describing a linear path to a solution, design is capable of drawing upon prototypes, experiments and mistakes, pilot projects, and speculation based on limited knowledge to sketch several paths that can describe the space for possibilities,” writes the German graphic designer and university lecturer Florian Pfeffer. Will it be enough if we determine the parameters that can describe the scope of these smart devices and decide where humans should take over again? Hardly. In this respect we are only now beginning to ask the right questions.

Hello Robot. Design Between Human and Machine is on view at the Vitra Design Museum in Weil am Rhein, Germany, from February 11 to May 14, 2017.

The section titled “Programmed for Work” shows how robots have entered the factory. Dutch designer Joris Laarman aims to 3D-print his steel MX3D Bridge using six-axis industrial robots.

© Joris Laarman Lab

In the “Friend and Helper” section, the exhibition takes a critical look at robotic caregivers. The speculative project Raising Robotic Natives by Stephan Bogner, Philipp Schmitt, and Jonas Voigt imagines children who grow up with robots.

© Jonas Voigt

In the section called “Becoming One,” Philip Beesley’s Hylozoic Soil project aims to blur the lines between nature and technology by creating architectural elements with sensors that react to their environment or to passersby.

Courtesy Philip Beesley Architect Inc.

Anouk Wipprecht’s 3D-printed robotic Spider Dress 2.0 from 2015, with Intel Edison microcontrollers.

© Anouk Wipprecht, Photo: Jason Perry

TRNDlabs’ SKEYE Nano 2 FPV Drone, released in 2015.

© TRNDlabs, 2016
