Artificial Intelligence Won’t Save Us From Being Stupid

The curator of the exhibition “Humans Need Not Apply” warns that the biggest danger in big data and artificial intelligence might be the data we don’t have.

What do the following data sets about the United States have in common?

• Civilians killed in encounters with police or law enforcement agencies
• Sale prices in the art world (and relationships between artists and gallerists)
• People excluded from public housing because of criminal records
• Trans people killed or injured in instances of hate crimes
• Poverty and employment statistics that include incarcerated people
• Muslim mosques and communities surveilled by the FBI and CIA
• Undocumented immigrants currently incarcerated or illegally underpaid

The answer is: they are all missing. These data may never have been collected at all, or perhaps they were hidden, misplaced, or destroyed. We don’t know. Given the many debates these data sets could inform, and the value they might add to efforts to achieve greater social justice, it is worthwhile, even urgent, to question their absence.
Brooklyn-based artist Mimi Onuoha is doing just that. She recently urged a gathering of engineers and guests at a Google conference on machine learning, with no small amount of bravery, to “identify the intentionality behind” sets of missing data. She argued, mercilessly and convincingly, that relying only on available data is a kind of irresponsible compromise, while being with people often reveals crucial, missing details.
Data, in other words, are never impartial. They exist in a context of the presence or absence of other available data, and in total they speak to our personal and societal glitches: our tendency to look for examples that reinforce our biases, or dysfunctional power dynamics in which collecting information about disenfranchised populations does not serve the interests of those deciding what research to fund. Crime statistics in the United States, for instance, are among the most detailed and widely reported data types; communities demand evidence that they are being kept safe. Yet there are still no national statistics on the number of civilians killed in encounters with police. It would seem some communities have more right to accountability than others.

When it comes to artificial intelligence, engineers necessarily rely heavily on available data. These are the training sets, or reference libraries, that a machine-learning system uses to become useful. Sometimes these learning systems are then embedded within other systems, potentially amplifying the effects of the incomplete data they ingested, like a rounding error finding exponential expression. In one example, Nikon’s camera software misread images of Asian people as blinking; in another, software used to assess the risk of convicted criminals reoffending was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. Gender disparity also appears: computer scientists at Carnegie Mellon recently found that women were less likely than men to be shown ads on Google for highly paid jobs.

The worry is that missing data and its effects are, to borrow a phrase from the tech world, “a feature and not a bug” of the technology; that they are aligned with an intention or agenda. Technology can only reflect the priorities, behaviors, and biases of its creators. It must therefore be embraced with caution, and it should give us pause to consider how our social progress consistently lags behind our technological prowess. Similarly, the problems that new technologies or services address tend to be the problems of small and influential groups. Consider, for example, how much recent technology appears to be designed to enable socializing (if you can call it that) without the potentially uncomfortable experience of eye contact. You might guess that many of our tech visionaries are motivated by severe social anxiety.

Another way to look at the narrowness of tech-driven problem solving comes from architecture, a field that has rapidly adopted computer-modeling tools like parametric design. From Christopher Alexander: “The effort to state a problem in such a way that a computer can be used to solve it will distort your view of the problem. It will allow you to consider only those aspects of the problem which can be encoded — and in many cases these are the most trivial and the least relevant aspects.”

When it comes to automation, the problem that most artificial intelligence is geared to solve is the high cost of employees. This focus is blind to the human costs and community impacts of putting people out of work, or of pushing them into insecure, freelance, or part-time arrangements. These are very real costs to which governments must respond. In the past, as agricultural work was replaced by factory and service jobs during the Industrial Revolution, governments built schools, made primary education mandatory, and began to subsidize higher education, while workers built a labor movement and formed unions. But these models of support and power-sharing have proven insufficient in the 21st century. New, more nimble systems are needed to address the scale and speed of the changes propelled by machine learning. Lifelong education initiatives could help, for example, by funding people to retool or relocate with new skills every few years rather than relying on a single university experience. Another reform could realign incentives so that universities receive no tuition up front; instead, graduates who go on to earn well would pay the school a percentage of their income.

Broad protections for freelance workers are also overdue; companies might finally be obliged to contribute to the many costs, such as pensions, health care, and sick time, that those workers now bear alone. The emerging, so-called “gig economy” demands 2.0 versions of unions, regulations, corporate taxes, and education. Whether most people will prosper in this new machine age will largely depend on how effectively we pursue their development.

Finally, artificial intelligence must be recognized for its power to exploit our mental and social vulnerabilities, particularly when it is used to select the content we see on opinion-shaping platforms like Twitter and Facebook. Neural networks are mastering how to zero in on the content most likely to keep you engaged, which means spending more time online: sharing, liking, posting, clicking more ads. This process is largely blind to the quality of the content, and so it often favors inflammatory posts, which measurably create more engagement but often carry negativity, stigma, or blatant falsehoods.

An angry customer, it turns out, keeps coming back for more. Seasoned editors of newspapers, cable news, and radio programs have long known this, but they were always somewhat reined in by journalistic standards, the need to maintain a reputation, or the desire to avoid lawsuits. Algorithms know no such boundaries, and they work at speeds and on scales that exponentially strengthen the impact of, say, a fake news story about Brexit, Hillary Clinton, or climate change: a story that can be seen by millions within minutes, its content mutating slightly with every share to become even more enraging, and so more engaging.

The speed, openness, and reach of the internet, when combined with social media and machine learning, are clearly producing negative impacts along with all the benefits. When the automobile granted freedom of movement on a breakthrough scale at the turn of the 20th century, it also began to create pollution and road deaths; eventually, we designed seat belts and introduced emissions standards for car engines. We may need equivalent inventions for the digital world, taking care not to censor speech but to prevent the car wrecks and smog we now face in the form of a widespread loss of our grip on facts. When we wield artificial intelligence, we ignore our natural stupidity at our peril.

William Myers is the curator of the exhibition “Humans Need Not Apply,” which is on view at the Science Gallery Dublin through May 21, 2017. This essay is excerpted from the catalogue for the show.

  • DoppenGänger by ForReal Team explores human-robot interaction through physical movement.
  • For the Google's Eyes project, an Android image recognition app by Google was used: the app selected twenty objects it recognized as similar (such as a human jawbone), one of which was then shown to the app again to repeat the process.
  • The Hoopla project by Gillian Smith consists of an AI program that generates embroidery sampler patterns, which are then stitched by hand.
  • Lady Chatterley's Tinderbot is an interactive installation by Libby Heaney in which Tinder users converse with an artificially intelligent bot posing as characters from the novel Lady Chatterley's Lover.
  • Visitors at the exhibition "Humans Need Not Apply" at the Science Gallery Dublin.
  • Accompanying the fall 2016 launch of Cozmo, an SDK, or open development kit, was released so that new features or behaviours can be created for the pet robot.
  • Stony 1.0, by Itamar Shimshony, is a robot that takes care of tombstones as Jewish custom requires.
