When the makers of Apple’s Siri unveiled Viv at TechCrunch Disrupt NYC last month, the crowd—and press—swooned. Pitched as “the intelligent interface for everything,” Viv is a personal digital assistant armed with a nearly transcendent level of sophistication. She is designed to move seamlessly across services and to fulfill complex requests such as “Find me a place to take my peanut-free uncle if it rains tomorrow in Cleveland.” Viv is also just the latest virtual helpmeet with a feminine voice and a female name. In addition to Siri (Norse for “beautiful woman who leads you to victory”), her sorority sisters include Amazon’s Alexa and Microsoft’s Cortana (named after a voluptuous character in the video game “Halo,” who wears a “holographic body stocking”).
Why are digital assistants overwhelmingly female? Some say that people prefer women’s voices, while others note that in our culture, secretaries and administrative assistants are still usually women. Regardless, this much is certain: consistently representing digital assistants as female matters in real life, because it hard-codes a connection between a woman’s voice and subservience.
As social scientists explore the question of why women lag so far behind men in workplace leadership, there’s increasing evidence that unconscious bias plays an important role. According to Erika Hall, a professor at Emory University’s Goizueta Business School, unconscious bias has its origins in the “cultural knowledge” we absorb from the world around us. This knowledge can come from movies and television, from teachers and family members; we acquire it almost osmotically by living in our society. Unconscious bias happens when we then engage in discriminatory behaviors because we unwittingly use this knowledge to guide our actions.
And this knowledge is everywhere: Our society largely depicts women as supporters and assistants rather than leaders and protagonists. A recent study found that women accounted for only 22 percent of protagonists in the top-grossing films of 2015 (and only 13 percent of protagonists in films directed by men). A comprehensive review of video game studies found that female characters are predominantly supporting characters, often “assistants to the leading male character.” And a study of prime-time television found that women make up the majority of aides and administrative support characters. These portrayals create “descriptive stereotypes” about what women are like—that women are somehow innately more “supporter-like” than “leader-like.”
Because Viv and her fellow digital assistants are female, their usage adds to the store of cultural knowledge about who women are and what women do. Every time you say, “Viv, order me a turkey club” or “Viv, get me an Uber,” the association between “woman” and “assistant” is strengthened. According to Calvin Lai, a Harvard University post-doc who studies unconscious bias, the associations we harbor depend on the number of times we are exposed to them. As these A.I. assistants improve and become more popular, the number of times we’re exposed to the association between “woman” and “assistant” increases.
The real-world consequences of these stereotypes are well documented: Research has shown that people tend to prefer women as supporters and men as leaders. A study of engineering undergraduates at the University of Michigan found that when students presented work, the men tended to present the material and the women tended to play the role of “supporter of the male expert.” In another study, when people were shown identical resumes with either male or female names for a lab manager position, they rated the male candidate significantly more competent and hirable. A third study found that saleswomen earned less than salesmen in part because they’d been denied support staff—why would a supporter need a supporter, after all?
While “descriptive stereotypes” lead to women not being perceived as suitable for leadership positions, stereotypes can be prescriptive, too: Women are expected to conform to the stereotype of being a supporter or helper, and they are rejected or punished for failing to do so. Linguist Kieran Snyder’s study of performance reviews in tech companies showed that women are routinely criticized for having personality traits that don’t conform to feminine stereotypes. Women, but not men, were consistently docked for being “abrasive” and not “letting others shine.” In other words, they were punished for not being good helpers and supporters.
In a study by New York University psychologist Madeline Heilman, a woman who stayed late to help a colleague was rated less favorably than a man who did the same—and penalized more harshly when she declined to stay and help. Indeed, because women are expected to be helpers, they accrue no reward for helping—they’re simply living up to the expectation. But if they decline to help, they are seen as selfish. Women are aware of this expectation, too: In a study of medical residents, one woman reported that when leading others, “The most important thing is that when I ask for things they should not sound like orders.”
Ultimately, the more our culture teaches us to associate women with assistants, the more real women will be seen as assistants, and penalized for not being assistant-like. At this moment in culture, when more and more attention is being paid to women’s roles in the workplace, it’s essential to pay attention to our cultural inputs, too. Let’s eschew the false choice between male and female voices. If these A.I. assistants are meant to lead us into the future, why not transcend gender entirely—perhaps a voice could be ambiguously gendered, or shift between genders? At the very least, the default settings for these assistants should not always be women. Change Viv to Victor, and maybe one fewer woman will be asked to be the next meeting’s designated note-taker.