Earlier this month, OpenAI’s board abruptly fired its popular CEO, Sam Altman. The ouster shocked the tech world and rankled Altman’s loyal employees, the vast majority of whom threatened to quit unless their boss was reinstated. After a chaotic five-day exile, Altman got his old job back—with a reconfigured, all-male board overseeing him, led by ex-Salesforce CEO and former Twitter board chair Bret Taylor.
Right now, only three people sit on this provisional OpenAI board. (More are expected to join.) Immediately prior to the failed coup, there were six. Altman and OpenAI cofounders Greg Brockman and Ilya Sutskever sat alongside Quora CEO Adam D’Angelo; AI safety researcher Helen Toner; and Tasha McCauley, a robotics engineer who leads a 3D-mapping startup.
Of those six, D’Angelo is the only one left standing; the specifics of the boardroom overthrow attempt remain a mystery. In addition to Taylor, the other new board member is former US Treasury Secretary Larry Summers, a living emblem of American capitalism who notoriously said in 2005 that innate differences between the sexes may explain why fewer women succeed in STEM careers (he later apologized).
While Altman, Brockman, and Sutskever all still work at OpenAI despite their absence from the board, Toner and McCauley—the two women who sat on the board—are now cut off from the company. As the artificial intelligence startup moves forward, the stark gender imbalance of its revamped board illustrates the precarious position of women in AI.
“What this underscores is that there aren’t enough women in the mix to begin with,” says Margaret O’Mara, a University of Washington history professor and author of The Code: Silicon Valley and the Remaking of America. For O’Mara, the new board reflects Silicon Valley’s power structure, signaling that it’s “back to business” for the world’s most influential AI company—if back to business means a return to the Big Tech boys’ club. (Worth noting that when it was founded in 2015, OpenAI only had two board members: Altman and Elon Musk.)
Prominent AI researcher Timnit Gebru, who was fired by Google in late 2020 over a dispute about a research paper involving critical analysis of large language models, has been floated in the media as a potential board candidate. She is, indeed, a leader in responsible AI; post-Google, she founded the Distributed AI Research Institute, which describes itself as a space where “AI is not inevitable, its harms are preventable.” If OpenAI wanted to signal that it is still committed to AI safety, Gebru would be a savvy choice. Also an impossible one: She does not want a seat on the board of directors.
“It’s repulsive to me,” says Gebru. “I honestly think there’s more of a chance that I would go back to Google—I mean, they won’t have me and I won’t have them—than me going to OpenAI.”
The lack of women in the AI field has been an issue for years; in 2018, WIRED estimated that only 12 percent of leading machine learning researchers were women. In 2020, the World Economic Forum found that only 26 percent of data and AI positions in the workforce were held by women. “AI is very imbalanced in terms of gender,” says Sasha Luccioni, an AI ethics researcher at Hugging Face. “It’s not a very welcoming field for women.”
One of the areas where women are flourishing within the AI industry is in the world of ethics and safety, which Luccioni views as comparatively inclusive. She also sees it as significant that the ousted board members reportedly clashed with Altman over OpenAI’s mission. According to The New York Times, Toner and Altman had bickered over a research paper she published with coauthors in October that Altman interpreted as critical of the company. Luccioni believes that in addition to highlighting gender disparities, the incident also demonstrates how voices advocating for ethical considerations are being silenced.
“I don’t think they got fired because they’re women,” Luccioni says. “I think they got fired because they highlighted an issue.” (Technically, both women agreed to leave the board.)
No matter what actually spurred the conflict at OpenAI, the way in which it was resolved, with Altman back at the helm and his dissenters out, has played into a narrative: Altman emerging as victor, flanked by loyalists and boosters. His board is now stocked with men eager to commercialize OpenAI’s products, not rein in its technological ambition. (One recent headline capturing this perspective: “AI Belongs to the Capitalists Now.”) The caution espoused by the board’s female leadership appears, at least for now, to have lost out.
O’Mara sees the all-male OpenAI board as a sign of a swinging cultural pendulum. Just as some Silicon Valley tech companies have been working to correct their woeful track records in diversity and consider their environmental footprints, others have recoiled from “wokism” in various forms, instead espousing hard-nosed beliefs about work culture.
“It’s this sentiment around, ‘OK, we’re done being touchy-feely,’” she says. “Whether it’s Elon Musk’s ‘extremely hardcore’ demands or Marc Andreessen’s recent manifesto, the idea is that if you’re calling for people to take a pause and consider potential harms or complaining about the lack of representation, that is orthogonal to their business.”
OpenAI is reportedly planning to expand the board soon, and speculation is rampant about who will join. The board’s conspicuously all-male, all-white makeup certainly did not go unnoticed, and OpenAI is already looking at prospects who might placate some critics. According to a Bloomberg report, philanthropist Laurene Powell Jobs, former Yahoo CEO Marissa Mayer, and former US Secretary of State Condoleezza Rice were all considered but not selected.
At the time of publication, OpenAI had not responded to repeated requests for comment.
For many onlookers, it’s crucial to choose someone who will advocate balancing ambition with safety and responsibility—someone whose line of inquiry might match that of Toner, for example, rather than someone who simply looks like her. “The sort of people that this board should be bringing back are people who are thinking about responsible or trustworthy technology, and safety,” says Kay Firth-Butterfield, executive director of the Centre for Trustworthy Technology. “There are a lot of women out there who are experts in that particular field.”
As OpenAI searches for new board members, it may meet resistance from prospects wary of the real power dynamics within the company. There are already concerns about tokenism. “I just feel like the person on the board would have a horrible time because they will constantly be fighting an uphill battle,” says Gebru. “Used as a token and not to really make any kind of difference.”
She’s not the only person within the world of AI ethics to question whether new board members would be marginalized. “I wouldn’t touch that board with a ten-foot pole,” Luccioni says. She feels she couldn’t recommend a friend take that sort of position, either. “Such stress!”
Meredith Whittaker, president of messaging app Signal, sees value in bringing someone to the board who isn’t just another startup founder, but she doubts that adding a single woman or person of color will set them up to effect meaningful change. Unless the expanded board is able to genuinely challenge Altman and his allies, packing it with people who tick off demographic boxes to satisfy calls for diversity could amount to little more than “diversity theater.”
“We’re not going to solve the issue—that AI is in the hands of concentrated capital at present—by simply hiring more diverse people to fulfill the incentives of concentrated capital,” Whittaker says. “I worry about a discourse that focuses on diversity and then sets folks up in rooms with [expletive] Larry Summers without much power.”