Professional headshots generated by artificial intelligence are in vogue with young workers. But some women of color say the technology has misfired for them, lightening skin tones, botching hairstyles and changing facial shapes. 

Rona Wang said her results from Playground AI, an image-generating service, were a surprise because she asked the program to “give the girl from the original photo a professional LinkedIn profile photo.”

“The overall effect definitely just made me look like a white person,” said Wang, who is Asian-American. 

Suhail Doshi, the founder of Playground AI, said the service isn’t intended to be used for professional headshots: “If people are using us for that, they are going to be disappointed.”

Body and facial distortions

As résumés and professional social-media profiles start to look more like Instagram, many workers are experimenting to create the perfect headshot. Booking a photo shoot with a professional photographer and ensuring hair and makeup are top-tier can easily cost more than $1,000.

For $12 to $50, people can upload their own photos, including selfies, to various AI platforms. The platforms’ models then create images resembling the user in a professional context, drawing both on the uploaded photos and on the millions of other images paired with text descriptions that the models were trained on, according to AI company founders.
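None of the services named in this article have published their code, but a common recipe in open-source image generation is to fine-tune a pretrained text-to-image model on a customer’s uploads and then prompt it for office-ready portraits. As a rough sketch only (the checkpoint path and prompt below are hypothetical, not any vendor’s actual pipeline), sampling from such a fine-tuned model with the open-source diffusers library might look like this:

```python
# Hypothetical sketch: sampling headshots from a text-to-image model that
# was first fine-tuned (e.g., DreamBooth-style) on one user's selfies.
# The checkpoint path and prompt are illustrative, not any vendor's code.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "./checkpoints/user-1234-finetuned",  # hypothetical per-user checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# "sks person" is a common DreamBooth convention: a rare token standing in
# for the specific subject the model was fine-tuned on.
image = pipe(
    prompt="studio photo of sks person, professional headshot, "
           "business attire, neutral office background",
    num_inference_steps=30,
).images[0]
image.save("headshot.png")
```

Whatever the fine-tuning does, the final image is still pulled toward whatever the underlying base model learned from its training data, which is where the complaints in this article originate.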

The catch: many Black, Latina and Asian women say the misfires go beyond well-documented AI fumbles like giving a person too many teeth or more than five fingers on a hand.

Danielle DeRuiter-Williams, a 38-year-old diversity, equity and inclusion specialist based in New Orleans, used a service called AI SuitUp. She said she was surprised that the AI-generated photos narrowed her nose and lightened her skin. 

“It was just more comical than anything,” she said. “And just a demonstration to me of how in the nascent stages a lot of this technology is, especially when it comes to its adaptability to nonwhite people.”

Racial and gender bias is baked into the base technology that many AI platforms use, said Simon Restelli, the founder of AI SuitUp. For instance, the online photos of doctors that a platform like AI SuitUp trains on tend to be of men; without further refinement, that imbalance skews the generated images. More attention needs to go into the data and photos fed into the base models that companies like Restelli’s build on, he said, adding that this is already happening as consumers demand a better product.

“I believe the community will be paying more attention,” he said. 

Botched Black hairstyles   

“It just really seems that Black people were just not considered when this technology was rolled out,” said Nicole Harris, who used Secta Labs to generate headshots.

Marko Jak, the founder of Secta Labs, said his company is working hard to rectify the AI snafus. Secta Labs recently rolled out a new version of its technology that allows users to identify their race when they create an account. That way, he said, the AI can draw from narrower, ethnicity-specific base models to generate people’s headshots.

“You can’t say, ‘My customer is a professional, but not with this skin color,’ right? That’s just screwed up,” he said.

Jak regularly monitors social-media posts that customers make about their results. When a customer has a poor experience, he often reaches out to glean more feedback from them and offer a refund or new headshots for free, Jak and several of his customers said. 

Sophia Carter, a Raleigh, N.C.-based workforce strategist for FedEx, said she recently had a positive experience with Secta Labs. Though the AI didn’t properly generate Carter’s sisterlocks, a hairstyle consisting of hundreds of tiny dreadlocks, she said she was pleased with the variety of other Black hairstyles in her results, including cornrows, Afros and twist-outs. She got at least 100 “usable, amazing headshots,” Carter said.

Harris, for her part, used her results only to illustrate her experience to others online; none made the cut as a profile photo. She now plans to spend hundreds of dollars on headshots from a professional photographer instead of relying on AI tools.

The underlying problem

AI image generators often rely on foundation models built from “a soup of data” scraped from the internet, which reflects and perpetuates historical racial and gender biases, said Liz O’Sullivan, chief executive of Vera, a company developing software to help AI firms control the behavior of the models they use. Simply mirroring what’s on the internet can produce some of the wacky results that people of color are getting, she said.

“If you’re not taking the time and care—and spending the money—to curate your training data and figure out whether those correlations are present, then you’re just getting what you pour into it. And that is how things go wrong,” O’Sullivan said.

While it’s impossible to eradicate human bias, O’Sullivan said, one way companies can fight bias seeping into their products is to regularly and manually update the data these platforms train on.

One open-source base model that many AI image generation services use is owned by Stability AI. Several AI service founders, including Jak and Restelli, say Stability AI’s model appears to use data sets that perpetuate racial and other biases. 

The company said it’s working to solve these biases. The latest version of its model, released July 26, should improve the accuracy of images for people of all racial and ethnic backgrounds. The new model, called SDXL 1.0, generates images with more vibrant and precise colors, lighting and shadows, the company said. 

“All AI models have inherent biases that are representative of the data sets they are trained on,” a company spokesman said in a written statement. “We intend to train open-source models on data sets specific to different countries and cultures, which will serve to mitigate biases caused by overrepresentation in general data sets.”
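SDXL 1.0’s weights are openly distributed, which is part of why so many image-generation services build on it. For context, here is a minimal sketch of sampling directly from the base model with the open-source diffusers library (the prompt is mine, and results will vary):

```python
# Minimal sketch: generating an image straight from Stability AI's open
# SDXL 1.0 base model via the diffusers library. Prompt is illustrative.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

image = pipe(
    prompt="professional headshot of a software engineer, studio lighting"
).images[0]
image.save("sdxl_headshot.png")
```

Headshot services layer their own fine-tuning and data curation on top of a base like this, which is where, per Restelli and O’Sullivan, the work of mitigating bias has to happen.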


Write to Katie Mogg at katie.mogg@wsj.com