The FTC is cracking down on a firm that claims its AI-powered face recognition tech has ‘zero’ bias
A software provider won’t be allowed to lie about the accuracy of its artificial intelligence-powered facial recognition technology under a new proposed order from the Biden administration.
On Tuesday, the Federal Trade Commission (FTC) said that a proposed consent order with IntelliVision Technologies would prevent the San Jose, California-based company from making misleading claims about its software. That would include any misleading statements about its performance identifying people of different genders, ethnicities, and skin tones.
The order would also bar the company from making any claims about its technology without “competent and reliable” testing to back them up.
Over the last six years, IntelliVision has claimed that its models were trained on millions of faces from around the world and have “zero gender or racial bias,” according to the agency’s complaint. A fact sheet on the company’s website states its technology has an accuracy “as high as +99%.”
If true, that would be a huge deal. For years now, facial recognition technology has had problems identifying women and non-white individuals.
In 2018, a study by Microsoft researchers found that facial recognition software could be wrong as much as a third of the time when it was used to identify darker-skinned women, even as it achieved near-perfect results with light-skinned men. Those issues have largely remained a problem, and several companies have run into trouble as a result.
The National Institute of Standards and Technology, part of the Commerce Department, has tested IntelliVision’s algorithms and found they weren’t even among the top 100 evaluated as of December 2023, the FTC alleged in its complaint.
Instead of testing its technology on millions of faces, IntelliVision took a shortcut, according to the FTC, training on images of some 100,000 people and then creating variants of those images for further testing. The FTC also said that the company didn’t have evidence its anti-spoofing technology couldn’t be tricked by a photo.
“Companies shouldn’t be touting bias-free artificial intelligence systems unless they can back those claims up,” FTC Bureau of Consumer Protection Director Samuel Levine said in a statement. “Those who develop and use AI systems are not exempt from basic deceptive advertising principles.”