A Startup Will Nix Algorithms Built on Ill-Gotten Facial Data

Late last year, San Francisco face-recognition startup Everalbum won a $2 million contract with the Air Force to provide “AI-driven access control.” Monday, another arm of the US government dealt the company a setback.

The Federal Trade Commission said Everalbum had agreed to settle charges that it had applied face-recognition technology to images uploaded to a photo app without users’ permission and retained the photos after telling users they would be deleted. The startup used millions of those images to develop technology offered to government agencies and other customers under the brand Paravision.

Paravision, as the company is now known, agreed to delete the data collected inappropriately. But it also agreed to a more novel remedy: purging any algorithms developed with those photos.

The settlement casts a shadow over Paravision’s reputation, but chief product officer Joey Pritikin says the company can still fulfill its Air Force contract and its obligations to other clients. The startup shut down the consumer app in August, the same month it learned of a potential FTC complaint, and in September it launched face-recognition technology developed without data from the app. Pritikin says those changes were in motion before the FTC came knocking, in part due to “evolution in public sentiment” about face recognition.

FTC commissioner Rohit Chopra, a Democrat, released a statement Monday praising the commission’s thoroughness in the Paravision case, saying the company had been rightly forced to “forfeit the fruits of its deception.”

He contrasted the settlement with a 2019 agreement in which Google paid $170 million for illegally collecting data from children without parental consent. The company was not required to delete anything derived from that data. “Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” he wrote. “This is an important course correction.”


Ryan Calo, a law professor at the University of Washington, says requiring Paravision to delete face-recognition algorithms trained with allegedly ill-gotten images is a sign the FTC recognizes how the rise of machine learning has tightly entwined data sets with potentially harmful software products.

Tech companies once created software solely by paying people to tap the right keys in the right order. But for many products such as face-recognition models or video filtering software, one of the most crucial ingredients is now a carefully curated collection of example data to feed into machine-learning algorithms. “This idea you have to delete the model and the data is acknowledgment those things are closely linked,” Calo says. Face-recognition systems deserve special scrutiny because creating them requires highly personal images. “They’re like Soylent Green—made out of people.”

David Vladeck, a former director of the FTC’s Bureau of Consumer Protection and a law professor at Georgetown, says Monday’s settlement is consistent with prior ones that required deletion of data. In 2013, software company DesignerWare and seven rent-to-own retailers agreed to delete geotracking data gathered without consent from spyware installed on laptops.

Monday’s more expansive deletion requirement for Paravision was approved unanimously, 5-0, by the FTC, which is still controlled by a Republican majority. After President-elect Joe Biden’s inauguration this month, the commission could gain a Democratic majority, potentially making it even more eager to police tech companies. It could also get new support and resources from the Democratic-controlled Congress.

Calo hopes to see the agency get more technical resources and expertise to help it scrutinize the tech industry on a more equal footing. One use for more tech know-how could be to devise ways to check whether a company really has scrubbed not just ill-gotten data but also advantages or tech derived from it. That could be difficult to do for systems involving complex machine-learning models built from multiple sources of data.
