US civil rights enforcers warn employers against biased AI

Assistant Attorney General for Civil Rights Kristen Clarke speaks at a news conference at the Department of Justice in Washington, Thursday, Aug. 5, 2021. Credit: AP Photo/Andrew Harnik, File

The federal government said Thursday that artificial intelligence technology to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.

The US Justice Department and the Equal Employment Opportunity Commission jointly issued guidance urging employers to take care before using popular algorithmic tools that are meant to streamline the work of evaluating employees and job prospects but that could also violate the Americans with Disabilities Act.

“We are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” Assistant Attorney General Kristen Clarke of the department’s Civil Rights Division told reporters Thursday. “The use of AI is compounding the longstanding discrimination that jobseekers with disabilities face.”

Among the examples given of popular work-related AI tools were resume scanners, employee-monitoring software that ranks workers based on keystrokes, and video interviewing software that measures a person’s speech patterns or facial expressions. Such technology could potentially screen out people with speech impediments or a range of other disabilities.
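To make the failure mode concrete, here is a deliberately simplified, hypothetical Python sketch of a resume screener; it is not drawn from any actual vendor's product, and every name and threshold in it is invented for illustration. A facially neutral rule, such as rejecting resumes with long employment gaps, can quietly screen out qualified applicants whose gaps reflect disability-related leave, before any human reviews the application.

```python
# Hypothetical sketch (not any real vendor's product): a naive resume
# screener that filters on keyword matches and employment gaps. The
# "gap" rule looks neutral but can exclude applicants whose breaks in
# employment stem from disability-related medical leave.

from dataclasses import dataclass

REQUIRED_KEYWORDS = {"python", "sql"}  # assumed job requirements
MAX_GAP_MONTHS = 6                     # assumed, facially neutral cutoff


@dataclass
class Resume:
    name: str
    keywords: set[str]
    longest_gap_months: int  # longest break between jobs, in months


def screen(resume: Resume) -> bool:
    """Return True if the resume advances to a human reviewer."""
    has_skills = REQUIRED_KEYWORDS <= resume.keywords
    acceptable_gap = resume.longest_gap_months <= MAX_GAP_MONTHS
    return has_skills and acceptable_gap


# A fully qualified candidate whose 9-month gap reflects medical leave
# is rejected automatically, and no human ever sees the application.
candidate = Resume("A. Applicant", {"python", "sql", "spark"}, 9)
print(screen(candidate))  # False
```

The point of the sketch is that the bias lives in the rule itself, not in any explicit reference to disability, which is why regulators are warning that such tools can violate the ADA even when they appear neutral.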

The move reflects a broader push by President Joe Biden’s administration to foster positive advancements in AI technology while reining in opaque and potentially harmful AI tools that are being used to make important decisions about people’s livelihoods.

“We totally recognize that there’s enormous potential to streamline things,” said Charlotte Burrows, chair of the EEOC, which is responsible for enforcing laws against workplace discrimination. “But we cannot let these tools become a high-tech path to discrimination.”

A scholar who has researched bias in AI hiring tools said holding employers accountable for the tools they use is a “great first step,” but added that more work is needed to rein in the vendors that make these tools. Doing so would likely be a job for another agency, such as the Federal Trade Commission, said Ifeoma Ajunwa, a University of North Carolina law professor and founding director of the AI Decision-Making Research Program.

“There is now a recognition of how these tools, which are usually deployed as an anti-bias intervention, might actually result in more bias—while also obfuscating it,” Ajunwa said.


© 2022 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation: US civil rights enforcers warn employers against biased AI (2022, May 12), retrieved 12 May 2022 from https://techxplore.com/news/2022-05-civil-rights-employers-biased-ai.html

