Hiring algorithms, artificial intelligence risk violating Americans with Disabilities Act, Biden admin says

The Biden administration announced Thursday that employers who use algorithms and artificial intelligence to make hiring decisions risk violating the Americans with Disabilities Act if applicants with disabilities are disadvantaged in the process.

A majority of American employers now use automated hiring technology: tools such as resume scanners, chatbot interviewers, gamified personality tests, facial recognition and voice analysis.

The ADA is supposed to protect people with disabilities from employment discrimination, but just 19 percent of disabled Americans were employed in 2021, according to the Bureau of Labor Statistics.

Kristen Clarke, the assistant attorney general for civil rights at the Department of Justice, which made the announcement jointly with the Equal Employment Opportunity Commission, told NBC News there is “no doubt” that increased use of the technologies is “fueling some of the persistent discrimination.”

“We hope this sends a strong message to employers that we are prepared to stand up for people with disabilities who are locked out of the job market because of increased reliance on these bias-fueled technologies,” she said.

The Biden administration is concerned that the widely used technology can screen out people who have disabilities that do not affect their ability to do the job; gamified personality tests could select against even slight mental disabilities, while software that tracks speech and body language could discriminate against physical disabilities that may be invisible to the naked eye.

“This is essentially turbocharging the way in which employers can discriminate against people who may otherwise be fully qualified for the positions that they’re seeking,” Clarke said.

EEOC Chair Charlotte Burrows said the message to employers is, “if you’re buying a product to look at employment decision-making with AI, check under the hood.” The EEOC released a 14-page technical assistance document Thursday that emphasizes that bias need not be intentional to be illegal.

“We are not trying to stifle innovation here, but also want to make absolutely clear that the civil rights laws still apply,” said Burrows.

The joint announcement is the product of months of investigation into the impact of automated hiring tools. Amid mounting concern from Congress, the public and state and local lawmakers, the EEOC launched an initiative in October to ensure that the emerging hiring tools comply with civil rights laws.

A week ago, the EEOC filed its first algorithmic discrimination case — an age-discrimination suit naming several Asia-based companies operating in New York under the brand name iTutorGroup. All defendants are allegedly owned or controlled by Ping An Insurance, the biggest insurance group in China.

Prosecutors allege that the iTutorGroup enterprise — which hires thousands of US-based tutors each year to provide English-language tutoring services to students in China — programmed its application software to automatically reject female applicants over the age of 55 and male applicants over the age of 60.
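The screen described in the complaint is a simple hard-coded rule, not a learned model. A minimal sketch of logic matching that description follows; the actual iTutorGroup software is not public, and the field names, data structure and evaluation date here are invented for illustration.

```python
from datetime import date

AS_OF = date(2022, 5, 12)  # fixed evaluation date so the example is reproducible

def auto_reject(applicant: dict, as_of: date = AS_OF) -> bool:
    """Hypothetical age/gender screen matching the complaint's description;
    the real software and its data model are not public."""
    age = (as_of - applicant["birth_date"]).days // 365
    cutoff = 55 if applicant["gender"] == "female" else 60
    return age > cutoff  # True: the application is discarded before a human sees it

# Two 56-year-old applicants, identical except for gender:
alice = {"gender": "female", "birth_date": date(1966, 3, 1)}
bob = {"gender": "male", "birth_date": date(1966, 3, 1)}
print(auto_reject(alice))  # True  (over the alleged cutoff of 55 for women)
print(auto_reject(bob))    # False (under the alleged cutoff of 60 for men)
```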

No company affiliated with the iTutorGroup brand responded to requests for comment.

Discrimination charges are not public until the EEOC decides to prosecute them — and there were no complaints about hiring technologies on the agency’s radar prior to 2021. It can be challenging for workers to claim discrimination in the hiring process because applicants usually do not know why they were rejected or what role technology may have played.

But experts say there is no such thing as an unbiased hiring algorithm, in part because the technology is built to predict successful employees based on data about what worked for the company in the past.

“The algorithm doesn’t have a theory of the world, or a concept for disability,” said Amir Goldberg, a Stanford Graduate School of Business associate professor who teaches a class on human resources technologies. “It just learns and predicts based on the data. If the data has biases, it will only reproduce the biases.”
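Goldberg’s point can be made concrete with a toy model. In the sketch below (invented data; a scikit-learn install is assumed), a classifier is fit on historical hiring outcomes in which an employment gap — a feature that can correlate with disability — was effectively disqualifying; the fitted model then penalizes any new applicant with a gap, regardless of qualification.

```python
# Toy demonstration that a model trained on biased outcomes reproduces them.
# All data is invented; scikit-learn is assumed to be installed.
from sklearn.linear_model import LogisticRegression

# Historical records: [years_of_experience, employment_gap_in_years].
# Past managers never hired anyone with a gap, so the gap feature
# fully explains the label, and the model learns exactly that.
X_past = [[5, 0], [7, 0], [6, 0], [4, 0], [6, 2], [8, 3], [5, 1], [7, 2]]
y_past = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_past, y_past)

# Two equally experienced applicants; the second took two years off,
# perhaps for a medical reason the ADA protects.
print(model.predict([[6, 0], [6, 2]]))  # [1 0]: the gap alone flips the outcome
```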

A new wave of state and local legislation is trying to put guardrails on the fast-moving technology. A New York City law will require annual bias audits, while Maryland and Illinois have prohibited the use of facial recognition in video interviews without employee consent.
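At bottom, what such an audit checks is arithmetic on selection rates. A minimal sketch in that spirit follows; the applicant counts are invented, the 0.8 threshold is the “four-fifths rule” from the EEOC’s longstanding Uniform Guidelines, and the New York City law’s exact methodology is not reproduced here.

```python
# Illustrative adverse-impact check of the kind a bias audit performs.
# Counts are invented; the 0.8 ("four-fifths") threshold is the EEOC's
# conventional screen for adverse impact, not the NYC law's full method.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

reference = selection_rate(selected=120, applicants=400)  # 0.30
audited = selection_rate(selected=18, applicants=100)     # 0.18

impact_ratio = audited / reference
print(f"impact ratio = {impact_ratio:.2f}")  # 0.60, well below 0.8
if impact_ratio < 0.8:
    print("Flag: the tool selects the audited group at a disparate rate")
```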

Absent federal regulation, a private sector-led initiative called the Data & Trust Alliance has brought together more than 200 experts, as well as major businesses and institutions across industries, including American Express, Walmart, Meta, CVS, the NFL and Comcast (the parent company of NBC News), to develop safeguards against algorithmic bias in workforce decisions.

“Whenever you say, ‘We would like to find a person who has the following expertise background,’ you have a ‘bias’ towards certain people,” said Jon Iwata, founding executive director of the Data & Trust Alliance. “What we want to identify and mitigate is unfair bias.”

Companies and AI vendors see the technology as key to increasing diversity in the workforce over the long run — a way for hiring managers to engage a wider pool of applicants while trimming out human emotional biases.

But the proprietary algorithms are still largely sold by outside vendors who are not subject to uniform audits or regulation — leaving officials, employers and employees in the dark about exactly how metrics like “employability scores” are calculated.

“The risks of snake oil are significantly high at this stage,” Goldberg said.
