[Partner article from the HR Technology Conference & Expo, the world's largest HR technology conference]
Vol.9 In the Fight Against Bias, AI Faces Backlash

By: Andrew R. McIlvaine | July 23, 2018
■ Summary of this article by Brian Sherman
As AI evolves, we need to be careful about how much we rely on it to support decision-making,
because AI's effectiveness at rooting out bias depends on the dataset its algorithms operate on.
If AI is to check for human error, who will check the AI?

HR and TA leaders should be on the lookout for adverse impact when using new technology.

Large technology companies tend not to be big fans of government regulation. Recently, however, the president of Microsoft challenged that notion by writing a lengthy blog post calling for greater government scrutiny of facial recognition technology.

“We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology,” Bradford L. Smith wrote. “A world with vigorous regulation of products that are generally useful but potentially troubling is better than a world devoid of legal standards.”

Facial-recognition software is one of the many innovations enabled by artificial intelligence and has been hailed for its potential to increase security and screen out criminals and terrorists. There’s growing concern, however, that the software, along with AI in general, is highly susceptible to the biases of whoever programs it and could therefore unintentionally lead to greater discrimination in areas like recruiting and hiring.

A recent study led by an M.I.T. researcher, for example, found that facial-recognition software from Microsoft and IBM was far more accurate at identifying white men than darker-skinned women. Several years ago, Google came under fire and was forced to apologize after it was discovered that its image-recognition photo app had labeled African Americans as “gorillas.”
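
For readers who want to see what such an audit involves, the core of it is just disaggregated evaluation: computing accuracy separately for each demographic group rather than in aggregate. Below is a minimal sketch in Python; the records are invented stand-ins for real audit data, not the M.I.T. study's figures.

```python
# Minimal sketch: evaluating a classifier's accuracy per demographic
# group, as the M.I.T. audit did for commercial face-analysis systems.
# All records here are synthetic and for illustration only.
from collections import defaultdict

# (true_label, predicted_label, group) triples
records = [
    ("male", "male", "lighter"), ("male", "male", "lighter"),
    ("female", "female", "lighter"), ("female", "male", "darker"),
    ("female", "male", "darker"), ("male", "male", "darker"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for true, pred, group in records:
    totals[group] += 1
    hits[group] += int(true == pred)

# An aggregate number can hide a large per-group gap.
for group in totals:
    print(f"{group}: accuracy = {hits[group] / totals[group]:.2f}")
```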

These examples are especially disturbing because AI has been touted as a fairer way for companies to find talent—by using algorithms to identify people who are highly qualified for a certain job and whose social-media activity suggests they’d be open to a new opportunity, companies could avoid the pitfalls of biased recruiters and hiring managers who might balk at bringing on someone from a different background, race or gender.

Unfortunately, AI can be susceptible to what’s politely known as “algorithmic bias.”

“AI is only as good as the data it analyzes,” says Caitlin MacGregor, CEO of Plum, a company that’s developed hiring software designed to counteract human bias. “It’s garbage in, garbage out.”

She cites the example of a well-regarded AI solution that was designed to identify high performers via social media profiles. Yet when researchers “opened up the solution’s ‘black box,’ they discovered it was using criteria such as whether these people played lacrosse and tennis and read Harry Potter,” says MacGregor.
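
What “opening the black box” can look like in practice: fit a simple, inspectable model and read off which features carry the weight. The sketch below is illustrative only, with invented features and synthetic data; it is not the solution MacGregor describes.

```python
# Sketch: exposing spurious proxy features in a "high performer" model.
# The feature names and data are invented; the point is that inspecting
# learned weights can reveal criteria (lacrosse, Harry Potter) that have
# nothing to do with job performance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["years_experience", "plays_lacrosse", "reads_harry_potter"]

n = 500
X = rng.integers(0, 2, size=(n, 3)).astype(float)
# Simulate a biased training set: "high performer" labels correlate
# with hobby proxies, not with experience.
y = (X[:, 1] + X[:, 2] + rng.normal(0, 0.5, n) > 1).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:20s} weight = {coef:+.2f}")  # hobby proxies dominate
```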

Plum uses a database of 24 trillion “human data points” to help identify candidates who are best-suited for a given role. Recruiters complete a six-minute survey created by industrial-organizational psychologists that’s designed to identify the core competencies for a given role. Job candidates then take a 25-minute assessment that’s designed to determine whether they possess those competencies.
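
Plum does not publish its scoring internals, but the general shape of such matching can be illustrated by comparing a role's competency profile against a candidate's assessment scores. The competency names, weights and cosine-similarity measure below are assumptions for illustration, not Plum's actual method.

```python
# Hypothetical sketch of competency matching: score a candidate's
# assessment against a role profile using cosine similarity.
# Competency names and weights are invented, not Plum's model.
import math

role_profile = {"teamwork": 0.9, "adaptability": 0.7, "persuasion": 0.2}
candidate    = {"teamwork": 0.8, "adaptability": 0.9, "persuasion": 0.5}

def cosine_fit(role: dict, cand: dict) -> float:
    keys = sorted(role)
    dot = sum(role[k] * cand[k] for k in keys)
    norm = (math.sqrt(sum(role[k] ** 2 for k in keys))
            * math.sqrt(sum(cand[k] ** 2 for k in keys)))
    return dot / norm  # 1.0 = perfect alignment with the role profile

print(f"fit score: {cosine_fit(role_profile, candidate):.2f}")
```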

“It’s using AI to replicate an expert system, rather than being a black box with low-quality data,” says MacGregor.

Other vendors such as Koru and Pymetrics also use algorithms in various ways to help companies circumvent bias in the hiring process. Koru uses surveys to identify employees’ strengths and weaknesses and has its software identify people with the same traits, while Pymetrics uses a combination of gamification and neuroscience to identify people who may be best fits for a certain job.

HireVue, which made its name as one of the earliest video-interviewing platforms, uses “emotion detection systems” to screen the faces of video interviewees and evaluate them against models it has created based on a company’s top-performing employees. Although the intent is to remove bias from the hiring process, critics have questioned whether this approach really does what HireVue says it can.

“[HireVue’s system] is alarming, because firms that are using such software may not have diverse workforces to begin with, and often have decreasing diversity at the top,” Meredith Whittaker, co-founder of New York University’s AI Now Institute and founder of Google’s Open Research group, told CNBC.

“And, given that systems like HireVue are proprietary and not open to review, how do we validate their claims to fairness and ensure that they aren’t simply ‘tech-washing’ and amplifying longstanding patterns of discrimination?”

Loren Larsen, HireVue’s chief technology officer, told CNBC, “It is extremely important to audit the algorithms used in hiring to detect and correct for bias. No company doing this kind of work should depend only on a third-party firm to ensure that they are doing this work in a responsible way … it’s the responsibility of the company itself to audit the algorithms as an ongoing, day-to-day process.”

Companies such as IBM have responded to concerns about bias by making changes to their software. IBM recently unveiled a new dataset designed to train facial-recognition systems to recognize a wider range of skin tones. The dataset, which contains 36,000 images from Flickr Creative Commons, is intended to make facial recognition more accurate, the company said.

All companies that use AI for talent acquisition and management should do what they can to guard against bias, says Nathan Mondragon, HireVue’s chief IO psychologist.

“It’s not the algorithms that are biased, it’s the data that’s going into it,” he says. “If people aren’t checking that, it could be a problem.”

He cites the example of data scientists who were trying to develop software that could correctly classify wolves versus huskies. They thought they had succeeded, coming up with an algorithm that was 90 percent accurate at sorting wolves from huskies, until they examined what the algorithm was actually focusing on. It turned out to be classifying the animals based on whether there was snow in the background of the pictures it analyzed; the distinction had nothing to do with whether the animals were actually wolves or huskies, says Mondragon.
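
This “shortcut learning” failure is easy to reproduce. In the toy sketch below, a synthetic snow_in_background feature is perfectly correlated with the wolf label in training, so the model leans on it and then fails on snow-free wolf photos; the features are invented stand-ins for what an image model might extract.

```python
# Toy reproduction of the wolf/husky shortcut: "snow" is correlated
# with the label at training time, so the model learns the background
# rather than the animal, and collapses when the correlation breaks.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 1000
snow = rng.integers(0, 2, n)       # 1 = snow in background
is_wolf = snow.copy()              # training set: wolves always on snow
fur_gray = rng.integers(0, 2, n)   # uninformative animal feature

X_train = np.column_stack([snow, fur_gray])
model = DecisionTreeClassifier(max_depth=2).fit(X_train, is_wolf)

# Deployment: wolves photographed without snow.
X_test = np.column_stack([np.zeros(100, dtype=int),
                          rng.integers(0, 2, 100)])
y_test = np.ones(100, dtype=int)   # all wolves
print("accuracy on snow-free wolves:",
      (model.predict(X_test) == y_test).mean())  # ~0.0: shortcut fails
```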

“If we’re not paying attention to the features the computer is flagging to show the difference between a good and bad performer, for example, then it could turn out it’s using factors such as racial characteristics, things that have nothing to do with job performance,” he says.

Although vendors should have the primary responsibility for ensuring the algorithms they’re using are fed good data, HR and talent-acquisition leaders can also be on the lookout for adverse impact, says Mondragon.

“Run the numbers, don’t just take things at face value,” he says. “Make sure that to get to ‘X,’ you’re not getting race, age and gender differences. It’s not that hard to run those calculations, but people that are doing it right will be more than happy to help you.”
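
“Running the numbers” can be as simple as comparing selection rates across groups. One widely used heuristic, though not named in the article, is the EEOC’s four-fifths rule: flag possible adverse impact when a group’s selection rate falls below 80 percent of the highest group’s rate. The counts below are made up for illustration.

```python
# Minimal adverse-impact check using the EEOC "four-fifths" heuristic:
# flag any group whose selection rate is below 80% of the highest
# group's rate. Applicant and hire counts are illustrative.
applicants = {"group_a": 120, "group_b": 80}
hired      = {"group_a": 30,  "group_b": 10}

rates = {g: hired[g] / applicants[g] for g in applicants}
top = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {ratio:.2f} -> {flag}")
```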


Profile

Brian Sherman, Representative Director, Gramercy Engagement Group, Inc.

Brian worked as an HR consultant at a human-resources consulting firm in New York City, serving Japanese companies in New York and Los Angeles. He then became head of HR and general affairs at Sumisho Computer Systems (USA), supporting the HR operations of Japanese companies in the U.S. from both an internal and an external perspective. After relocating to Japan, he joined Fast Retailing Co., Ltd. to work on global HR, handling HR management for the company's operations in Europe, the U.S., Russia and Asia. He founded Gramercy Engagement Group, Inc. in 2010 and is currently active as a consultant supporting Japanese companies in globalizing their HR functions.

Born in New York State; graduated from Williams College.
Earned the SHRM Senior Professional in Human Resources (SPHR) certification in 2007. Invited researcher at Waseda University's Institute for Transnational HRM.
