The butterfly effect: algorithmic bias and the search for talent

Does the growing use of automated HR pose a new danger to the fight for equity and inclusion?

The hiring process is often dogged by bias. Few hiring managers would consciously reject an application from a minority candidate, but unconscious biases have been shown time and again to skew hiring decisions.

Diverse applicants are acutely aware of the risks.

‘Back in the early 2000s, I found that it was very easy to identify who diverse candidates were’, says Phyllis Harris, general counsel, chief compliance, ethics and government relations officer at the American Red Cross.

‘You could pick it up from a school, a name, you might pick it up from the organization. What is interesting is that, over time, I have seen a 180° whitewashing in résumés. There are candidates out there that believe that they are not going to get an opportunity to interview if someone can readily identify their racial make-up and their racial identity. I know people who have gone as far as changing their first name because they did not want to stand out.’

‘We like to talk about inauthenticity. How inauthentic are you if you have to make a conscious decision to say this résumé cannot in any way reflect that I am African American or that I am Native American? You are going into a workplace and, on the very first day, you are not comfortable in being yourself.’

But what if you could remove bias from the hiring process? Some claim that you can.

Automated hiring introduces technologies such as artificial intelligence (AI) and machine learning (ML) into the recruitment process, helping to screen out résumés that do not meet requirements. In some cases, these tools are now being used to eliminate the need for a human interviewer to interact with prospective new hires. If the process is less human, there will be less scope for human preconceptions to overlook the best talent. Right?

Not according to Ifeoma Ajunwa, associate professor of law at UNC School of Law and founding director of the school’s Artificial Intelligence and Decision-Making Research Program. She has seen workplaces embrace algorithm-based technologies for convenience and cost savings in recruitment – filtering out high volumes of applicants, even for white-collar roles – as well as in other areas such as training, safety or productivity monitoring, and surveillance. But she has grave misgivings about the technology’s potential for discrimination.

‘Automated hiring is seen by a lot of white shoe firms and big companies as an anti-bias intervention. A lot of these firms are turning to automated hiring in good will, with good intentions – they think it’s a way to ameliorate the bias of hiring,’ she explains.

‘The problem, however, is that because hiring platforms are not well regulated, the reality is that they are in fact replicating the bias that already exists and they could actually be exacerbating it. If you have a flaw in an automated hiring system, that flaw is prone to be replicated a million times, versus when you have one biased manager.’

Disparate impact

The problem is that of disparate impact – apparently neutral processes whose outcomes are nevertheless harmful – where a butterfly-wing beat of inattention, ignorance or worse can set off a tornado of exclusion.

Ajunwa cites the example of video interviewing platforms that use algorithms to measure variables like eye contact or syntax, potentially disadvantaging candidates with autism or deafness.

Closer to home, she describes the case of a former student, a computer science PhD who was rejected by a major company. When she met a company representative at a job fair, it became clear that her skills were a perfect fit for the organization, and the surprised representative investigated. It transpired that an automated program had screened out applicants with a BA in computer science in favour of those with a BS.

‘It’s not really a meaningful difference. But then you also have to think about, could this have other disparate impacts? Who’s more likely to get a BA in computer science versus a BS? Could this actually have bigger disparate impacts than we know?’ she says.

‘There won’t always be quick fixes.’
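
Ajunwa’s BA-versus-BS example can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical applicant data and a hypothetical screening rule; it simply shows how one ‘neutral’ filter can produce very different selection rates for different groups, measured against the ‘four-fifths’ benchmark that US regulators have long used as a rough screen for adverse impact.

```python
# Illustrative sketch only: hypothetical applicants and a hypothetical screening rule.
# It shows how one "neutral" filter (BS required, BA rejected) can produce
# disparate selection rates across groups -- the kind of audit arithmetic
# used as a first check for adverse impact.

from collections import defaultdict

# Hypothetical applicant records: (group, degree) -- not real data.
applicants = [
    ("group_a", "BS"), ("group_a", "BS"), ("group_a", "BA"), ("group_a", "BS"),
    ("group_b", "BA"), ("group_b", "BA"), ("group_b", "BS"), ("group_b", "BA"),
]

def passes_screen(degree: str) -> bool:
    """The seemingly neutral rule: only 'BS' degrees pass the automated screen."""
    return degree == "BS"

totals, selected = defaultdict(int), defaultdict(int)
for group, degree in applicants:
    totals[group] += 1
    if passes_screen(degree):
        selected[group] += 1

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best if best else 0.0
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"  # four-fifths rule
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

On these toy numbers, the filter selects one group at a third of the rate of the other – without ever referring to a protected characteristic.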

Where the puck is going

Ajunwa believes that leaving such consequential decisions to technology merits legal attention – and she sees in-house lawyers as playing a potentially significant role in that scrutiny.

‘I think in-house lawyers are particularly at the forefront of the scene, because some of them may be general counsels advising clients on what sort of hiring programs to put in place. So they do need to be aware of these issues with automated hiring because many vendors will be trying to sell automated hiring programs to large firms, and there really is the duty of these general counsels to ensure that the client isn’t taking on a program that is going to be more of a liability than a help,’ she says.

‘At the bottom line, there’s a huge liability if companies get these issues wrong. There’s an important legal liability,’ adds Nuala O’Connor, senior vice president and chief counsel, Digital Citizenship at Walmart.

The extent of the exposure to that liability won’t be evident until the process is run and the output – potentially reflective of biases along racial, ethnic, national origin and other discriminatory lines – is there for all to see. But that’s the baseline, says O’Connor.

‘I’ve always believed that the role of in-house counsel is not just to say what the law is, but to lift up the company and say, “This is what the right thing is”. The role of the savvy in-house counsel when dealing with emerging technology is a little like the ice hockey saying – you’re looking where the puck is going, not where the puck is. This is where the puck is going. Increased use of technology, increased use of data, increased scrutiny. Increased scrutiny of the values,’ she explains.

‘Just as companies are being scrutinized for the values they espouse and having to take positions on controversial social issues, the technology and the data uses of the company are going to be scrutinized for the values inherent in those programs, policies, processes. I think that the smart lawyer and the smart counselor inside is looking at: “what have we built? How is it working, how is it functioning?” This is the digital architecture of a company, just like the four walls of a store are the physical architecture of our company, and it demands as much scrutiny.’

Joining the dots: technology, law and discrimination

Lauren Krapf, technology policy and advocacy counsel at the ADL, and counsel for its Center for Technology and Society, discusses her work combating identity-based hate online

We know digital discrimination really impacts individuals, especially when it comes to their identity and, specifically, race and ethnicity. Algorithmic bias exists in our digital financial systems, employment systems, and other systems. It tends to mirror the discrimination that individuals experience offline when it comes to disparate impact and harm to marginalized communities. This manifests specifically for race and ethnicity, but also for religious minorities, for the LGBTQIA+ community and other targeted populations. ADL has spent a lot of our time and focus on online identity-based harassment and hate that is seeded through social media and other digital mechanisms. In addition to working to protect targets of identity-based online harassment, ADL’s tech policy work focuses on platform accountability.

In 2017, ADL launched our Center for Technology and Society (CTS), where we have housed experts who focus on research related to hate online and identity-based harassment. CTS works with, and pushes back against, social media platforms in light of the role they play in magnifying hate, extremism, racism and harassment. We have policy experts and engineers building tools to measure hate online and its impact.

At ADL, we research the ways hate and extremism, white supremacy, and harassment manifest in digital spaces, and then engage in advocacy so policymakers and lawmakers can help develop sustainable and meaningful solutions to mitigate the harms that exist.

My portfolio includes supporting ADL’s initiatives from a legal and analytical perspective: what are the legislative and regulatory solutions being proposed? Can these solutions make a positive impact? What are unintended consequences? Can these proposals be improved?

We’re looking to case law and precedent, understanding what language is necessary to close the gaps and loopholes in the law, talking to victims and targets about the struggles they’ve had and their ability – or inability – to bring cases forward, and talking to other legal experts about their theories of change.

When it comes to supporting targets of online harassment, through ADL’s Backspace Hate campaign, we’ve worked with lawmakers in several states to introduce anti-doxing laws, to update cyber harassment and cyber stalking laws, and to introduce swatting laws, because in many states these laws are out of date or don’t exist at all.

When it comes to holding tech platforms accountable for their role in amplifying hate, racism and extremism, ADL is fighting for meaningful internet reform. We are seeing the magnification of hate and the normalization of racism on social media at unprecedented levels and know that tech platforms can do more to stop the proliferation of these harms. ADL believes Section 230 [of the Communications Decency Act, which shields platforms from liability for content posted by third parties] needs to be reformed. But we know that is just one piece of the puzzle. ADL’s REPAIR Plan lays out the different components of pushing hate and extremism back to the fringes of the digital world. This includes things like changing the incentive structures of Big Tech’s toxic business model, increasing transparency and accountability for platforms, and advocating for targets of online hate.

I think these really important questions of our time are rooted in a mix of tech expertise, law, human stories and impact, and so I feel grateful that I get to put the puzzle pieces together, and assist when ADL supports a piece of legislation, or works directly with lawmakers to champion work, or when we talk to our community members and leaders about what they can do. And really just having conversations in communities and raising awareness.

ADL also works directly with law firms that support our impact work. They help our policy and civil rights teams with amicus briefs as well as other research projects. As we look to what’s next, or as we look to understand the nuances within some of these issues more deeply, law firms can provide meaningful assistance. We’re lucky to have a network of dedicated lawyers, law firms and stakeholders who lean in and support the work that we’re doing.

I think it would be great for general counsel and other in-house lawyers to find more ways to work with civil society. There’s no one-size-fits-all piece to the puzzle. As for tech policy reform, we are asking serious questions about the best way to regulate the tech industry: business structures, antitrust reform, privacy regulation, liability, whistleblower protections, and so on. There’s not one single way to fix the internet. It’s not just rooted in the outward-facing policies of tech companies, or the laws that are moving through the legislature. I think that we can really develop creative solutions when partners with different skillsets, each looking at the issues through their own lens, get involved.

Lisa LeCointe-Cephas, chief ethics and compliance officer at Merck International, has not worked with automated HR systems, but she is conscious of her responsibility to consider where bias might hide in processes and tools.

‘Last week I took one of our new diversity, equity and inclusion trainings from the perspective of a black woman – “How does this read for me?” I’m very vocal about providing feedback on that, because I do think that a lot of the things that we’re building to help us, at times, can actually have biases built into them.’

She believes this awareness should also be applied to potential algorithmic bias within automated tools.

‘I’m fascinated by a lot of that work. There are these new platforms that allow you to say, “You know what? Bob is great. We want to get another Bob. And so, what are the characteristics of Bob that allow us to get another person who will be just as successful?” I do worry that a lot of those things have inherent biases built into them,’ she says.

But Ajunwa feels that many in-house lawyers are not truly cognizant of the risks.

‘I think, frankly, that maybe in-house lawyers are just not as technologically astute or as technologically attuned to the nuances of how a lot of automated tools work – such that automated tools that might seem innocuous or that might be neutral could, for example, still have proxy variables that they are using, which are making them ultimately unlawfully discriminatory,’ she explains.
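
To illustrate what a check for proxy variables might look like in practice – as a sketch only, using made-up data and a made-up feature (postcode) – an auditor can ask how strongly a supposedly neutral input predicts membership of a protected group. If the answer is ‘strongly’, the feature can smuggle the protected attribute into an otherwise ‘blind’ model.

```python
# Minimal, hypothetical sketch of a "proxy variable" check: how strongly does a
# supposedly neutral feature (here, postcode) predict membership of a protected
# group? A strong association means the feature can stand in for the protected
# attribute even if that attribute is never given to the model.

from collections import Counter, defaultdict

# Hypothetical records: (postcode, protected_group) -- not real data.
records = [
    ("10001", "group_a"), ("10001", "group_a"), ("10001", "group_b"),
    ("10456", "group_b"), ("10456", "group_b"), ("10456", "group_b"),
    ("10456", "group_a"), ("10001", "group_a"),
]

overall = Counter(group for _, group in records)
by_postcode = defaultdict(Counter)
for postcode, group in records:
    by_postcode[postcode][group] += 1

print("overall group shares:",
      {g: round(n / len(records), 2) for g, n in overall.items()})

for postcode, counts in by_postcode.items():
    total = sum(counts.values())
    shares = {g: round(n / total, 2) for g, n in counts.items()}
    # If a postcode's group shares diverge sharply from the overall shares,
    # the feature is acting as a proxy for the protected attribute.
    print(f"postcode {postcode}: {shares}")
```

A real audit would use a formal measure of association – a chi-squared test, Cramér’s V, or the feature’s power in a model predicting the protected attribute – but the underlying logic is the same.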

Digital ethics and the law

At Walmart, O’Connor is one of a growing band of corporate counsel looking at the intersection between ethics and the evolving digital ecosystem inside companies. Former president and CEO of civil liberties non-profit the Center for Democracy and Technology, O’Connor now heads up a new function called ‘Digital Citizenship’, which includes lawyers, compliance and policy professionals, as well as technologists and data architects. The function also includes a program team called ‘Digital Values’, specifically tasked with scrutinizing the values embedded in AI and ML.

Legal and digital ethics skills are increasingly at a premium, she says, putting the field in the context of mounting regulatory and corporate attention being paid to technology and data practices.

‘I do think it’s a growing field. It’s looking at technology through that civil liberties lens and through that fairness lens and being a lawyer who can help do that, I think is going to be an incredibly valuable skill,’ she says.

‘We talk a lot about the role of privacy, and I sometimes cringe a little bit because I don’t think it’s just privacy anymore. Data protection is necessary but not sufficient. It’s got to be secure; it’s got to be safe, it’s got to be held to the standards and promises that were made when the data was collected. But the use of that data has also got to be fair, it’s got to be transparent, it’s got to be accountable.’

So, how can companies distinguish between platforms selling what O’Connor terms ‘digital snake oil’ and genuinely helpful tools? And how can they make sure that genuinely helpful technologies do not amplify bias and exclusion?

Authentic auditing

For Ajunwa, the answer is auditing – an ongoing, authentic process in which disparate impacts are not only identified but corrected. And she believes it should fall to general counsel to ensure that the right professionals are hired to carry it out.

‘Ultimately, if there is an issue of discrimination, the employer is still responsible. The mere fact of delegating hiring to automated hiring platforms does not remove the legal responsibility to refrain from employment discrimination,’ she warns.

Some employers are sitting up and tackling the risks. In December 2021, the Data & Trust Alliance – a non-profit global consortium of leading businesses formed in September 2020 – announced a set of criteria to mitigate data and algorithmic bias in HR and workforce decisions. Recognizing the risks inherent in its members’ increased use of data, algorithms and AI in the search for talent, these ‘Algorithmic Bias Safeguards’ are intended to help companies assess vendors on criteria such as training data and model designs, bias testing methods and bias remediation. They will be used by companies including American Express, CVS Health and Walmart.

O’Connor has been closely involved, since the Alliance knocked on her door to leverage the skills of her team.

‘It really was a top-down, CEO-to-CEO conversation – these companies are waking up to the realization that they have a lot of information and data about people, and they wanted to do good things with it, better things than I think the public perception is of what the amalgamation of data could mean for their personal lives,’ she says.

‘Most importantly, this is not just: “let’s put up a page of principles on the wall and it’s all good.” We’ve all done that as companies already; that’s a baseline. This is: “how do we implement? How do we operationalize? What are the questions we have to ask our vendors? What are the processes we should put in place?”’

O’Connor co-chairs the Alliance’s algorithmic decision-making in HR working group, which met weekly to consider how automated HR tools should be evaluated for bias and implemented across the HR, procurement and IT departments. The safeguards are the culmination of that project, including checklists for vendors, anti-bias diagnostic tools and metrics for outcomes.

‘Here are all the values, here are the ways you should be scrutinizing your tools, here are the places in the company that these tools might be implemented, whether that’s in hiring of new people, or an area I think that’s overlooked is advancement and promotion – who are we looking at when we’re creating new jobs or promoting people and who is getting to go on learning opportunities?’ she explains.

Joining the dots: technology, law and discrimination

Mika Shah, co-acting general counsel of Mozilla, describes the company’s push for transparency to combat discrimination in the digital adsphere, and how the legal team supports that work

When a company advertises online, it can choose the targeting parameters – the interests, demographics and behavior – of its potential audience. Advertising platforms often offer extremely granular targeting options and, although this can be useful in some situations to find audiences who would be interested in a topic or product, it can also easily be used to discriminate against certain segments of society.

Some examples of the former might be culturally specific products or events – some people would find value in this, whereas others would find it creepy, but not necessarily harmful. By contrast, harmful discrimination through advertising is happening today, although experts are less certain about how ubiquitous the practice is and how much harm it does to individuals and groups. Examples of harmful discrimination online include sending digital reminders of private and traumatizing experiences, engaging in voter manipulation, provoking actual conflict, targeting certain genders in hiring, excluding certain races or households from housing opportunities, and so forth.

The regulatory environment to prevent online discrimination and harm is dated. There are laws in the US and other countries to prevent certain types of discrimination, but they do not address these types of digital practices and the harms that result.

That is why Mozilla advocates for stronger transparency requirements for advertising platforms. This is an opaque space in which the platforms hold the relevant information; without transparency, we can’t have meaningful public discourse on how to prevent digital discrimination and harm. Mozilla also advocates for safe harbors and protections for researchers and journalists studying discrimination and harm online. This is critical to enable a better understanding of which practices are having discriminatory impacts, and the nature of those impacts. It is a key step to better empower regulators to take action.

On the other side of the debate around platform transparency is how advertisers should behave. Many companies engage in routine advertising and they may be engaging in practices that seem to be ‘permissible’ but may in fact be discriminatory. As the regulatory environment evolves, legal teams should proactively ensure that their targeting and data practices can withstand regulatory and consumer scrutiny. This includes going beyond the letter of the law, especially when laws don’t yet exist, to also consider whether a company’s practices are in line with brand values and public trust.

We’re encouraged by recent Congressional proposals on ad transparency and researcher access. A meaningful solution to enable transparency includes the entire toolkit: universal ad transparency, safe harbor for researchers and journalists, and disclosure of high engagement data. Of course, we must carefully craft these rules in a manner that preserves privacy protections. We’ll continue to advocate alongside civil society and academia for thoughtful and effective policy approaches and encourage governments to act worldwide.

Beyond our public policy work, we also have great products that we want everyone to know about. That means we can’t ignore advertising either. We’ve gone from pulling our advertising from Facebook to returning, building transparency into the process ourselves. Our legal team approaches this issue from every angle and collaborates with our internal business partners to understand challenges and implement solutions in line with our values. In this way, legal teams are an important stakeholder in helping companies navigate beyond what the law ‘requires’ (or doesn’t) to what is also the right thing to do for consumers and society.

‘It can be as granular as: let’s look at the code and see what it does, or as general as: here are five questions you should ask your vendor. And I think it’s tailorable to small, medium and large enterprises.’

O’Connor describes other Alliance projects – one on the use of data and technology in M&A due diligence, looking at how data and technology assets are considered in the acquisition or divestiture of companies, as well as the transfer of privacy policies and commitments when entities change form.

Another studies responsible personalization, examining the values input into tailored online experiences, and the values behind the exposure of people to advertisements for products as diverse as clothing, mortgage loans or educational opportunities.

For companies able to successfully audit and uncover bias and unlawful discrimination, says Ajunwa, perhaps the stickier question is how to go about correcting it.

‘Sometimes you might think you’re using variables that are quite important for the job description or for your industry, but those factors or variables may have historical racial bias baked in,’ she says.

‘That then becomes a bigger conversation on how you disentangle variables that are widely used in your industry from racial bias.’

For O’Connor, that conversation has to prize humanity above isolated attributes: ‘It’s not just the data, it’s the decision. What is the judgment that you are applying to a human being and is that fair, is that honouring their full humanity, or is that diminishing them to some kind of stereotype or corner of the universe? I’m a believer that technology can be used to open people’s minds and horizons, but we do need to scrutinize it to make sure that’s actually what’s happening.’

Regulation

Whatever the conversations happening in-house, Ajunwa argues that auditing should also come from outside – from interventions by government agencies like the Federal Trade Commission (FTC) and the Equal Employment Opportunity Commission (EEOC).

At the state and local level, interest is growing: December 2021 saw a bill regulating the use of AI by employers become law in New York City, effective from January 2023. The law requires companies to conduct a bias audit of automated employment decision tools, to notify candidates living in the city that such a tool will be used and which qualifications and characteristics it will assess, and to provide the opportunity to request an alternative process.
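
Many automated tools output a score rather than a pass/fail decision, and an audit of the kind the New York City law contemplates has to handle that case too. The sketch below is a simplified, hypothetical illustration – not the methodology prescribed by the law – of how a scoring tool’s results might be summarized by demographic category, using an above-median ‘scoring rate’ and the ratio of each category’s rate to the highest. It complements the selection-rate sketch earlier in this piece.

```python
# Hypothetical sketch of the kind of summary a bias audit of a scoring tool
# might produce: for each demographic category, the rate at which candidates
# receive an above-median score, and that rate relative to the best-scoring
# category. Simplified illustration only; not any law's prescribed method.

from statistics import median
from collections import defaultdict

# Hypothetical (category, tool_score) pairs -- not real data.
results = [
    ("category_1", 82), ("category_1", 74), ("category_1", 91), ("category_1", 68),
    ("category_2", 55), ("category_2", 71), ("category_2", 63), ("category_2", 80),
]

cutoff = median(score for _, score in results)

totals, above = defaultdict(int), defaultdict(int)
for category, score in results:
    totals[category] += 1
    if score > cutoff:
        above[category] += 1

scoring_rates = {c: above[c] / totals[c] for c in totals}
best = max(scoring_rates.values())

for category, rate in scoring_rates.items():
    print(f"{category}: scoring rate {rate:.2f}, impact ratio {rate / best:.2f}")
```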

At the federal level, the EEOC announced in October last year the launch of an initiative to ensure that AI and other tech tools used in hiring and employment decision-making do not discriminate unlawfully.

‘My greatest wish is for governmental agencies to really take more charge of regulating this arena of automated decision-making,’ says Ajunwa.

For now, the EEOC plans to issue technical assistance providing guidance on algorithmic fairness and the use of AI in employment decisions. But Ajunwa believes that judicious use of similar technology could actually assist in the process of creating and enforcing regulation.

‘I think they can do it by actually deploying some automated decision-making tools themselves. The EEOC could use automated decision-making tools to search for disparate impacts from automated hiring platforms or other online platforms. The FTC, which is in charge of the Fair Credit Reporting Act, could actually use that to ensure applicants get more information about their online applications – basically they can use it to reverse the current informational asymmetry between employees and employers when it comes to automated hiring.’

Stepping out of the silo

If data ethics is not to exist in a vacuum, and if efforts at mitigating bias and disparate impacts in automated employment decision-making are to succeed, they must be part of a collaborative effort, says Ajunwa.

‘There is this siloed effect, where you have ethics organizations off to the side writing papers, doing research, and then you have corporations also off to the side, and there’s not really enough conversation between the two,’ she says.

She hopes that more corporations will collaborate with leading-edge social and computer science researchers and legal scholars in evaluating their programs and flagging issues.

‘I know of companies using what they claim is research to create automated hiring programs, but it’s research from the 1960s. Yeah, sure it’s research, but I can guarantee you how dated, probably racially and gender biased, and so on that will be,’ she says.

The Data & Trust Alliance has also taken a multi-disciplinary approach: O’Connor’s working group brings together professionals from the fields of law, technology, procurement and diversity. The resulting algorithmic bias vendor evaluation criteria were developed by a coalition of member companies, with input from vendors, business, academia and civil society.

At the UNC School of Law, Ajunwa has worked to broaden the skillset of lawyers themselves, founding the Artificial Intelligence and Decision-Making Research Program in recognition of the need for a new generation of lawyers who understand not only the nuances of the law, but also the capabilities and potential of emerging technology – and the legal issues that could arise.

To ensure the message is reaching employers, Ajunwa also sits on a technology advisory council for a Fortune 500 company, providing expert ethical opinion and advice on products and relationships.

‘Currently our model of dealing with discrimination in the United States is flawed. We have a litigation paradigm where you are basically putting out fires after they’ve already started. I think general counsels should counsel their corporations to rethink that model and not wait until they are faced with litigation to address potential issues,’ she says.

‘Instead of waiting to get a lawsuit and then scrambling to defend themselves, maybe they should have advisory councils already in place so that they can get concrete and useful advice as they are conceiving and rolling out products and entering into relationships with other corporations or entities such as governments. A tech advisory council is useful for that and can actually spot issues before they start – they can spot hotspots before they turn into a full-blown fire.’