The move towards monitoring HR tools and applications for bias is gaining traction worldwide, driven by various global and domestic data privacy laws and the US Equal Employment Opportunity Commission (EEOC). In line with this trend, the New York City Council has enacted new regulations requiring organizations to conduct yearly bias audits on automated employment decision-making tools used by HR departments.
The new rules, which passed in December 2021 with enforcement beginning in 2023, require organizations that use algorithmic HR tools to conduct a yearly bias audit. Under the law, noncompliant organizations may face fines ranging from USD 500 to USD 1,500 per violation.
To prepare for this shift, some organizations are developing a yearly evaluation, mitigation, and review process. Here’s a suggestion for how that might work in practice.
Step one – Evaluate
Organizations should take an active approach to having their hiring and promotion ecosystems evaluated, starting by educating their stakeholders on the importance of this process. A diverse evaluation team consisting of HR, Data, IT, and Legal can be crucial to navigating the evolving regulatory landscape around AI. This team should become an integral part of the organization's business processes. Its role is to evaluate the entire sourcing-to-hiring process and examine how the organization sources, screens, and hires internal and external candidates.
The evaluation team should assess and document each system, decision point, and vendor by the population it serves, such as hourly workers, salaried employees, different pay groups, and countries. Although some third-party vendor information may be proprietary, the evaluation team should still review those vendors' processes and establish safeguards around them. It is crucial that even proprietary AI be transparent, and the team should work to build diversity, equity, and inclusion into the hiring process.
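The inventory described above can be kept in a simple structured form. The sketch below is a hypothetical illustration, not part of any specific tool: the record type, field names, and example entries are all assumptions made for clarity.

```python
# Hypothetical sketch of an inventory record the evaluation team might
# keep for each system and decision point. Field names and example
# values are illustrative, not prescribed by any regulation.
from dataclasses import dataclass, field

@dataclass
class HiringToolRecord:
    name: str                  # the tool or system being documented
    vendor: str                # "internal" or a third-party vendor name
    decision_point: str        # sourcing, screening, hiring, promotion, ...
    populations: list = field(default_factory=list)  # hourly, salaried, country, ...
    uses_ai: bool = False      # does it automate or assist decisions?
    last_audit: str = ""       # date of the most recent bias audit, if any

# Example inventory with one fictional entry
inventory = [
    HiringToolRecord(
        name="resume-screener",
        vendor="ExampleVendor",
        decision_point="screening",
        populations=["hourly", "salaried"],
        uses_ai=True,
    ),
]

# Surface every AI-driven system that has never been audited
needs_audit = [r.name for r in inventory if r.uses_ai and not r.last_audit]
print(needs_audit)
```

Even a lightweight record like this makes it easier to answer the basic compliance questions: which tools automate decisions, who supplies them, and when they were last reviewed.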
Step two – Impact testing
As governments around the world implement regulations on the use of AI and automation, organizations should evaluate and revise their processes to address compliance with the new rules. Processes that use algorithmic AI and automation should be carefully scrutinized and tested for impact according to the specific regulations in each state, city, or locality. With rules varying in scope and stringency across jurisdictions, organizations should stay informed and comply with each requirement to avoid potential legal and ethical consequences.
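To make impact testing concrete, here is a minimal sketch of one widely used disparate-impact check, the four-fifths rule of thumb: compare each group's selection rate against the highest group's rate and flag ratios below 0.8. The function name, group labels, and counts are hypothetical; the specific metrics a given jurisdiction requires may differ.

```python
# Minimal sketch of a four-fifths-rule impact check for an automated
# screening step. Group names and counts below are illustrative only.

def impact_ratios(selected, total):
    """Return each group's selection rate divided by the highest group's rate.

    selected, total: dicts mapping group name -> counts of candidates
    selected by the tool and candidates evaluated, respectively.
    """
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

selected = {"group_a": 48, "group_b": 30}
total = {"group_a": 100, "group_b": 100}

for group, ratio in impact_ratios(selected, total).items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

In this made-up example, group_b's selection rate (0.30) is 0.625 of group_a's (0.48), which falls below the 0.8 threshold and would warrant a closer review of that decision point.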
Step three – Bias review
After the evaluation and impact testing are complete, the organization can start the bias audit, which should be conducted by a neutral algorithmic institute or third-party auditor and may be required by law. It is important to choose an auditor that specializes in HR or Talent and in trustworthy, explainable AI, and that holds RAII Certification and DAA digital accreditation. Our organization is ready to assist companies in becoming data-driven and addressing compliance. If you need any help, feel free to contact us.
Data and AI governance’s role
A proper technology mix can be crucial to an effective data and AI governance strategy, with a modern data architecture such as data fabric being a key component. Policy orchestration within a data fabric architecture is an excellent tool for simplifying complex AI audit processes. By incorporating AI audits and related processes into the governance policies of your data architecture, your organization can gain an understanding of the areas that require ongoing inspection.
At IBM Consulting, we have been helping clients set up an evaluation process for bias and other areas. The most challenging part is setting up the initial evaluation and taking inventory of every piece of technology and each vendor the organization works with to find automation or AI. However, setting our HR clients up on a data fabric can help to make this step smoother. A data fabric architecture offers transparency into policy orchestration, automation and AI management, while monitoring user personas and machine learning models.
Organizations should understand that this audit is not a one-time or isolated event, and it is not just about the regulations a single city or state is enacting. These laws are part of a continuing trend of governments stepping in to mitigate bias, establish ethical AI use, keep private data private, and reduce the harm done when data is mishandled. Organizations must therefore budget for compliance costs and assemble a cross-discipline evaluation team to develop a regular audit process.
The post Global executives and AI strategy for HR: How to tackle bias in algorithmic AI appeared first on IBM Blog.