Demystifying AI for HR Leaders

Jackson Roatch, Talent Manager, Corporate Technology and Digital, WEX

The Explosion of AI in HR

There is a lot of hype around AI, and the hype has arrived in the HR space. Some of that hype is warranted: we are in the midst of a technological revolution that has democratized technology. For example, a report from the Stanford Institute for Human-Centered Artificial Intelligence found that between 2017 and 2021, the cost of training an image recognition system decreased from about $1,000 to $4.60. A machine learning model that would have taken tens of thousands of dollars of equipment and a half dozen developers to build in 2013 can now be produced by a single data scientist on a laptop in Python. In addition to greater technological availability, organizations now have more data to feed AI models. Most medium- to large-sized companies hold hundreds of thousands of data points in their HR information systems, and HR tech vendors with talent intelligence platforms can now draw on data from a multitude of large companies, as well as billions of scraped data points.

This revolution has created opportunities for HR functions to apply AI and find operational scale and strategic muscle that was previously out of reach. Unfortunately, what is garnering more publicity is a series of high-profile misuses of AI in the hiring process. Without diving into the specifics of those cases, this article serves as a primer for HR leaders and subject matter experts to recognize and understand AI: what it means, how to think about the key elements surrounding it to ensure effective and ethical use, and some underrated use cases.

What's Under the Hood?

Demystifying AI is key to having a better conversation in HR. Sadly, much of the conversation around AI in HR stems from an incomplete or inaccurate understanding of the technology. First, even the term "AI" can be extremely misleading. The vast majority of AI actually used today falls into a subcategory of AI called machine learning. Machine learning systems are not intelligent by human standards; they are powered by a statistical process of pattern-finding (learning).

Machine learning systems, or models, have the potential to help us make more accurate predictions or create smarter tools. For example, if you use Gmail, emails that are likely to be spam are automatically flagged. Behind the scenes, Google has created a machine learning model that "learned" through pattern recognition: finding the attributes of emails that are most commonly categorized as spam.

You can imagine a scenario where you trained a machine learning model, but the emails (the data) you used to train it set it up for failure. Maybe there weren't enough emails for the algorithm to identify statistical patterns. Even worse, maybe the emails marked as spam were not actually spam! This is what a bad AI model looks like, and as an HR leader or professional, you may already be imagining situations in HR where the consequences for real people are far more serious.
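To make the spam example concrete, here is a minimal sketch in Python, assuming scikit-learn and a made-up handful of hand-labeled emails (this is an illustration, not Google's actual spam filter). The same algorithm that finds useful patterns in well-labeled data will just as confidently learn the wrong pattern when the labels are bad.

# A minimal, illustrative sketch (not Google's actual system) using
# scikit-learn. The emails and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",
    "Lowest price guaranteed, act today",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly HR report is ready for review",
]
labels = ["spam", "spam", "not_spam", "not_spam"]

# The model "learns" which word patterns co-occur with the spam label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)
print(model.predict(["Click here for a free prize"]))  # likely "spam"

# Train on bad data (flipped labels) and the same algorithm confidently
# learns the wrong pattern: garbage in, garbage out.
bad_model = make_pipeline(CountVectorizer(), MultinomialNB())
bad_model.fit(emails, ["not_spam", "not_spam", "spam", "spam"])
print(bad_model.predict(["Click here for a free prize"]))  # likely "not_spam"

In HR, the equivalent of flipped labels might be performance ratings or hiring decisions that encode past bias, which is why the data feeding a model deserves as much scrutiny as the model itself.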
The HR/AI Ecosystem

Most "AI" is not actually intelligent by human standards, so we don't need to worry about Skynet enslaving the human race anytime soon. Instead, the risks and pitfalls of AI in HR are more nuanced and more systemic. Treating AI not as an isolated technology but as a broader ecosystem allows the HR leader to ask better questions about the effectiveness and ethics of AI projects.

Some key components of the HR/AI ecosystem include the data inputs (training data), the business problem or question, the models, the data scientist, the user, and the subjects (employees and candidates). Any single part of that ecosystem can operate as a single point of failure. The bad training data example given above can be especially devastating from a diversity standpoint. If candidate or employee datasets used to train machine