Conversations about artificial intelligence have been everywhere recently. Congress held hearings about it. The news has reported on it. But how does AI impact companies and human resources?

According to the Pew Research Center, 62% of Americans believe AI will have a major impact on workers, but only 28% believe it will affect them directly. Unfortunately, AI is already affecting employees: 4,000 job cuts in May 2023 were attributed to AI, the first time artificial intelligence was cited as a reason for layoffs.

AI In The Hiring Process

AI tools used in the hiring process have been praised for saving managers time and for creating a more diverse applicant pool by removing bias from the initial review. However, concerns have been raised that unintentional bias is built into the tools themselves.

Resume review tools use predictive analytics to determine which candidate profile would best fit an open position and then compare resumes to find the “best available” candidates. But if a candidate uses words or phrases that don’t match the tool’s expectations, they will receive a lower evaluation for reasons unrelated to their actual qualifications.
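As an illustration only (not any vendor’s actual algorithm), the keyword-matching problem described above can be sketched in a few lines of Python: a naive scorer that counts “expected” terms will rate two resumes describing the same experience very differently, based purely on word choice. The expected terms and resume text below are hypothetical.

```python
# Hypothetical illustration of naive keyword-based resume scoring.
# Terms the imagined tool was configured to expect in a "good" resume:
EXPECTED_TERMS = {"managed", "led", "budget", "stakeholders"}

def score_resume(text: str) -> float:
    """Return the fraction of expected terms found in the resume text."""
    words = set(text.lower().split())
    return len(EXPECTED_TERMS & words) / len(EXPECTED_TERMS)

# Two resumes describing essentially the same experience:
resume_a = "Managed a team of five and led stakeholders through budget planning"
resume_b = "Oversaw a team of five and guided partners through financial planning"

print(score_resume(resume_a))  # 1.0 - matches every expected term
print(score_resume(resume_b))  # 0.0 - same experience, different wording
```

The second candidate is penalized entirely for phrasing, which is the kind of arbitrary gap real screening tools can introduce when their training data favors certain vocabulary.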

More concerning are tools which analyze an applicant’s personality, knowledge and communication skills using recorded responses to interview questions and facial expressions. These tools match candidates to a profile of the company’s “ideal employee” using criteria including:

  • Appearance
  • Communication skills
  • Speech patterns
  • Body language
  • Personality

Some of these tools have been found to be biased, screening out candidates of certain genders, races and ethnicities, or those with disabilities, by giving lower scores for factors that do not match the “ideal” parameters in the programming.

Regulation of these tools is already underway. On April 25, 2023, four federal agencies – the U.S. Equal Employment Opportunity Commission, the Department of Justice, the Consumer Financial Protection Bureau and the Federal Trade Commission – issued a joint statement addressing concerns about AI and its potential impacts.

Illinois, Maryland and New York City have already passed laws regulating the use of “automated employment decision tools” in the hiring process, and many other states and cities are considering similar laws.

AI-Generated Content

Most of the latest news is around “chatbots” and the AI-generated content they produce. In the workplace, chatbots can be used to research topics and to generate content such as policies, procedures, emails, letters and disciplinary action.

  • For HR purposes, AI can help draft content addressing legal matters, uncomfortable topics and messages intended for general audiences.
  • However, AI has also been shown to generate content that lacks empathy, is non-specific, disregards the privacy of others, offers no face-to-face interaction or contradicts itself.
  • Asking the same question in different ways can produce different results, which can further complicate or confuse an issue.

Beyond these concerns are the inherent limitations of chatbots: they are built on large language models that draw on many available data sources, and the results are only as good as the data the model references, which is not always valid or accurate.

  • In some cases, chatbots have fabricated their own inaccurate reference material and then used it to develop and validate an answer, even though that answer is incorrect or fictional.

To build their databases, chatbots may retain all entered information for future reference by any user. Since users must input specific information to get the best results, they may need to enter sensitive or confidential information, which is then added to the chatbot’s database. Depending on what a future user enters, companies may find their confidential data available to anyone asking the right questions.

Actions To Take Before Using AI

As you determine how AI will be allowed in your workplace, consider taking the following actions:

Do your research into AI: Understand what defines AI as well as the advantages and drawbacks of each tool. Consider reviewing a variety of resources to learn as much as possible about AI. Some useful articles include the New York Times’ Chatbot Primer (a five-part series), Prompts for More Effective Chatbot Results and Conductor’s piece on using a chatbot.

Research your AI tools: Learn how AI is incorporated into tools you use now or may rely on in the future. If you choose to use AI tools, be sure to understand their validity and limitations, the science behind them and whether they have been properly tested to remove implicit biases.

Establish policies and procedures on AI use: Draft a policy outlining when and how AI can and cannot be used. Include clear statements prohibiting discrimination and the disclosure of confidential information. While the policy can be general enough to cover any AI, develop exact procedures and expectations as you adopt specific AI tools.

Train employees and managers: As you expand the use of AI tools in your company, train your employees and managers on when and how to use them properly and legally. Instruct users on what is and is not allowed, as well as expectations such as reviewing and fact-checking all content before releasing it and personalizing any AI-generated letter to an employee or customer.

Affinity HR Group will continue to monitor this emerging technology and the regulatory landscape around its development and use.

McAllister is vice president for compliance at Affinity HR Group, Inc., PPAI’s affiliated human resources partner. Affinity HR Group specializes in providing human resources assistance to associations such as PPAI and their member companies. To learn more, visit www.affinityHRgroup.com.