Global hiring tech vendor Oyster treads carefully with AI
The HR tech vendor, which serves companies looking to hire employees globally as part of the distributed workforce, is taking a cautious approach to AI, especially generative AI.
Oyster is keeping its distance from the generative AI craze, at least for now.
When the vendor, whose platform helps companies hire, pay and manage employees in 180 countries, recently came out with a new chatbot, Pearl, it powered the bot with basic conversational AI, not the generative variety.
That's largely because Oyster wanted to skirt generative AI's well-known risks of outputting inaccurate and biased information, said Michael McCormick, senior vice president of product and engineering at Oyster, on this week's episode of TechTarget Editorial's Targeting AI podcast.
The vendor is a certified B Corporation with a mandate to focus on social and environmental performance.
"One of the big problems with generative AI that everyone knows about is its tendency to hallucinate," McCormick said. "We've seen examples of people wresting control away from the intent of the generative AI programmers and convincing the generative AI to do and say all sorts of awful things.
"And there is not enough data capturing the experience of underserved and underrepresented groups," he added. "And so there's a huge amount of risk if you try to have guidance from systems like that in the HR space."
Pearl is Oyster's first public foray into using AI to interact with users of its platform. The chatbot answers, in conversational format, questions about hiring and remote employment regulations across the dozens of countries where distributed workforces operate.
The chatbot is trained on Oyster's wealth of static information about global HR policies, taxes and benefits. It essentially functions as a private large language model, with Oyster employees serving as humans in the loop to ensure Pearl gives simple, consistent and accurate advice, further minimizing generative AI risk.
"If you give an individual the ability to have a direct conversation with a generative AI, you give up control of what might happen," McCormick said. "And you're at the mercy of OpenAI or Bard or whomever in terms of how they try to control that."
Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.