OpenAI has quickly become a hot commodity among tech enthusiasts in 2023, largely due to ChatGPT.
The software, which offers one of the most advanced public chatbots to date, has been touted as a catch-all solution to a wide range of everyday challenges. Colleges have even brought the hammer down on the technology after discovering that students were using it to write entire essays.
However, a new exclusive from Time magazine reveals that OpenAI outsourced work to laborers in Kenya to help identify toxic content in ChatGPT, for a measly $2 an hour.
One of the company’s major hurdles has been ChatGPT blurting out offensive, often discriminatory remarks to users. In an effort to curb this, OpenAI enlisted Sama, a San Francisco-based company that employs workers across Kenya to comb through thousands of text snippets and flag toxic language.
Many of those workers are now coming forward about the extremely graphic and disturbing content they were forced to review, ranging from descriptions of sexual abuse to murder.
“Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content,” an OpenAI spokesperson said.
While this may be true, workers say that the exploitative pay and lack of concern for their wellbeing deepened the trauma they experienced.
“You will read a number of statements like that all through the week,” one anonymous Sama worker told Time. “By the time it gets to Friday, you are disturbed from thinking through that picture.”
However, the spokesperson added that employees had access to “professionally-trained and licensed mental health therapists” at any time, and that it was the responsibility of local managers to support employee wellness.
In February 2022, Sama ended its relationship with OpenAI after being tasked with another graphic assignment: collecting violent images, some of which are illegal in the US.