Artificial Intelligence: Kenyan Workers Subjected To Trauma When Labeling Explicit Content For OpenAI

Training OpenAI's ChatGPT model to avoid explicit content took a huge mental toll on the human beings who labeled such images and text as offensive.

Richard Mathenge, a Kenyan worker, endured a traumatizing experience while training the OpenAI GPT model. Assigned to a team responsible for teaching the AI about explicit content, Mathenge and his colleagues spent hours each day categorizing and labeling offensive and disturbing texts. The material they encountered included descriptions of child sexual abuse, incest, bestiality, and other explicit scenes. The toll of this work went largely unnoticed, overshadowed by the technical effectiveness of the AI training process.

Mathenge, who had previously worked in customer service, initially saw his role as meaningful and promising. However, the constant exposure to explicit content took a severe emotional and psychological toll on him and his team. The distress caused insomnia, anxiety, depression, panic attacks, and strained personal relationships. Despite OpenAI’s claims of providing routine counseling, Mathenge and his colleagues found the support insufficient and the counselor inexperienced. (Slate)

OpenAI stated that it takes the mental health of its employees seriously and had relied on the practices of its contractor, Sama, to provide wellness programs and counseling. However, the workers felt that the counseling offered was inadequate. OpenAI sought more information from Sama regarding working conditions but learned that Sama was exiting the content moderation space.

Mathenge and his colleagues take some satisfaction in the efficacy of their work: the AI model can now effectively refuse to generate explicit content and warn users about potentially unlawful requests. Nevertheless, they continue to suffer from the trauma inflicted during the training process.

Mathenge, grateful for employment during a difficult economic period, hopes the tradeoff was worth it despite the personal cost. Meanwhile, although OpenAI asserted to the author that it believed its Sama contractors were being compensated at a rate of $12.50 per hour, Mathenge and his colleagues claim they received approximately $1 per hour, and sometimes even less.

Related Story:  An Investigation By Time Reveals The Dark Side Of Training Chatbots On Offensive Content

