Artificial Intelligence: Kenyan Workers Subjected To Trauma When Labeling Explicit Content For OpenAI

Training OpenAI's ChatGPT model to avoid explicit content took a heavy mental toll on the human workers who labeled such images and text as offensive.

Richard Mathenge, a Kenyan worker, endured a traumatizing experience while training the OpenAI GPT model. Assigned to a team responsible for teaching the AI about explicit content, Mathenge and his colleagues spent hours each day categorizing and labeling offensive and disturbing texts. The material they encountered included descriptions of child sexual abuse, incest, bestiality, and other explicit scenes. The toll of this work went largely unnoticed, overshadowed by the technical effectiveness of the AI training process.

Mathenge, who had previously worked in customer service, initially saw his role as meaningful and promising. However, the constant exposure to explicit content took a severe emotional and psychological toll on him and his team. The distress caused insomnia, anxiety, depression, panic attacks, and strained personal relationships. Despite OpenAI’s claims of providing routine counseling, Mathenge and his colleagues found the support insufficient and the counselor inexperienced. (Slate)

OpenAI stated that it takes the mental health of its employees seriously and had relied on the practices of its contractor, Sama, to provide wellness programs and counseling. However, the workers felt that the counseling offered was inadequate. OpenAI sought more information from Sama regarding working conditions but learned that Sama was exiting the content moderation space.

Mathenge and his colleagues take some satisfaction in the efficacy of their work: the model can now refuse to generate explicit content and warn users about potentially unlawful requests. Nevertheless, they continue to suffer from the trauma inflicted during the training process.

Mathenge, grateful for employment during a difficult economic period, hopes the tradeoff was worth the personal cost. Meanwhile, although OpenAI told Slate it believed it was compensating its Sama contractors at a rate of $12.50 per hour, Mathenge and his colleagues say they received roughly $1 per hour, and occasionally even less.


