ChatGPT used exploited overseas labor to moderate its language library, investigation finds

OpenAI’s popular, strangely human chatbot ChatGPT was built on the backs of underpaid and psychologically exploited employees, according to a new investigation by TIME.

A data labeling team based in Kenya, managed by the San Francisco firm Sama, was reportedly not only paid shockingly low wages to do work for a company that may be on track to receive a $10 billion investment from Microsoft, but was also subjected to disturbingly graphic sexual content in order to scrub ChatGPT of dangerous hate speech and violence.


Starting in November 2021, OpenAI sent tens of thousands of text samples to the employees, who were tasked with combing passages for instances of child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest, TIME reported. Team members spoke of having to read hundreds of these kinds of entries a day; for hourly wages ranging from $1 to $2, or a monthly salary of $170, some employees felt their jobs were “mentally scarring” and a certain kind of “torture.”

Sama employees were reportedly offered wellness sessions with counselors, as well as individual and group therapy, but several employees interviewed said the reality of mental health care at the company was disappointing and inaccessible. The company responded that it took the mental health of its employees seriously.

The TIME investigation also discovered that the same group of employees was given additional work compiling and labeling an immense set of graphic images, some of which appeared to be increasingly illegal, for an undisclosed OpenAI project. Sama ended its contract with OpenAI in February 2022. By December, ChatGPT would sweep the internet and take over chat rooms as the next wave of innovative AI.

At the time of its launch, ChatGPT was noted for having a surprisingly comprehensive avoidance system in place, which went a long way toward preventing users from baiting the AI into saying racist, violent, or otherwise inappropriate phrases. It also flagged text it deemed bigoted within the chat itself, turning it red and giving the user a warning.

The Ethical Complexity of AI

While the news of OpenAI’s hidden workforce is disconcerting, it’s not entirely surprising: the ethics of human-powered content moderation isn’t a new debate, especially in social media spaces that toy with the lines between free posting and protecting their user bases. In 2021, the New York Times reported on Facebook’s outsourcing of post moderation to an accounting and labeling company called Accenture. Both companies outsourced moderation to employee populations around the world and would later face massive fallout from a workforce psychologically unprepared for the work. Facebook paid a $52 million settlement to affected workers in 2020.

Content moderation has also become the subject of psychological horror and post-apocalyptic tech media, such as Dutch author Hanna Bervoets’s 2022 thriller We Had to Remove This Post, which chronicles the mental and legal turmoil of a company quality assurance worker. For these characters, and for the real people behind the work, the perversions of a technology- and internet-based future are a lasting trauma.

The rapid takeover of ChatGPT, and the subsequent wave of AI art generators, poses many questions to a general public increasingly willing to hand over its data, its social and romantic interactions, and even its cultural creation to technology. Can we rely on artificial intelligence to provide current information and services? What are the academic implications of text-based AI that can respond to real-time feedback? Is it unethical to use artists’ work to build new art in the computer world?

The answers to these questions are elusive and morally complex. Chatbots are not repositories of precise knowledge or original ideas, but they offer an interesting Socratic exercise. They open rapidly expanding avenues for plagiarism, yet many academics are intrigued by their potential as creative prompts. The exploitation of artists and their intellectual property is a mounting concern, but can it be set aside, for now, in the name of so-called innovation? How can creators build safeguards into these technological advancements without risking the health of the real people behind the scenes?

One thing is clear: the rapid rise of AI as the next technological frontier continues to pose new ethical dilemmas about the creation and application of tools that replicate human interaction at a real human cost.

If you have experienced sexual abuse, call the toll-free, confidential National Sexual Assault Hotline at 1-800-656-HOPE (4673), or access 24-7 help online by visiting online.rainn.org.
