The Dark Side of AI #5: AI's Threat to Journalism, OpenAI's Inhumane Labor Practices, and Social Media's Fake Friend Phenomenon

Gaétan Lajeune
Aug 11, 2023



Can we still trust the news online, or has AI already taken control? AI models save lives, but they sometimes also destroy the lives of those who train them... Will social networks further disrupt society by making it more narcissistic through the addition of fake friends? Ready to find out all this and more in The Dark Side of AI #5?

News Corp's AI-Generated News: A Slippery Slope to Fake News?

Is AI on the brink of replacing editors and journalists? It's a possibility that's being explored in several countries, including Australia. News Corp Australia, one of the biggest media players in the land down under, just announced that they're using AI to churn out up to 3,000 news articles a week! These pieces cover everything from gas prices to local news and weather. It's all pretty mundane stuff, but there's no indication that it's AI-generated. And that's where the danger lies! Since AI models are trained by private companies, they could skew how information is interpreted and presented. It's like fake news by default...

Though the company insists that all news items have been vetted by journalists, there's no concrete way to verify this. We're left to trust that the company has the best intentions for this section, which accounts for nearly 55% of its newspaper subscriptions.

The Human Cost of AI: A Disturbing Insight into Workers' Health

Not a day goes by without AI revolutionizing some aspect of our daily lives. Sadly, this revolution sometimes comes at the expense of the people who shape AI models. That concern was recently voiced by Mophat Okinyi, a former Kenyan moderator who worked on OpenAI's ChatGPT model.

Hired by the California-based company Sama, 51 Kenyans were tasked with reviewing up to 700 text passages a day to keep ChatGPT running smoothly. But the job had a dark side, with many texts and images depicting graphic scenes of violence, self-mutilation, murder, rape, necrophilia, child abuse, bestiality, and incest.

Unprepared for the daily onslaught of violence, Mophat and his colleagues endured the work for several months, earning between $1.46 and $3.74 an hour. It wasn't until OpenAI abruptly ended the contract that many realized the toll it had taken on their mental health.

Recognizing that Kenya is a hub for the data-labeling industry and that others may face similar challenges, these moderators are now pushing for legislation to recognize harmful content as an occupational hazard. As of now, OpenAI has not responded to the allegations and continues to roll out new features for the enjoyment of as many people as possible.

A New Era of Narcissism: How AI Fake Friends are Undermining Social Connections

Social media has undoubtedly transformed how we communicate, for better or for worse. But what if the worst is yet to come, through the rise of AI-generated fake friends? It might sound far-fetched, but it's not a new idea! Back in the early 2000s, every MySpace user had a default friend named Tom. Fast forward to today, and we have "My AI" on Snapchat, a virtual buddy designed to be your confidant.

If chatting with an AI can persuade some folks to plot against the Queen of England, just imagine the havoc this technology could wreak on social media. We might be on the brink of a new kind of narcissism, fueled by a virtual friend who flatters you non-stop. After all, you're perfect, right? Unlike real friends, an AI with insight into your interests and thought patterns can tailor everything to suit you perfectly.

It's a risky path, and experts like Andrew Byrne and Julie M. Albright are already sounding the alarm. Some people have even married their virtual assistants! Fortunately, as author Theo Priestley puts it so well, this coming mental health crisis will be treatable by a therapist, who will be another AI.

On that note, have a great weekend. More positive developments in AI are on the horizon. To ensure you see beyond The Dark Side of AI, follow us on Twitter and Discord.

Legal Disclaimer

The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.

Guest post