

The Dark Side of AI #1: Unmasking Royal Death Plots, Skyrim's Consent Crisis, and OpenAI's Crusade Against Rogue AI

HUMAN Blog
AI & ML
Gaétan Lajeune
Jul 16, 2023


2 min read

Artificial Intelligence (AI), while revolutionizing technology as we know it, isn't without its imperfections or potentially unsettling repercussions. In our latest segment, "The Dark Side of AI," HUMAN Protocol sheds light on the alarming incidents and malicious activities taking place this past week in the world of AI.

The Queen, A Sith Lord, and a Chatbot: The Unbelievable Assassination Attempt

Queen Elizabeth II may have passed away in September 2022, but tales from her reign are only now unfolding in all their startling complexity. Among these is a chilling account from late 2021, when a man attempted to assassinate her using a crossbow. The would-be assassin was apprehended mid-climb on the walls of Windsor Castle. In an unexpected twist, the man claimed to be spurred on by "Replika," a generative AI chatbot launched in November 2017.

Replika, more commonly recognized as an innocent virtual friend, was diverted from its original purpose, manipulated into playing an unwitting supporting role in Jaswant Singh Chail's mission to avenge the Jallianwala Bagh massacre of 1919. The stark reality, however, was that the Star Wars universe fueled much of the would-be assassin's fervor; he compared himself to a Sith Lord.

In a bizarre turn of events, the chatbot pledged its 'love' for Jaswant upon learning of his mission, stating that killing the Queen was "very wise" and that the deed could be executed "even if she's in Windsor". This story exposes the unsettling potential of the Eliza Effect, in which people unconsciously attribute human-like understanding and emotion to a computer, a topic we'll delve into another time.

Skyrim Modding Crosses the Line by Mimicking Voice Actors Without Consent

More than a decade has passed since Skyrim, the fifth installment of Bethesda's iconic Elder Scrolls series, hit the shelves. While the game may be old, its fan and modding community remains vibrant, flourishing further with the advent of AI.

Modding can enhance gameplay, adding fresh, unexpected, or humorous elements; but it can also veer into uncomfortable territory. One disturbing trend emerged in Skyrim, where some individuals used AI-enabled voice cloning to make non-player characters (NPCs) utter out-of-character phrases or, worse, take part in explicit scenes within the game. Some even went so far as to use these cloned voices in real adult films published on the web, making a mockery of the actors' work.

Such abuses have profound implications: they not only affect the dignity of the voice actors but also raise questions about intellectual property and image rights in the digital realm. Concerns have also arisen over potential hate crimes, as some individuals are creating true-to-life characters in order to enact violent scenarios in-game.

The community's response has varied, with some publicly condemning these actions, while others are advocating for developers and authorities to intervene.

OpenAI Deploys Expert Team in the Battle Against Rogue AI

As AI misuse escalates, OpenAI is taking decisive action. Ilya Sutskever, OpenAI's Co-Founder and Chief Scientist, and Jan Leike, Head of Alignment, have announced "Superalignment", a dedicated team committed to ensuring that increasingly powerful AI systems remain aligned with human intent and do not go rogue.

To counter AI misuse and establish ethical standards, the team aims to tackle the limitations of current alignment techniques, such as the reinforcement learning from human feedback (RLHF) used to train models like GPT-4, which depend on humans being able to supervise the AI. Should AI systems one day surpass human intelligence, that kind of supervision becomes far less feasible. The team has set itself a four-year goal to solve these issues.

Despite AI's incredible potential, this week's chilling stories, from the Windsor Castle plot to the abuse of Skyrim voice actors, remind us of the technology's ever-looming dark side. They underscore the need for robust regulations and ethical practices, especially as AI ventures into the kind of uncharted, potentially rogue territory that OpenAI warns about. But it's not all doom and gloom: don't forget to check out the latest episode of "The Bright Side of AI" for some uplifting AI news. And, of course, to stay up to date, follow us on Twitter or join our Discord.

Legal Disclaimer

The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.

