

‘Enormous potential—and enormous danger’: about the White House AI meeting

HUMAN Protocol
May 9, 2023


The potential of AI to solve some of society's greatest problems is undoubted. The White House statement highlights these “extraordinary benefits”: “from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients.”

But this comes with risk. The White House AI meeting was not the end, but the beginning, of understanding how to mitigate those risks. And while the meeting speaks to regulation in the US, it may indicate the kinds of regulation that will be implemented across the world.

"We're surprisingly on the same page on what needs to happen." – Sam Altman (CEO, OpenAI)

What are the risks?

The risks are enormous. In a statement released after the meeting, Vice President Harris said:

“AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy.”

That is why, for years, HUMAN Protocol has publicized the potential dangers of AI. We’ve examined the problem of bias at both a basic and an advanced level, argued for the need to democratize access to data, and, in our latest series on AI gone wrong (see #1 and #2), explored both the potential dangers and the solutions HUMAN Protocol provides.

Regulation, however, is the most powerful tool for mitigating the risks of AI.

Who attended the meeting?

Attendees included Sam Altman (CEO, OpenAI), Dario Amodei (CEO, Anthropic), Satya Nadella (Chairman and CEO, Microsoft), and Sundar Pichai (CEO, Google and Alphabet), among others. They met with Biden administration staff.

President Biden ‘dropped in’ and said:

“I just came by to say thanks. What you're doing has enormous potential—and enormous danger. I know you understand that. And I hope you can educate us as to what you think is most needed to protect society as well as to the advancement. This is really, really important."

The White House position

A White House statement after the meeting outlines the following as key requirements:

  1. the need for companies to be more transparent with policymakers, the public, and others about their AI systems; 
  2. the importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems; 
  3. the need to ensure AI systems are secure from malicious actors and attacks.

The meeting followed the publication of the Blueprint for an AI Bill of Rights. That summary gives a good indication of how the White House views regulation. There is also a full handbook – From Principles to Practice – which provides detail on the following subjects:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

‘On the same page’

Sam Altman told reporters after the meeting, "We're surprisingly on the same page on what needs to happen."

It is a mistake to think that AI companies want progress ahead of safety. Of course, there will be people who wish to misuse AI, or to place profits ahead of safety.

But it is a mistake to assume that this is the case for all AI creators.

Elon Musk, among thousands of other leading AI experts, signed an open letter calling for a pause on the development of AI “more powerful than GPT-4”. That shows responsibility. It shows an appreciation of the risks, and a willingness to act to mitigate them. Long may it continue.

AI regulation is progress.

It sets the framework by which AI actors can create AI that serves people, rather than harming them. It sets the framework for useful AI to flourish: AI that can help to solve some of society’s biggest issues.

How it relates to HUMAN Protocol

Whatever regulation comes in, there is no doubt that creators of AI will require high-quality data, and lots of it. The regulation may or may not apply directly to a platform such as HUMAN Protocol, which provides the building blocks for AI. But it will, undoubtedly, largely determine how those blocks are used.

HUMAN Protocol can play a part in ensuring AI is aligned with regulatory requirements – such as the above requirement for ‘Safe and Effective Systems’. Below is a small sample of the benefits HUMAN Protocol can offer:

  • Open access to the platform – more scientists building models helps to mitigate bias
  • Broader workpools of annotators – more diverse data mitigates bias
  • Flexibility – new tools can be integrated to provide fresh data (see CVAT, and SLEAP from the Salk Institute, etc.)

As the first such meeting since the launch of GPT-4, the White House AI meeting is a promising sign. It shows that influential governments are keen to learn about the risks of AI and, most importantly, to mitigate them – not to stop AI, but to make sure its incredible power is used for good.

For the latest updates on HUMAN Protocol, follow us on Twitter or join our Discord. Alternatively, to enquire about integrations or usage, or to learn more about HUMAN Protocol, get in touch.

Legal Disclaimer

The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.

Guest post