

How HUMAN Protocol is revolutionizing human-machine interaction within AI

Charlie Child
May 6, 2021


2 min read

Artificial Intelligence is defined by the capacity of machines to make choices based on an ‘artificial’ understanding of the world. While there are many examples of AI products outperforming human intelligence, there are also many areas where AI products fail at seemingly simple tasks.

There are many explanations for this. Human-machine communication is constrained by the centralized structure of legacy data marketplaces. Such centralization systematically limits context, from those who ask the questions to those who supply the answers. It is context, after all, that gives machines the information they need to derive meaning and make ‘intelligent’ choices. HUMAN Protocol facilitates greater context in all areas of machine intelligence by enabling a decentralized, bottom-up approach to the trading of data and by increasing the points of human-machine interaction.

What do we mean by human-machine interaction?

Human-machine interaction describes the relationship of interdependence between humans and machines in solving increasingly complex issues relating to data science and machine learning more generally. 

A simple model of these interactions may look like this: humans (data scientists) create questions to ask other humans (data labelers) via machines (labeling tools) in order to feed the machine learning algorithms that form AI products. 
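The flow above can be sketched in code. This is a minimal illustration with entirely hypothetical types — none of these names come from the HUMAN Protocol SDK — showing a scientist's question passing through a labeling tool to a labeler, whose answer feeds back as training data.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    prompt: str    # e.g. "Does this image contain a cat?"
    asked_by: str  # the data scientist posing it

@dataclass
class Answer:
    label: str
    labeler: str   # the human who supplied the label

@dataclass
class LabelingTool:
    """The 'machine' mediating between scientist and labelers."""
    queue: list = field(default_factory=list)

    def post(self, question: Question) -> None:
        self.queue.append(question)

    def collect(self, labeler: str, label: str) -> Answer:
        self.queue.pop(0)  # hand the next question to a labeler
        return Answer(label=label, labeler=labeler)

tool = LabelingTool()
tool.post(Question(prompt="Does this image contain a cat?", asked_by="alice"))
answer = tool.collect(labeler="bob", label="yes")
# answer.label now joins the training set that feeds the ML algorithm
```

The point is the shape of the interaction, not the implementation: every arrow in the human → machine → human chain is a place where the Protocol can mediate.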

However, there are many possible configurations. A machine learning system could itself create the questions to ask humans. Further still, a machine could create a question to ask another machine that has learnt how to label data.

HUMAN Protocol

HUMAN Protocol is changing every possible kind of human-machine interaction. Primarily, it achieves this by supporting decentralized, permissionless systems – which include global labor marketplaces for the labeling of data – that are secure, disintermediated, and automatic. 

Access to data

What HUMAN Protocol offers is not simply access to data – there is plenty of quality data available – but rather access to data labeling services or, in other words, access to new data. Whereas big companies have the resources (and requirement) to run their own data labeling services, most smaller companies do not. For many, their projects are too infrequent or small to justify the huge cost of running such a service.

Inevitably, this limits the kinds of questions being asked in the creation of AI products. Those able to define the question are limited to those who can afford it. Since AI products reflect the biases of those who train them, HUMAN Protocol broadens the distribution of those who have access to data labeling and, therefore, of the people who ask the questions out of which AI products are built.

An inhibitor to the mass adoption of AI has been the difficulty of gathering new data. Data is usually siloed in large companies, creating a self-perpetuating insularity in the industry. Barriers to entry have simply been too great. HUMAN Protocol, by contrast, is permissionless; anyone can launch a job and determine a scope and cost that aligns with their budget. By increasing availability, HUMAN Protocol allows practitioners to apply new data to many more problems, with less work.


Automation and decentralization go hand in hand. Through HUMAN Protocol, software can ask for human insight directly. Not only does this allow for faster and more cost-effective interaction, but it creates a paradigm whereby a machine, if it can determine what data it needs, can request that data from an automated marketplace where hundreds of millions of workers can respond. A machine-led question-launching platform has the potential to catalyse the next generation of AI products.
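A machine requesting its own data might look like the sketch below. Everything here is hypothetical — the stand-in classifier, the `Marketplace` class, and its `launch_job` method are illustrative and not part of any real SDK — but it shows the paradigm: when the model's confidence is low, it launches a labeling job on its own, without a human deciding to ask.

```python
def classify(sample: str) -> tuple:
    """Stand-in model: confident only about samples it has 'seen'."""
    known = {"cat.jpg": ("cat", 0.98)}
    return known.get(sample, ("unknown", 0.30))

class Marketplace:
    """Hypothetical automated marketplace accepting labeling jobs."""
    def __init__(self):
        self.jobs = []

    def launch_job(self, sample: str, question: str) -> None:
        self.jobs.append({"sample": sample, "question": question})

marketplace = Marketplace()
for sample in ["cat.jpg", "blurry.jpg"]:
    label, confidence = classify(sample)
    if confidence < 0.5:
        # The machine itself determines what data it needs and asks humans.
        marketplace.launch_job(sample, f"What is shown in {sample}?")
```

Only the low-confidence sample generates a job; the confident prediction needs no human insight. That selectivity is what makes a machine-led pipeline cheaper than labeling everything.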

Combining data types

HUMAN Protocol does not determine what kind of data can be traded on the marketplace, and has been designed to support multiple applications. It is a protocol – an infrastructure and toolset – and not a marketplace, and is best viewed as a system of rules that facilitate a trustless marketplace by incentivizing decentralized actors to act honestly. What kind of data is traded is up to users. As we cover in an article on our CVAT integration, and demonstrate in our integration documentation, the Protocol is easy to loop new interfaces into. 

By not predetermining the kind of data that is traded, and by providing a framework for multiple kinds of data, the Protocol creates a more agile and organic environment for human-machine interactions. Further, the Protocol can support factored cognition: the decomposition of a task into different kinds of data, which can be solved separately and then assimilated. The benefit here, as touched upon in our article Decentralization: Part One, follows Metcalfe's law, which states that the value of a network grows with the square of its number of users. HUMAN Protocol is no different: the more applications on the Protocol, the more possibilities for internal communication and cooperation to solve increasingly complex tasks for AI.
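Factored cognition can be illustrated with a toy example. The functions below are hypothetical — the decomposition is hard-coded rather than learned — but they show the shape of the idea: a compound question is split into subtasks that could be answered by different workers (or different applications on the Protocol), solved independently, and then reassembled into one answer.

```python
def decompose(task: str) -> list:
    # A compound question splits into independent checks.
    # (Hard-coded here; in practice the decomposition is itself a task.)
    return [
        "Does the photo contain a dog?",
        "Was the photo taken outdoors?",
    ]

def solve(subtask: str) -> bool:
    # Stand-in for independent human answers to each subtask.
    answers = {
        "Does the photo contain a dog?": True,
        "Was the photo taken outdoors?": True,
    }
    return answers[subtask]

def assimilate(results: list) -> bool:
    # Recombine the separately solved pieces into one verdict.
    return all(results)

verdict = assimilate(
    [solve(s) for s in decompose("Is this a photo of a dog outdoors?")]
)
```

Because each subtask is self-contained, it can be routed to whichever worker or application is best suited to it — which is where a network effect over applications begins to pay off.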

Increasing the pool

Just as anyone can ask questions in the permissionless system, anyone is free to sign up to answer. HUMAN Protocol has the largest labor pool in the world, with hundreds of millions of responders, across 247 countries and territories. 

If AI is to be further integrated into all parts of society, it needs to understand the world more comprehensively. AI requires huge volumes of data to reach a consensus that reflects the world; not only does HUMAN Protocol give access to relevant datasets at scale, but also to the diversity necessary to ensure those datasets are as representative as possible.

But, most importantly, by providing open access to a global labor marketplace, HUMAN Protocol accesses a greater quantity of labelers. When it comes to data, quantity is a quality of its own.

Expanding the context

A centralized system is permissioned and limited. If we acknowledge that more data is of immeasurable benefit to AI, then the benefit of a permissionless and open system is self-evident. By not determining who answers, what questions are asked, or what kinds of data transact across the network, HUMAN Protocol creates a framework that allows questions and answers to come from a greater populace, and the kinds of data to be more varied, relevant, and flexible. HUMAN decentralizes data science, thereby increasing transparency and competition, and expanding the context out of which questions and answers arise.

We cannot pretend that HUMAN Protocol fixes every problem in the industry. However, it offers a unique solution by distributing all agents in the human-machine interaction, allowing for a more organic and flexible Q&A system for human feedback. For more on why human feedback, as opposed to machine labeling, is an integral part of data labeling, and how HUMAN can support both human and machine questions and answers, read our article Neural Networks: Why we need a HUMAN-in-the-loop system.

For the latest updates on HUMAN Protocol, follow us on Twitter or join our community Telegram channel.

Legal Disclaimer

The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.

Guest post
