How HUMAN Protocol is revolutionizing human-machine interaction within AI
Artificial Intelligence describes the capacity of machines to make choices based on an ‘artificial’ understanding of the world. While there are many examples of AI products outperforming humans at specific tasks, there are just as many areas in which AI products fail to complete seemingly simple ones.
There are many explanations for this. Human-machine communication is constrained, as implied by the centralized structure of legacy data marketplaces. Such centralization results in a systemic limitation of context, from those who ask the questions to those who supply the answers. It is context, after all, that provides machines with the information necessary to derive meaning and make ‘intelligent’ choices. HUMAN Protocol facilitates greater context in all areas of machine intelligence by allowing a decentralized, bottom-up approach to the trading of data, and by helping to increase the points of human-machine interaction.
Human-machine interaction describes the interdependent relationship between humans and machines in solving increasingly complex problems in data science and, more generally, machine learning.
A simple model of these interactions may look like this: humans (data scientists) create questions to ask other humans (data labelers) via machines (labeling tools) in order to feed the machine learning algorithms that form AI products.
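The loop above can be sketched in a few lines. Everything here is illustrative: the function names and data shapes are hypothetical stand-ins, not part of HUMAN Protocol's actual API.

```python
# Hypothetical sketch of the simple interaction model described above:
# a data scientist poses questions, a labeling tool (the machine) routes
# them to human labelers, and the answers feed a training set.

def create_questions(images):
    """Data scientist: turn raw inputs into labeling questions."""
    return [{"id": i, "prompt": f"What object is in {img}?"}
            for i, img in enumerate(images)]

def labeling_tool(questions, labelers):
    """Machine: route each question to a human labeler."""
    return [{"id": q["id"], "label": labelers[q["id"] % len(labelers)](q)}
            for q in questions]

def labeler(answer):
    """Human: a stand-in for a worker who answers questions."""
    return lambda question: answer

questions = create_questions(["cat.jpg", "dog.jpg"])
labeled = labeling_tool(questions, [labeler("cat"), labeler("dog")])
# The labeled answers become the training set for the ML algorithm.
training_set = {item["id"]: item["label"] for item in labeled}
```

In practice, each of these roles can be swapped out, which is exactly the flexibility the next paragraph describes.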
However, there are other possibilities. Machine learning software could itself create the questions to ask humans. Further still, a machine could create questions to ask another machine that has learnt how to label data.
HUMAN Protocol is changing every possible kind of human-machine interaction. Primarily, it achieves this by supporting decentralized, permissionless systems – which include global labor marketplaces for the labeling of data – that are secure, disintermediated, and automatic.
What HUMAN Protocol offers is not simply access to data – there is plenty of quality data available – but rather access to data labeling services or, in other words, access to new data. Whereas big companies have the resources (and requirement) to run their own data labeling services, most smaller companies do not. For many, their projects are too infrequent or small to justify the huge cost of running such a service.
Inevitably, this limits the kinds of questions being asked in the creation of AI products. Those able to define the question are limited to those who can afford it. If AI products reflect the biases of those who train them, HUMAN Protocol increases the distribution of those who have access to data labeling and, therefore, the people who ask the questions out of which AI products are built.
An inhibitor to the mass adoption of AI has been the difficulty of gathering new data. Data is usually siloed within large companies, creating a self-perpetuating insularity in the industry. Barriers to entry have simply been too great. HUMAN Protocol, meanwhile, is permissionless; anyone can launch a job and determine the scope and cost that aligns with their budget. By increasing availability, HUMAN Protocol allows practitioners to apply new data to many more problems, with less work.
Automation and decentralization go hand-in-hand. Through HUMAN Protocol, software can ask for human insight directly. Not only does this make for a faster and more cost-effective interaction, but it creates a paradigm whereby a machine, if it can determine what data it needs, can request that data from an automated marketplace in which hundreds of millions of workers can respond. A machine-led question-launching platform has the potential to catalyse the next generation of AI products.
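One way to picture a machine-led request: a model flags the inputs it is least confident about and launches a labeling job for just those. This is a minimal sketch under stated assumptions; `request_labels`, `mock_marketplace`, and the confidence threshold are invented for illustration and do not come from HUMAN Protocol's API.

```python
# Hypothetical sketch: a model identifies what data it needs (here, the
# examples it scores with low confidence) and requests human labels for
# them from an automated marketplace.

def model_confidence(example):
    """Stand-in for a model's confidence score on one input."""
    return example["score"]

def request_labels(marketplace, examples, threshold=0.8):
    """Launch a labeling job for the examples the model is unsure about."""
    uncertain = [e for e in examples if model_confidence(e) < threshold]
    return marketplace(uncertain)

def mock_marketplace(job):
    """Stand-in for workers on the marketplace answering a launched job."""
    return [{"input": e["input"], "label": "human-verified"} for e in job]

examples = [{"input": "a", "score": 0.95}, {"input": "b", "score": 0.40}]
new_data = request_labels(mock_marketplace, examples)
```

The design point is that no human needs to sit between the model and the labor pool: the software decides what to ask, and the marketplace supplies the answers.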
HUMAN Protocol does not determine what kind of data can be traded on the marketplace, and has been designed to support multiple applications. It is a protocol – an infrastructure and toolset – and not a marketplace, and is best viewed as a system of rules that facilitate a trustless marketplace by incentivizing decentralized actors to act honestly. What kind of data is traded is up to users. As we cover in an article on our CVAT integration, and demonstrate in our integration documentation, the Protocol is easy to loop new interfaces into.
By not predetermining the kind of data that is traded, and by providing a framework for multiple kinds of data, the Protocol creates a more agile and organic environment for human-machine interactions. Further to this, the Protocol can support factored cognition: the decomposition of a task into subtasks involving different kinds of data, which can be solved separately and then assimilated. The benefit of this is, as touched upon in our article Decentralization: Part One, Metcalfe's law, which states that the value of a network grows in proportion to the square of its number of users. HUMAN Protocol is no different; the more applications on the Protocol, the more possibilities for internal communication and cooperation to solve increasingly complex tasks for AI.
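Factored cognition, as described above, follows a decompose-solve-assimilate shape. The sketch below is a toy illustration of that shape only; the functions are hypothetical and the "solver" is a trivial stand-in for whatever application on the Protocol would handle each subtask.

```python
# Hypothetical sketch of factored cognition: a task is decomposed into
# subtasks, each solved separately (potentially by different applications
# or labor pools), then assimilated into a single answer.

def decompose(task):
    """Split a task into independent subtasks."""
    return [{"part": word} for word in task.split()]

def solve(subtask):
    """Solve one subtask in isolation (a trivial stand-in)."""
    return subtask["part"].upper()

def assimilate(results):
    """Recombine subtask results into a single answer."""
    return " ".join(results)

answer = assimilate([solve(s) for s in decompose("label this image")])
```

Because each subtask can be routed independently, the more applications there are on the Protocol, the more ways a complex task can be split and solved.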
Just as anyone can ask questions in the permissionless system, anyone is free to sign up to answer. HUMAN Protocol has the largest labor pool in the world, with hundreds of millions of responders, across 247 countries and territories.
If AI is to be further integrated into all parts of society, it needs to understand the world more comprehensively. AI requires huge volumes of data to reach a consensus that better reflects reality; not only does HUMAN Protocol give access to relevant datasets at scale, but also to the diversity necessary to ensure those datasets are as reflective as possible.
But, most importantly, by providing open access to a global labor marketplace, HUMAN Protocol accesses a greater quantity of labelers. When it comes to data, quantity is a quality of its own.
A centralized system is permissioned and limited. If we acknowledge that more data is of immeasurable benefit to AI, then the benefit of a permissionless and open system is self-evident. By not determining who answers, what questions are asked, or the kinds of data that transact across the network, HUMAN Protocol creates a framework that allows questions and answers to arrive from a greater populace, and the kinds of data to be more varied, relevant, and flexible. HUMAN decentralizes data science, thereby increasing transparency and competition, and expanding the context out of which questions and answers arise.
We cannot pretend that HUMAN Protocol fixes every problem in the industry. However, it offers a unique solution by distributing all agents in the human-machine interaction, allowing for a more organic and flexible Q&A system for human feedback. For more on why HUMAN feedback, as opposed to machine labeling, is an integral part of data labeling, and how HUMAN can support both human and machine questions and answers, read our article about Neural Networks: Why we need a HUMAN-in-the-loop system.
For the latest updates on HUMAN Protocol, follow us on Twitter or join our community Telegram channel.
The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.