Complex, efficient data: how HUMAN Protocol offers a solution for ML
AI products are built from vast volumes of data. Less widely appreciated, however, is that the detail and quality of that data help to create robust datasets and reduce the need for computation.
HUMAN Protocol is an open framework that supports the transaction of data across many kinds of data labeling applications. Currently, Intel CVAT, hCaptcha, and INCEpTION operate on the Protocol. While hCaptcha is a fairly simple data labeling tool yielding binary results for users who have limited screen space, automation, and time, the scope for more complex tools is demonstrated by the other two applications: Intel CVAT and INCEpTION.
CVAT, for example, is a semi-automated video annotation tool that allows highly granular datasets to be created. While Workers create the detailed data annotations, the Protocol organizes the automated quality check, formulation, and delivery of those detailed datasets, with real-time upload so that Requesters can check their results as they come in.
To understand how HUMAN Protocol facilitates quality verification of data labels, it is important to know about ground truths. A ground truth is simply a piece of data that has been labeled to the required standard and can therefore act as an example of good labeling, against which future labeled data can be compared. No single ground truth fits all; instead, Requesters who provide the data to be labeled must also provide a ground truth representing the accuracy threshold they wish to maintain.
How does HUMAN Protocol use this ground truth? The Recording Oracle is the first to receive answers from the Job Exchange. This oracle is responsible for recording the answers on-chain and making an initial, but not final, assessment of answer quality. To do so, the Recording Oracle checks answer quality against the ground truth provided by the Requester; note, however, that it holds only a small portion of that ground truth.
This is because the Recording Oracle’s work is then checked by the Reputation Oracle; to check the work, the Reputation Oracle must have access to more of the ground truth than the Recording Oracle. The Recording Oracle passes the results on to the Reputation Oracle, which then makes the final determination of answer quality, assigns the reputation score for the Worker or specific website, and updates the smart contract to reserve funds for completed work.
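The two-tier check described above can be sketched in a few lines. This is a minimal illustration, not the Protocol's actual implementation: the item names, slice sizes, and scoring function are all hypothetical, and the real oracles operate on-chain rather than in-memory.

```python
import random

def score_against_ground_truth(answers, ground_truth):
    """Return the fraction of a worker's answers that match known-good labels.

    Only answers whose item IDs appear in this oracle's ground-truth slice
    are scored; everything else is accepted provisionally.
    """
    scored = [item for item in answers if item in ground_truth]
    if not scored:
        return None  # no overlap with this oracle's ground-truth slice
    correct = sum(1 for item in scored if answers[item] == ground_truth[item])
    return correct / len(scored)

# Hypothetical ground truth provided by the Requester.
full_ground_truth = {f"img_{i}": ("car" if i % 2 == 0 else "not_car")
                     for i in range(100)}

# The Recording Oracle holds only a small slice; the Reputation Oracle,
# which checks its work, holds a larger one.
keys = list(full_ground_truth)
random.seed(0)
random.shuffle(keys)
recording_slice = {k: full_ground_truth[k] for k in keys[:10]}
reputation_slice = {k: full_ground_truth[k] for k in keys[:40]}

# A worker's submitted answers (here, simply "car" for everything).
worker_answers = {f"img_{i}": "car" for i in range(100)}

initial_score = score_against_ground_truth(worker_answers, recording_slice)
final_score = score_against_ground_truth(worker_answers, reputation_slice)
```

Because the Reputation Oracle scores against a larger ground-truth slice, its assessment is both harder to game and a natural audit of the Recording Oracle's initial pass.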
In essence, a hierarchy of knowledge is created, allowing each oracle to check the work of the one below it in the pyramid and before it in the process.
Because the Protocol supports multiple applications – and the Foundation hopes to bring on even more – there arises the possibility for information sharing between applications. For example, one segment of a CVAT-uploaded image tagged “car” could be sent as one of the nine images on a grid of hCaptcha car images to be labeled; thus, the Protocol allows applications to check on one another, creating a supportive environment that improves every part of the dataset’s quality assurance.
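The CVAT-to-hCaptcha example above can be sketched as follows. This is an illustrative mock-up, not hCaptcha's or CVAT's actual interface: the function names, grid structure, and "car" task are assumptions made for the example.

```python
import random

def build_captcha_grid(known_crop, known_label, unlabeled_crops, size=9):
    """Assemble a hypothetical nine-image, hCaptcha-style grid.

    One cell holds a crop already labeled by another application (e.g. a
    CVAT segment tagged "car"); because its label is known, the user's
    answer on that cell doubles as a quality check, while the remaining
    cells collect fresh labels.
    """
    cells = [{"image": c, "known_label": None} for c in unlabeled_crops[: size - 1]]
    cells.append({"image": known_crop, "known_label": known_label})
    random.shuffle(cells)  # hide which cell is the check
    return cells

def check_user(cells, selected_indices):
    """Return True if the user selected every cell with a known "car" label."""
    for i, cell in enumerate(cells):
        if cell["known_label"] == "car" and i not in selected_indices:
            return False
    return True
```

A user who misses the known cell fails the consistency check; a user who passes it lends credibility to their labels on the other eight cells, which is the cross-application quality assurance the paragraph describes.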
Feedback learning is essential to ML researchers today; more and more learning is done via feedback, as opposed to traditional supervised learning. Supervised learning would be, for example, a human labeling a cat in a thousand images so that a machine can learn to return a yes or no answer. This kind of learning is outlined above as one use case of HUMAN Protocol. Let us now take a look at feedback learning.
In a study published in Nature Neuroscience, a group of MIT neuroscientists at the McGovern Institute for Brain Research found that “adding feedback circuitry [...] improves the performance of artificial neural network systems used for vision applications.” Feedback is an essential procedure for the articulation and growth of detailed, scalable datasets.
Feedback learning inverts the above example. As opposed to a human creating the label and a machine checking it, a machine creates the label and a human checks it. This is a powerful arrangement: it improves machine-led labeling systems while leveraging the inherent power of machine learning to increase the speed and accuracy of data labeling. In this scenario, the human is the point of feedback, a setup often referred to in ML as “human-in-the-loop”.
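One common way to put a human in the loop is to route only the machine's uncertain predictions to a reviewer. The sketch below is a generic illustration of that pattern, under assumptions of our own: the model, the confidence threshold, and the stand-in functions are all hypothetical, not part of HUMAN Protocol.

```python
def feedback_labeling_round(items, model_predict, human_label, threshold=0.9):
    """One round of machine-led labeling with a human in the loop.

    The model proposes a label with a confidence score. Confident
    predictions are accepted as-is; uncertain ones are routed to a human
    reviewer, whose corrections can later be fed back as training data.
    """
    accepted, corrections = {}, {}
    for item in items:
        label, confidence = model_predict(item)
        if confidence >= threshold:
            accepted[item] = label                  # machine creates the label
        else:
            corrections[item] = human_label(item)   # human provides feedback
    return accepted, corrections

# Hypothetical stand-ins for a real model and a real labeling Worker.
def toy_model(item):
    return ("cat", 0.95) if "cat" in item else ("unknown", 0.3)

def toy_human(item):
    return "dog" if "dog" in item else "other"

accepted, corrections = feedback_labeling_round(
    ["cat_001.jpg", "dog_002.jpg"], toy_model, toy_human
)
```

Here the machine handles the image it is confident about, while the human only sees the one it is not; the corrections dictionary is exactly the feedback signal the paragraph describes.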
Importantly, feedback learning also opens up the possibility for ML software to create its own images, not just the labels imposed on them.
Machine-led data labeling opens interesting possibilities for the future of AI. Tasking a machine with routine data labeling frees human labelers from the mundane, while opening up the time and resources for those workers to complete more interesting, important, and human-centric work.
Firstly, this can revolutionize the labeling of non-specialized knowledge. Workers can spend less of their time labeling a car or a dog, because the machine has already created those labels successfully and comprehensively, and instead focus on labeling data that specifically relies on human intuition, such as sentiment recognition in text or emotion recognition in faces. This kind of work can provide a bedrock for the AI products of the future.
Secondly, machine-led systems also have the potential to increase the availability and application of specialized knowledge. For example, if a machine can undertake the mundane, such as labeling spelling errors in a legal document, a previously infeasible market opens up. It would be wasteful, and a slow route to progress, to have a lawyer labeling spelling errors; but if a machine can do that part, a lawyer can use their expertise more creatively and effectively, applying their knowledge of the law to label a document more comprehensively and improve the whole contract.
The idea remains the same across industries; rather than having a doctor label a hundred images of healthy lungs, a machine can do the “grunt-work,” saving the doctor’s valuable time by surfacing only the images that require more specialized, expert opinion.
Machines can label mundane data because humans have done it for decades. Equally, we may well reach the stage where this more interesting data is sufficiently labeled, and the machines that label it sufficiently trained, for this more complex work to be passed on to machines as well; humans then move on to the next layer of labeling. We are at the beginning of the curve. What lies ahead is the potential for human feedback to be freed of the mundane and to apply its intuition to greater, more complex, specific tasks, offering data scientists fresh, unprecedented quantities and styles of data on subjects that were previously scarce or simply non-existent.
For the latest updates on HUMAN Protocol, follow us on Twitter or join our Discord. Alternatively, to enquire about integrations, usage, or to learn more about how HUMAN Protocol supports machine-learning technologies, get in contact with the HUMAN team.
The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.