
HUMAN Protocol and the basics of bias

May 27, 2021



At HUMAN Protocol, we talk a lot about bias. Machine bias is a critical concern for the advancement of ML and AI; high-profile cases of prejudice in AI products have caused concern in the industry, and have already prompted legal authorities to introduce regulation to limit the negative impact of AI biases. It is important for us all to understand bias, and to understand how HUMAN Protocol offers a solution to the problem. While we cannot stop bias, we can limit it. For a more detailed look at bias, we will soon be releasing a piece delving into the intricacies (and different types) of bias.

For the purposes of basic understanding, we will assess two distinct meanings of bias:

Human bias is understood as favouritism or prejudice toward a certain subject. It does not matter whether that subject is a person, a group, an idea, or a thing. At the more harmless end, human bias could relate to a preference for a certain soap; at the other end, it can manifest as racism.

Statistical bias occurs when a statistic over- or underestimates a parameter. A parameter is an absolute, measured across an entire population. For example:

  • An employee asks all of their coworkers: do you like ice cream? 90% say yes. This is a parameter, because everyone was asked.

A statistic, on the other hand, is based on a sample:

  • The employee asks only the coworkers who are in the office on Friday. Two people are sick; three are out at meetings. They ask the same question, and the result is 70%.

This is an example of statistical bias. The statistic (70% of coworkers like ice cream) has underestimated the truth as defined by the parameter (90% of coworkers like ice cream).

The problem is easy to solve (if you are aware of it). The solution? Ask all coworkers. 
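To make the arithmetic concrete, here is a minimal Python sketch of the survey above. The headcounts (a 30-person office, 20 absentees) are invented purely so that the numbers land on the same 90% parameter and 70% statistic; only the logic matters.

```python
# Hypothetical ice-cream survey; headcounts are illustrative only.
all_coworkers = [True] * 27 + [False] * 3        # 27 of 30 like ice cream

# Parameter: computed by asking *everyone*, so it is the ground truth.
parameter = sum(all_coworkers) / len(all_coworkers)      # 0.90

# Statistic: computed only from whoever is in the office on Friday.
# The absentees happen to all be ice-cream fans, so the sample
# under-represents them.
friday_sample = all_coworkers[20:]                       # the 10 people present
statistic = sum(friday_sample) / len(friday_sample)      # 0.70

print(f"parameter (ask everyone):       {parameter:.0%}")
print(f"statistic (ask Friday's crowd): {statistic:.0%}")
print(f"statistical bias:               {statistic - parameter:+.0%}")
```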

But what if you are not asking about ice cream, but trying to build emotion-recognition software around the question: is this face angry, sad, or surprised?

The difficulty of absolute representation increases with the number of people being represented. Unlike the coworkers in the ice cream example, an ML practitioner cannot ask everyone in the world. Even if they could, the parameter would be meaningless to the populations of the world whose answer differed from the majority's.

People’s answers to such questions are inherently subjective; their idea of the emotion depends on many racial, social, and cultural factors, both learned and innate. So who do you choose to ask? What kinds of populations?

The answers to these questions reveal the root of bias in the field of AI.

Bias in AI

AI products are built using machine learning, in which machines derive their own algorithms from sample data. The machine never ‘understands’ anything; it operates on predictive patterns drawn from what it has already seen – and herein lies the problem.
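As a rough illustration (not a description of any real AI product), consider a toy ‘learner’ that simply memorises which label it has seen most often for each input. Whatever regularities – or skews – exist in its sample data become its predictions. The features and labels below are hypothetical.

```python
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (feature, label) pairs drawn from the world."""
    counts = defaultdict(Counter)
    for feature, label in samples:
        counts[feature][label] += 1
    # The "algorithm" the machine builds is just this lookup table:
    # for each feature, predict the label seen most often.
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# A skewed sample: faces with raised eyebrows were mostly labelled
# "angry" by the (narrow) group of annotators who built the dataset.
training_data = [
    ("raised_eyebrows", "angry"),
    ("raised_eyebrows", "angry"),
    ("raised_eyebrows", "surprised"),
    ("smile", "happy"),
]

model = train(training_data)
print(model["raised_eyebrows"])   # -> "angry": the skew becomes the prediction
```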

AI builds for a better future, but it must do so on data of the past or, at best, of today; at once trying to operate effectively in a world that is inherently subjective, limited, and full of prejudice, while trying to create a more equal future for everyone.

The problem is well documented. 

COMPAS is ML-based software used to predict the likelihood of criminal reoffending. It is used as a decision-support tool in U.S. courts.

ProPublica investigated the software and found that, because of the data that was used, and the way it was used, ‘black defendants who did not reoffend over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent)’.

The problem we are talking about is not that black people were predicted to have higher reoffending rates. The problem is that the software ‘misclassified’ defendants’ likelihood of committing another crime.

What kind of bias are we talking about here?

This is where our understanding of the different kinds of bias is helpful: although the manifestation of the bias is human and societal, in the form of racism, the root is statistical. It is easy for such issues to become emotional, but we must remember the root is in the data. Software, in and of itself, cannot be racist, but an algorithm may be faulty, or rely on insufficiently detailed or diverse data. In this case, the algorithm is not doing what it is supposed to do: accurately determine rates of reoffending.

If the machine had better data, it would give more accurate, more representative results. But how do we produce better data? And what do we mean by ‘better’?

The solution

Quantity 

HUMAN Protocol offers access to enormous data-labeling capacity. It not only helps produce large quantities of data, but large quantities of detailed, hyper-relevant data. It achieves this by providing a means – a global labor marketplace – for organizations to incentivize the completion of specific data-labeling tasks, creating, in turn, more specific datasets. HUMAN Protocol ensures there are more instances in which humans interact with machines, to the scale of hundreds of millions of interactions, to provide more globally representative datasets and, through them, more accurate machine intelligence.

When it comes to ML, quantity is a quality of its own. The more data points you have, the more likely edge cases will be reflected as outliers by the broader consensus. 
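As a rough sketch of this point, the snippet below assumes a simple majority vote over annotators’ labels (an assumption made for illustration, not a description of how HUMAN Protocol aggregates answers). With only a few annotators, one idiosyncratic answer can flip the consensus; with many, it becomes an outlier.

```python
import random
from collections import Counter

random.seed(42)

def aggregate(labels):
    """Consensus label: the answer given most often."""
    return Counter(labels).most_common(1)[0][0]

def simulate(n_annotators, p_idiosyncratic=0.3, trials=10_000):
    """Fraction of trials in which the consensus misses the 'true' label."""
    wrong = 0
    for _ in range(trials):
        # Each annotator gives the true label "angry", except that with
        # probability p_idiosyncratic they answer "sad" instead.
        labels = ["sad" if random.random() < p_idiosyncratic else "angry"
                  for _ in range(n_annotators)]
        if aggregate(labels) != "angry":
            wrong += 1
    return wrong / trials

for n in (3, 11, 101):
    print(f"{n:>3} annotators -> consensus wrong {simulate(n):.1%} of the time")
```

With three annotators the consensus is wrong in roughly a fifth of trials; with a hundred, almost never – the individual edge cases are absorbed by the broader consensus.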

Diversity 

The responders available on HUMAN Protocol operate in 247 countries and territories. This is essential for providing data scientists with globally representative data, out of which they can create products that understand the world they operate in.

Access

HUMAN Protocol lowers the barriers to entry to data science. For most, running a data-labeling operation is prohibitively expensive. This has typically limited the voices of those who create AI products: in practice, it is graduate students, data-entry specialists, and AI architects who establish the parameters for the training of AIs.

This is not to point the finger at those who train AI products, but merely to recognise that it is perhaps inappropriate to have a limited populace train products for a global context. Really, there is no one person who can create the standard; no individual could do the job better. This is a situation in which more people and more perspectives, from more culturally appropriate backgrounds, can help to create products that better understand the world. To create a complete view of the world, all kinds of perspectives are required.

Control

Furthermore, HUMAN Protocol gives those producing the dataset control over the bias. If bias is an unavoidable part of data, it is best to be aware of it. The Protocol allows organizations to target certain groups, so that the bias is in their hands and becomes part of the context within which they create their products.

Conclusions

To end bias is implausible. Subjectivity is encoded in us. But we can limit its manifestation in AI by providing those that build AI or ML-powered technologies with access to the vast quantities of diverse data they need to make more representative products.

What we mean when we talk about bias in AI is really poor, insufficient, and unbalanced data that has been used to create products which now reflect a poor, insufficient, and unbalanced worldview. What we call bias is really data error: an unintended consequence of insufficient data. It is somewhat misleading to refer to this as bias, because bias easily implies a societal failing, while the cause – and the area to apply a solution – is the data AI relies upon. The two, while not altogether unassociated, are best kept distinct. Only then can we understand that data error results from how the data is collected, structured, and used for training.

Whether it is ice cream or facial recognition technology, statistical bias is simply the consequence of insufficiently detailed datasets. These two examples may seem miles apart in their significance, but the logic and mathematics behind them are the same. 

The narrower the population asked, the more likely you are to have errors in your data.

In the sphere of AI, such errors can be seismic. If AI is here to create the future, let us make sure it does not merely repeat the mistakes of the past, but creates a future that is fairer for everyone, by being more representative of everyone.

For the latest updates on HUMAN Protocol, follow us on Twitter or join our community Telegram channel.

Legal Disclaimer

The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.

