Thursday, May 26, 2016

Hidden Bias In Algorithms

A New System Can Measure the Hidden Bias in Otherwise Secret Algorithms

A powerful tool for algorithmic transparency

Russell Brandom | May 25, 2016

Researchers at Carnegie Mellon University have developed a new system for detecting bias in otherwise opaque algorithms. In a paper presented today at the IEEE Symposium on Security and Privacy, the researchers laid out a new method for assessing the influence of an algorithm's various inputs on its decisions, potentially providing a crucial tool for corporations or governments that want to prove a given algorithm isn't inadvertently discriminatory.
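The core idea of assessing input influence can be illustrated with a simplified sketch. The following is a hypothetical, minimal version of the approach (the paper's actual method is more sophisticated, using set interventions and cooperative-game measures): treat the model as a black box, randomly resample one input feature while holding the others fixed, and measure how often the decision changes. All names below (`feature_influence`, the toy `model`) are illustrative, not from the paper.

```python
import random

def feature_influence(model, dataset, feature_index, trials=100):
    """Estimate one feature's influence on a black-box model by
    resampling that feature from its observed values and counting
    how often the model's decision flips (a simplified, unary
    version of the input-influence idea)."""
    values = [row[feature_index] for row in dataset]
    changes = 0
    total = 0
    for row in dataset:
        original = model(row)
        for _ in range(trials):
            perturbed = list(row)
            perturbed[feature_index] = random.choice(values)
            if model(perturbed) != original:
                changes += 1
            total += 1
    return changes / total

# Toy black-box model that decides purely on feature 0.
model = lambda x: x[0] > 0
data = [(1, 5), (-1, 5), (2, -3), (-2, -3)]

print(feature_influence(model, data, 0))  # substantial influence
print(feature_influence(model, data, 1))  # zero influence
```

A feature that the model ignores scores exactly zero, while a feature that drives the decision scores high; applied to a protected attribute such as race or gender, a high score would flag potential discrimination even without access to the model's internals.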

"The IBM system highlights a particular bias that can creep into algorithms though: Any bias in the data fed into the algorithm gets carried through to the output of the system. ...The implications are fairly straightforward: If Slovenia scores on a controversial play against the U.S., the algorithm might output “The U.S. got robbed” if that’s the predominant response in the English tweets. But presumably that’s not what the Slovenians tweeting about the event think about the play. It’s probably something more like, “Great play — take that U.S.!”" Source:

<more at; related articles and links: "When Discrimination Is Baked Into Algorithms: As more companies and services use data to target individuals, those analytics could inadvertently amplify bias" (September 6, 2015) and "Racism is Poisoning Online Ad Delivery, Says Harvard Professor: Google searches involving black-sounding names are more likely to serve up ads suggestive of a criminal record than white-sounding names, says computer scientist" (February 4, 2013)>
