How IBM is leading the fight against black box algorithms


IBM is leading the fight against black box algorithms with a new set of open source software to help developers understand how their artificial intelligence is making decisions.

Black box algorithms are the complex code at the heart of systems that people increasingly rely on day to day: from everyday things like the news you read and the products you buy, to which stocks a hedge fund invests in or which clients an insurer will cover.

They are increasingly complex in their design and can often reflect the biases of their coders, who sometimes aren't even sure how the system reached its conclusion. There has also historically been little oversight or accountability regarding their design.

Now, with the Fairness 360 Kit, IBM is open sourcing software intended to help AI developers see inside their creations via a set of dashboards and dig into why they make decisions.

The software runs as a service on the IBM Cloud, and an AI bias detection and mitigation toolkit will be released into the open source community by IBM Research.

It promises real-time insight into algorithmic decision making, flags suspected baked-in bias, and even recommends new data parameters that could help mitigate any bias it has detected.
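To make the idea of "detecting baked-in bias" concrete, here is a minimal sketch (plain Python, not the IBM toolkit's actual API) of two fairness metrics such a checker typically computes over a model's decisions: statistical parity difference and disparate impact. The group labels and decision values below are invented toy data.

```python
# Illustrative sketch of two common group-fairness metrics.
# Not IBM's code: the function names and data here are assumptions
# made for the example.

def statistical_parity_difference(decisions, group):
    """Difference in favourable-outcome rates between the unprivileged
    and privileged groups. A value of 0.0 means parity.

    decisions: list of 0/1 model outcomes (1 = favourable)
    group:     list of 0/1 flags (1 = privileged group member)
    """
    priv = [d for d, g in zip(decisions, group) if g == 1]
    unpriv = [d for d, g in zip(decisions, group) if g == 0]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

def disparate_impact(decisions, group):
    """Ratio of favourable-outcome rates (unprivileged / privileged).
    Ratios below roughly 0.8 are often treated as evidence of
    adverse impact."""
    priv = [d for d, g in zip(decisions, group) if g == 1]
    unpriv = [d for d, g in zip(decisions, group) if g == 0]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) / rate(priv)

# Toy data: 8 decisions, 4 per group. The privileged group receives
# the favourable outcome 75% of the time, the unprivileged group 25%.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
group     = [1, 1, 1, 1, 0, 0, 0, 0]

print(statistical_parity_difference(decisions, group))  # -0.5
print(disparate_impact(decisions, group))               # ~0.33
```

A real toolkit layers many more metrics (equal opportunity difference, average odds difference, and so on) and per-slice breakdowns on top of the same basic counting logic.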

Importantly, the insights are presented in dashboards and natural language, “showing which factors weighted the decision in one direction vs. another, the confidence in the recommendation, and the factors behind that confidence,” the vendor explained in a press release.

“Also, the records of the model’s accuracy, performance and fairness, and the lineage of the AI systems, are easily traced and recalled for customer service, regulatory or compliance reasons – such as GDPR compliance.”

The software has been built using models from a variety of popular machine learning frameworks to aid broad and customisable use, including Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML.

The vendor added: “While other open source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit created by IBM Research will help check for and mitigate bias in AI models. It invites the global open source community to work together to advance the science and make it easier to address bias in AI.”
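One well-known pre-processing technique for the "mitigate bias" half of that claim is reweighing (Kamiran and Calders), which AI Fairness 360 is known to implement. The sketch below is an illustrative plain-Python version of the idea, not IBM's code: each training instance gets a weight that makes group membership and outcome statistically independent in the weighted data.

```python
# Illustrative sketch of the reweighing pre-processing idea:
# assign each instance the weight w(g, y) = P(g) * P(y) / P(g, y),
# estimated from empirical counts. Function and variable names are
# this example's own, not the AI Fairness 360 API.
from collections import Counter

def reweighing(groups, labels):
    """Return one weight per instance so that, under the weights,
    the favourable-outcome rate is equal across groups."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Same toy data as before: the privileged group (1) gets the
# favourable label 3 times out of 4, the unprivileged group once.
groups = [1, 1, 1, 1, 0, 0, 0, 0]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
weights = reweighing(groups, labels)
# Over-represented combinations (privileged + favourable) are
# down-weighted to 2/3; under-represented ones are up-weighted to 2.
print(weights)
```

After reweighing, the weighted favourable rate is 0.5 in both groups, so a learner trained on the weighted data no longer sees the group/outcome correlation. In-processing and post-processing mitigations work at later stages of the pipeline on the same principle.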

Algorithmic bias

A great deal of credit for awareness regarding algorithmic bias must be handed to Joy Buolamwini, the founder of the Algorithmic Justice League and MIT Media Lab computer scientist whose Gender Shades project has helped uncover racial bias in facial recognition systems.

Popular books like Cathy O’Neil’s Weapons of Math Destruction and Frank Pasquale’s The Black Box Society: The Secret Algorithms That Control Money and Information have also helped raise awareness of this issue, and it seems like the tech industry is starting to do something about it.

An infamous example of AI bias came to light when investigative journalists at ProPublica reported that COMPAS, an algorithm widely used in the US judicial system to predict the likelihood of reoffending, was racially biased against black defendants.

In the UK, police in Durham have been criticised by civil liberties groups for their use of similar algorithms to predict whether suspects are at risk of committing further crimes.

“Programs need to be thoroughly tested and deployed with rigorous oversight to prevent the existence of prejudice – and AI must never be the sole basis for a decision which affects someone’s human rights,” writes Liberty advocacy and policy officer Hannah Couchman.

Research published this week by Accenture – titled 'Critical Mass: Managing AI's Unstoppable Progress' – found that 70 per cent of organisations adopting AI conduct ethics training for their developers.

“Organisations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” said Rumman Chowdhury, responsible AI lead at Accenture Applied Intelligence.

“These are positive steps; however, organisations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’.”

Ray Eitel-Porter, head of Accenture Applied Intelligence UK, added: “Businesses need to think about how they can turn theory into practice. They can do this through usage and technical guidelines enshrined in a robust governance process that ensures AI is transparent, explainable, and accountable.”
