How Machine Learning with Built-In Explainability Spells the End of Biased Algorithms

Davide Zilli, client services director at Mind Foundry, discusses the importance of built-in ‘explainability’ in AI and machine learning solutions to ensure full transparency and accountability.

The financial sector, like many industries today, relies on algorithms to make sense of data, conduct large-scale machine learning (ML) analysis and forecast outcomes. These algorithms are effective problem-solvers in their own right and valuable for augmenting the contextual, human expertise within an organisation.

Recently, however, AI and ML applications have been in the spotlight for negative reasons, including widely reported debacles such as the gender bias in Apple Card credit limits and the bias in Amazon’s recruitment tool.

Bias can infiltrate the machine learning process as early as the first data upload and review stages. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.

Take gender, for example. It may be a useful descriptor when applied to identify specific health risks, but using it in many other scenarios risks discrimination. Machine learning models will inevitably exploit any parameters – such as gender – in the data sets they have access to, so it is vital for users to understand how a model reached a specific conclusion and whether it was ethically acceptable.
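To make this concrete, here is a minimal sketch – not any vendor’s implementation – of how a user might check whether a trained model is leaning on a sensitive attribute. It assumes a scikit-learn classifier and a pandas DataFrame; the synthetic data and the ‘gender’ column are purely illustrative.

```python
# Minimal sketch: does a trained model rely on a sensitive attribute?
# The data, the 'gender' column and the 0.01 threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "debt_ratio": rng.uniform(0, 1, n),
    "gender": rng.integers(0, 2, n),          # hypothetical sensitive attribute
})
# Synthetic label that deliberately leaks gender, so the check has something to find.
y = ((X["income"] / 100_000 - X["debt_ratio"] + 0.3 * X["gender"]) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each column hurt the model's score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    flag = "  <-- review: sensitive attribute carries signal" if name == "gender" and score > 0.01 else ""
    print(f"{name:12s} importance={score:.3f}{flag}")
```

A large importance for the sensitive column does not settle the ethical question by itself, but it tells the user exactly where to start asking it.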

Letting Light into the Black Box

Every organisation naturally wants to avoid discrimination, including that stemming from a lack of understanding. According to PwC, 84% of CEOs believe AI-based decision-making must be explainable in order to be trusted. This means ensuring algorithms are fully transparent in their decisions, as well as easily validated and monitored by a human expert.

Machine learning tools must demonstrate full accountability to evolve beyond unexplainable ‘black box’ solutions. Only by embracing AI and ML solutions with ‘baked-in’ transparency can we take advantage of humble, honest algorithms that produce unbiased, categorical predictions and consistently provide context, explainability and accuracy insights.

When we say a machine learning solution has explainability built into it, we mean it allows users to trace back and demonstrate the reasoning behind selecting and applying a model to tackle a specific problem, and ultimately justify the outcome.

As a first step, features in the ML tool must enable the inspection of data and provide metrics on model accuracy and health, including the ability to visualise what the model is doing. Key to this is the platform alerting users to potential bias during the preparation stage.
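As an illustration of what such a preparation-stage alert might look like under the hood – a sketch only, not a description of any particular platform – the tool could compare outcome rates across groups of a protected attribute and warn when they diverge. The column names and threshold below are assumptions for the example.

```python
# Sketch of a data-preparation bias alert: compare positive-outcome rates across
# groups of a protected attribute and warn when the gap exceeds a threshold.
# Column names ('gender', 'approved') and the 0.1 threshold are illustrative.
import pandas as pd

def check_outcome_disparity(df: pd.DataFrame, protected: str, target: str,
                            max_gap: float = 0.1) -> None:
    rates = df.groupby(protected)[target].mean()   # positive-outcome rate per group
    gap = rates.max() - rates.min()
    print(rates.to_string())
    if gap > max_gap:
        print(f"WARNING: outcome rate differs by {gap:.2f} across '{protected}' groups "
              "- review the data before training.")

df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F", "M", "F"],
    "approved": [0,    0,   1,   1,   1,   1,   1,   0 ],
})
check_outcome_disparity(df, protected="gender", target="approved")
```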

The next step towards full explainability is requiring the ML platform to provide full user visibility, tracking each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations – such as the European Union’s GDPR ‘right to explanation’ clause – and demonstrate transparency to consumers.
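One simple way to picture such an audit trail – again a sketch, not any specific product – is an append-only log in which every import or transformation is recorded with a timestamp, the user, and a fingerprint of the resulting data. The file name, field names and input file below are assumptions.

```python
# Sketch of an append-only audit trail for data-science steps: each entry records
# when the step ran, who ran it, what was done, and a hash of the resulting data.
# File names and field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

import pandas as pd

AUDIT_LOG = "audit_trail.jsonl"

def log_step(df: pd.DataFrame, user: str, action: str) -> None:
    fingerprint = hashlib.sha256(
        pd.util.hash_pandas_object(df, index=True).values.tobytes()
    ).hexdigest()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "rows": len(df),
        "columns": list(df.columns),
        "data_sha256": fingerprint,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

df = pd.read_csv("loans.csv")            # hypothetical input file
log_step(df, user="analyst_1", action="imported loans.csv")
df = df.drop(columns=["gender"])         # example preparation step
log_step(df, user="analyst_1", action="dropped column 'gender'")
```

Because the log is append-only and every entry carries a data fingerprint, a reviewer or regulator can later reconstruct exactly how a data set evolved before the model ever saw it.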

Finally, to build greater visibility into data preparation and model deployment, we should look towards ML platforms that incorporate testing features, where users can run a new data set through a trained model and receive performance scores. This helps identify bias and make changes to the model accordingly.
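A back-of-the-envelope version of such a test step – assuming a scikit-learn classifier trained earlier and a hold-out DataFrame whose ‘gender’ and ‘approved’ columns are hypothetical – might score the model overall and per group, so a drop in performance for one group surfaces immediately:

```python
# Sketch of a model test step: score a trained model on a new data set, overall
# and per group of a protected attribute, so biased performance shows up at once.
# `model` (a fitted scikit-learn classifier) and the column names are assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score, roc_auc_score

def test_on_new_data(model, df: pd.DataFrame, target: str = "approved",
                     protected: str = "gender") -> None:
    X, y_true = df.drop(columns=[target]), df[target]
    y_pred = model.predict(X)
    y_prob = model.predict_proba(X)[:, 1]
    print(f"overall: accuracy={accuracy_score(y_true, y_pred):.3f}  "
          f"auc={roc_auc_score(y_true, y_prob):.3f}")
    for group, sub in df.groupby(protected):
        sub_pred = model.predict(sub.drop(columns=[target]))
        print(f"group {group!r}: accuracy={accuracy_score(sub[target], sub_pred):.3f}  "
              f"n={len(sub)}")

# Usage (assuming `model` was trained earlier and new_data.csv is a fresh hold-out set):
# test_on_new_data(model, pd.read_csv("new_data.csv"))
```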

Don’t Sacrifice Accountability for Agility

With many different model types available, selecting the best model to deploy for a task can be a real challenge. This is where many ML tools fall short – they’re fully automated, with no opportunity to review and select the most appropriate model. Deep neural network models, for example, are inherently less transparent than probabilistic methods – enabling rapid data preparation and ML model deployment, but leaving users little to no opportunity to visually inspect the data and model for issues.

An effective ML platform must be able to help identify and advise on resolving possible bias in a model during the preparation stage, and provide support through to creation – where it will visualise what the chosen model is doing and provide accuracy metrics – and then on to deployment, where it will evaluate model certainty and provide alerts when a model requires retraining.
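One way to picture the deployment-stage checks described above – a sketch under the assumption of a probabilistic scikit-learn classifier, with the 0.7 confidence threshold chosen purely for illustration – is to track how certain the model is about incoming data and raise an alert when that certainty sags:

```python
# Sketch of a deployment-time certainty monitor: if the model's average confidence
# on a recent batch of data drops below a threshold, flag it for retraining.
# `model` (fitted, exposing predict_proba), the batch and the 0.7 threshold are assumptions.
import numpy as np

def certainty_alert(model, X_batch, threshold: float = 0.7) -> bool:
    proba = model.predict_proba(X_batch)          # shape (n_samples, n_classes)
    confidence = proba.max(axis=1)                # probability of the predicted class
    mean_conf = float(np.mean(confidence))
    low_share = float(np.mean(confidence < threshold))
    print(f"mean confidence={mean_conf:.3f}, share below {threshold}: {low_share:.1%}")
    if mean_conf < threshold:
        print("ALERT: model certainty has dropped - consider retraining on recent data.")
        return True
    return False
```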

During model deployment, machine learning platforms should also extract extra features from data that are otherwise difficult to identify and help the user interpret what information the data conveys beyond the most obvious insights.
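As a toy example of what such derived features might look like – the raw columns and the ratios built from them are assumptions for illustration – a platform could surface quantities that a user would not immediately read off the raw table:

```python
# Sketch of automatic feature extraction during deployment: derive quantities
# that are hard to spot in the raw columns. All column names are illustrative.
import pandas as pd

def add_derived_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["debt_to_income"] = out["total_debt"] / out["annual_income"].clip(lower=1)
    out["utilisation"] = out["card_balance"] / out["credit_limit"].clip(lower=1)
    out["days_since_last_payment"] = (
        pd.Timestamp.today().normalize() - pd.to_datetime(out["last_payment_date"])
    ).dt.days
    return out
```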

The end goal is to put power directly into the hands of the users, enabling them to actively explore, visualise and manipulate data at each step, rather than simply delegating to an ML tool and risking the introduction of bias.

Removing the complexity of the data science procedure will help users discover and address bias faster – and better understand the expected accuracy and outcomes of deploying a particular model.

Ethical AI Advocates Needed

Creating ML platforms with built-in explainability and enhanced governance is a major first step towards promoting more ethical approaches to machine learning in financial services—but this progress can and must go further. Researchers and solution vendors must act as ML educators to inform users of the dangers of bias in machine learning and help users identify and avoid unethical practices.

Raising awareness in this manner will be vital to establishing trust for AI and ML in sensitive deployments such as financial decision-making, medical diagnoses and criminal sentencing.

Machine learning has truly transformative potential, but tightening regulations mean its ethical conduct will soon face increased scrutiny. It is predicted that a majority of G7 countries will establish dedicated associations to oversee AI and ML design by 2023.

This means prioritising open and unbiased algorithms will be a common objective across the financial sector. Organisations that employ ML solutions with built-in transparency and explainability – and are therefore able to demonstrate a thorough understanding of their decision-making process at every step – will endure scrutiny unscathed and emerge from it more trusted than ever before.
