
AI’s Big Black Box

Why Transparency is Key

Artificial Intelligence (AI) is becoming part of our everyday world, with far-reaching effects. AI projects are being developed in medicine to predict future illnesses in patients, social media platforms use algorithms to predict user behaviour, and some investment management firms, such as Bridgewater Associates in the US (www.bridgewater.com), have begun to integrate algorithmic decision-making into their business models. As these systems spread across daily life, transparency about their inner workings becomes key, especially when something goes wrong.

The dangers of unpredictable AI often appear in sci-fi narratives, where scientists are forced to destroy a machine that has become so advanced and dangerous that no expert can predict or control its behaviour. These doomsday depictions of AI allude to the much-discussed notion of the ‘black box’. Is it acceptable for developers to know what goes into a system but not what happens inside it? And at what point should the programmer (or the technology itself) be expected to explain the inner workings of a system that fails?

This issue of the black box is especially pertinent in ‘deep learning’ approaches to AI, where computers essentially program themselves, learning and adapting by finding patterns in data. Unlike traditionally coded software, the neural networks in these systems can become so deep and complex that the reasoning behind their outputs is difficult to trace or explain. Not only are these neural nets too time-consuming to code by hand, but they adapt, change and generate a complexity that can defy predictable reasoning.
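The contrast with traditional coding can be sketched in a few lines of Python. Below, a hand-written rule is fully readable, while a tiny learned model (a toy stand-in for a real neural network, trained on made-up loan-approval examples) ends up encoding the same decision only as a list of weights, with no human-readable rationale attached. The scenario and all names here are invented for illustration:

```python
# Hand-coded rule: the reasoning is explicit and auditable in the code itself.
def approve_loan_by_rule(income, debt):
    return income > 2 * debt

# "Learned" rule: a tiny linear model trained on examples (a perceptron).
# Real deep networks have millions of such weights instead of two.
def train(examples, epochs=200, lr=0.01):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (income, debt), label in examples:
            pred = 1 if w[0] * income + w[1] * debt + b > 0 else 0
            err = label - pred          # learn only from mistakes
            w[0] += lr * err * income
            w[1] += lr * err * debt
            b += lr * err
    return w, b

# Training data generated from the hand-coded rule above.
examples = [((i, d), 1 if i > 2 * d else 0)
            for i in range(1, 11) for d in range(1, 11)]
w, b = train(examples)

# The model's entire "explanation" of itself is just these numbers:
print(w, b)
```

The trained model behaves much like the readable rule, but inspecting it yields only weights, which is the black-box problem in miniature.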

Ultimately, the black box concern points to issues of accountability and transparency: who is responsible if something goes wrong? This is particularly important in areas with an advisory element, such as medical technologies, self-driving cars and stock market trading. In the case of a self-driving car, being able to explain why it drove erratically or ran someone over becomes crucial. The same is true of a stock market algorithm that produces an undesirable outcome.

But this raises further questions: to what extent is the unpredictability of AI different from not understanding what another human is thinking? And, if some AI systems evolve to become instinctual, what are the consequences? All of these questions boil down to an even bigger issue: the need for a global regulatory framework that holds developers and the machines they invent accountable for the outcomes.

In 2016, the European Union adopted new data-protection rules covering AI which include a legal right to an explanation of algorithmic decisions. However, grey areas remain around what counts as an explanation and to what extent it covers unexpected outcomes. Furthermore, whilst these laws are being put in place, some AI systems may develop so quickly that the regulatory processes lag behind.

Some developers are also building an accountability step into the design of their machines. Regina Barzilay, a professor at MIT, argues that machines and humans must collaborate, and she has programmed a system to explain itself by producing bits of text that represent a pattern it has discovered (www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/). Integrating a series of explainable steps into development enables more accountability and, hopefully, a clearer understanding of what goes on inside the system.
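A minimal sketch of such an accountability step might pair each prediction with the input fragments that most influenced it. This is an illustrative stand-in only, not the MIT system described above: the model here is a simple word-weight scorer with hand-set (not learned) weights, and all names and data are invented for the example:

```python
def predict_with_rationale(text, weights, top_k=2):
    """Return a label plus the words that contributed most to it."""
    words = text.lower().split()
    contributions = {w: weights.get(w, 0.0) for w in words}
    score = sum(contributions.values())
    label = "flagged" if score > 0 else "clear"
    # The rationale: the top_k words with the largest influence on the score.
    rationale = sorted(contributions, key=lambda w: abs(contributions[w]),
                       reverse=True)[:top_k]
    return label, rationale

# Hand-set weights standing in for what a real system would learn.
weights = {"erratic": 2.0, "swerved": 1.5, "normal": -1.0, "steady": -1.0}

label, why = predict_with_rationale(
    "the car swerved and braked in an erratic way", weights)
print(label, why)  # prints: flagged ['erratic', 'swerved']
```

The point is the interface rather than the model: every decision ships with a short, checkable ‘because’, which is what makes auditing possible.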

While all this sounds great in principle, it leads back to the question of who is responsible for the outcomes of these AI systems, especially if something goes wrong. If a self-driving car malfunctions and hits a pedestrian, what protocols should be in place to establish what went wrong and who is accountable? The programmers? The company? The manufacturers? Or the device itself? And, with so much money being poured into AI, how do we prevent big tech companies from gaining too much power and disregarding the law?

While there are no straightforward answers, what’s clear is that unregulated AI poses risks to safety, privacy (especially where personal data is concerned) and public awareness. Perhaps the best step towards minimising these risks is a combination of established collaboration between human and machine (as the MIT scientist above has done) and standardised regulations that allow AI’s decision-making to be audited, an implementation that could be driven partly by the public’s demand and right to know how and where these AI systems operate.

In the end, if we can begin to dissolve the walls of AI’s big black box, then perhaps we can move ever closer to a safer AI landscape, one that makes space for accountability and transparency around the workings of these complex machines.
