Algorithms can help fight COVID-19. But at what cost? - Haaretz.com

This past spring, as billions of people languished at home under lockdown and stared at gloomy graphs, Linda Wang and Alexander Wong, scientists at DarwinAI, a Canadian startup that works in the field of artificial intelligence, took advantage of their enforced break: In collaboration with the University of Waterloo, they helped develop a tool to detect COVID-19 infection by means of X-rays. Using a database of thousands of images of lungs, COVID-Net – as they called the open-access artificial neural network – can detect with 91 percent certainty who is ill with the virus.
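
What does such a tool look like in practice? The sketch below is not COVID-Net’s actual code – just a minimal illustration, assuming PyTorch and torchvision, of the general recipe: pass a chest X-ray through a convolutional network trained to distinguish a few diagnostic classes. The checkpoint file, image path and class list are hypothetical stand-ins.

```python
# Minimal sketch of X-ray classification with a convolutional network.
# NOT COVID-Net's actual code; the checkpoint and file names are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["normal", "pneumonia", "covid-19"]

# A generic backbone stands in for COVID-Net's custom architecture.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("xray_classifier.pt"))  # hypothetical trained weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Load one chest X-ray and ask the network for class probabilities.
image = preprocess(Image.open("patient_xray.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)[0]
print({c: round(float(p), 3) for c, p in zip(CLASSES, probs)})
```

The heavy lifting lies in the training data and the learned weights, not in the handful of lines that apply them – which is part of why such tools can appear so quickly.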

In the past, we would undoubtedly have been suspicious of, or at least surprised by, a young company (DarwinAI was established in 2018) with no connection to radiology, having devised such an ambitious tool within mere weeks. But these days, we know it can be done. Networks that draw on an analysis of visual data using a technique known as “deep learning” can, with relative flexibility, adapt themselves to decipher any type of image and provide results that often surpass those obtained by expert radiologists.

“They should stop training radiologists now,” Geoffrey Hinton, one of the fathers of deep learning and a highly opinionated scientist, asserted. “I think that if you work as a radiologist, you are like Wile E. Coyote in the cartoon,” he told The New Yorker in 2017. “You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath.”

The question of the future of radiology is important, with far-reaching implications. But there’s an even more important issue at hand: Will we ever truly be able to understand how algorithms like COVID-Net work?

Despite the knowledge that has accumulated in the past decade, a period in which the study of machine learning – that is, software built to process large amounts of statistical data and optimize decisions based on it – blossomed, much remains a mystery. Scientists know how to create the neural networks, fine-tune and control them – thanks in large measure to Prof. Hinton – but they still don’t know how exactly the networks arrive at their conclusions. This lacuna stems from their design. Computer programs from earlier generations can be likened to flowcharts: Looking at them, the reasoning that led to a particular decision can be traced relatively easily – in the same way that it’s a simple matter, say, to follow a car’s route in a city from the air.

In contrast, programs based on artificial neural networks, whose creation was inspired by the structure of the brain, are primed to execute their mission without their precise method of operation being set in advance. For example, they “train” themselves to play Go, an East Asian board game, at a superhuman level, or to identify the presence of cats in a backyard, by processing staggering amounts of information. The network will locate subtle correlations between data, link the simulated neurons to one another based on need, and make an educated guess. In many cases the result will be impressive, but it will be difficult to reconstruct the route the machine took to arrive at the result. Welcome to the black box.
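
A toy example makes the point. In the sketch below (assuming the scikit-learn library), a small neural network learns to separate two intertwined classes of points; its accuracy is easy to measure, but its “reasoning” is nothing more than matrices of learned numbers.

```python
# A toy "black box": the network performs well, but its internals are
# just weight matrices, not a traceable flowchart of rules.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)

print("accuracy:", net.score(X, y))       # the result is easy to check...
print("weights:", net.coefs_[0].shape)    # ...the "reasoning" is a 2x32 block of numbers
print(net.coefs_[0][:1].round(2))
```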

If the use of algorithms were confined to the realm of ancient board games, say, we could go on using them without knowing what lies behind each decision. But black-box algorithms, which are defined by their lack of transparency, are slowly occupying a place in the most important human arenas.

In medicine, for example, they are used not only to decipher X-rays but to analyze almost every other kind of medical data; in the future they will be relied upon to make diagnoses autonomously or semi-autonomously. In the legal field, some American states have for some time been using a program called Correctional Offender Management Profiling for Alternative Sanctions, aka COMPAS, which utilizes machine learning methods to assess the likelihood of recidivism among convicted offenders, by comparing information about them with historical information about other criminals. The “grade” these individuals receive is revealed to the court and in some cases becomes a consideration in sentencing or in a bail hearing.
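
COMPAS itself is proprietary, so the sketch below is only a hypothetical illustration of what a tool of this kind does: fit a statistical model to historical records and output a probability of reoffending. The data file and column names are invented for the example (pandas and scikit-learn assumed).

```python
# Hypothetical recidivism risk score -- NOT the actual COMPAS model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

records = pd.read_csv("historical_offenders.csv")        # hypothetical historical data
features = ["age", "prior_convictions", "offense_severity"]
X_train, X_test, y_train, y_test = train_test_split(
    records[features], records["reoffended"], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]                 # a 0-1 "grade" shown to the court
print("example risk scores:", risk[:5].round(2))
```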

Geoffrey Hinton, one of the fathers of deep learning. MARK BLINCH/Reuters

In general, networks such as Facebook, which are largely responsible for the flow of information in the world, make use of machine-learning tools in almost every step of their operation.

At the same time, defects and biases are also accumulating. A study conducted by scientists at the University of California, Berkeley, and published in the journal Science late last year reported on a medical algorithm whose assessments were skewed against Black patients as compared with white patients: A Black patient has to be in more serious condition, on average, than a white patient to receive the same risk assessment. The flaw stems from the algorithm’s use of past health-care spending as a stand-in for medical need – a proxy that blurs the line between how sick patients actually are and how many resources have historically been allocated to them.

In 2016, the acclaimed investigative journalism organization ProPublica showed that COMPAS had erred seriously in assessing the likelihood that offenders would return to crime. By checking the program’s risk scores against defendants’ actual records of reoffending, ProPublica found that it tended to rate Black defendants as far more dangerous than white defendants, even when they had committed the same crimes.
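
The kind of check ProPublica ran can itself be expressed in a few lines: compare the model’s error rates across groups. The sketch below, with a hypothetical follow-up dataset and an illustrative threshold, computes the false positive rate – the share of people rated high-risk who in fact did not reoffend – for each group.

```python
# Sketch of a group-wise error-rate audit (hypothetical data and threshold).
import pandas as pd

audit = pd.read_csv("followup_outcomes.csv")     # columns: score, reoffended, race (hypothetical)
audit["high_risk"] = audit["score"] >= 0.5       # illustrative cutoff

for group, rows in audit.groupby("race"):
    did_not_reoffend = rows[rows["reoffended"] == 0]
    fpr = did_not_reoffend["high_risk"].mean()   # labeled dangerous, but did not reoffend
    print(f"{group}: false positive rate = {fpr:.2f}")
```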

Moreover, last year researchers at Northeastern University in Boston found that Facebook’s ad-delivery algorithm is biased by race and gender – not something that advertisers had asked for. For example, women were shown more ads seeking nurses, men received more offers to work as woodworkers, and real-estate ads were delivered more to white users than to non-white users.

Territorial disputes

Indeed, in recent years, Facebook’s algorithms have frequently featured in stories about the problematic nature of social media. Last May, for example, The Wall Street Journal reported that most members of extremist political groups on Facebook had joined them at the recommendation of an algorithm. Senior officials at the company learned of this only after the fact, and even then chose not to intervene. Extremism and controversy, it turns out, help the platform thrive.

To safeguard people from the biases of machine learning, methods have been developed in recent years to reveal the workings of the cogwheels inside the black box. Spearheading this approach – known as Explainable AI, or XAI for short – are research bodies such as Duke University’s Prediction Analysis Lab, headed by Cynthia Rudin, and the U.K.-based Institute for Ethical AI and Machine Learning, under its chief scientist, Alejandro Saucedo. And, as with any buzzword, XAI appeals to entrepreneurs and investors alike. The vision: a world in which algorithms will no longer operate in the dark.

Naturally, it is not possible to explain down to the last detail a mechanism that sometimes rests on billions of tiny considerations; everyone understands that. But even a partial explanation, in broad strokes, could be very helpful. For example, an algorithm used to detect lung disease could tell its user that a diagnosis rests primarily on features within the X-ray image itself, and not on incidental details such as which hospital produced the scan (left unchecked, the algorithm might seize on any information about the circumstances in which the X-ray was made and confuse correlation with causation).
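
One widely used family of techniques produces exactly this kind of broad-strokes answer: a saliency map that highlights which parts of the input most influenced the prediction. The sketch below reuses the hypothetical X-ray classifier from earlier (a random tensor stands in for a real, preprocessed image) and illustrates the idea rather than any production XAI system.

```python
# Gradient-based saliency: which pixels most affected the predicted class?
import torch
from torchvision import models

model = models.resnet18(weights=None)                   # in practice, load the trained classifier
model.fc = torch.nn.Linear(model.fc.in_features, 3)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed X-ray
scores = model(image)
scores[0, scores.argmax()].backward()                   # gradient of the top class w.r.t. the pixels
saliency = image.grad.abs().max(dim=1).values           # per-pixel influence map
row, col = divmod(int(saliency.argmax()), 224)
print("most influential pixel:", row, col)
```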

A staff member, wearing a face mask as protection against coronavirus, looks at a robot at the venue for the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 9, 2020. ALY SONG/Reuters

Or, alternatively, when a bank decides whether or not to grant a loan, the system could make clear that the applicant’s ethnic origin played no part in the decision. This would afford a certain degree of transparency and help in anticipating biases.
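
A lender could back up such a claim with a simple report of which factors the model actually weighs, having excluded the protected attribute from the inputs in the first place. The sketch below is a hypothetical illustration (invented column names, scikit-learn assumed); note that dropping a column does not by itself rule out discrimination through correlated proxies.

```python
# Hypothetical loan model: report the factors the decision rests on.
import pandas as pd
from sklearn.linear_model import LogisticRegression

loans = pd.read_csv("loan_history.csv")                      # hypothetical data
features = ["income", "debt_ratio", "credit_history_years"]  # ethnic origin deliberately excluded
model = LogisticRegression(max_iter=1000).fit(loans[features], loans["repaid"])

# Caveat: excluding the column does not prevent proxy effects via correlated features.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```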

But this approach also has a price – and opponents. Here, too, the most vociferous dissenter is Geoffrey Hinton. In a December 2018 interview in Wired magazine, he said that the idea of obligating AI systems to “explain” how they work is a “complete disaster” and went on to elaborate: “People can’t explain how they work, for most of the things they do. When you hire somebody, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People have no idea how they do that. If you ask them to explain their decision, you are forcing them to make up a story.” Why, then, should we force the machines to emulate this human foible, Hinton asks.

He is not the first to argue that self-explanation has aspects of fictionalizing. Freud made a career out of the disparity that exists between what we think drives us and what, according to his method, actually drives us. Similarly, neuroscience sprang out of the conviction that it is impossible to discover our thought mechanism solely through self-exploration, as many philosophers have tried to do – with limited success – over the centuries. The neural networks work, Hinton asserts, and that’s all that matters. The slipups are marginal. If it ain’t broke, don’t fix it.

However, the ardor with which Hinton defends the black box’s lack of transparency goes beyond reasons of principle. It derives also from his desire to somehow preserve these algorithms within the realms of engineers and mathematicians – and not social scientists or lawmakers. In certain senses, he is their creator and he loves them despite their faults. Unconditional love for such deep-learning algorithms may now be losing popularity – though that never bothered him in any case.

From the 1980s, when the future success of neural networks was still unclear, Hinton was one of the technology’s few advocates. It wasn’t until decades later, when the ideas he promoted began to gain traction, that he became a significant figure, was appointed to a senior position in Google’s AI division, and in 2018 was awarded – along with Yann LeCun and Yoshua Bengio – the Association for Computing Machinery’s Turing Award, the highest honor in the field of computer science. Hinton and his fellow prizewinners are referred to as “the godfathers of artificial intelligence.” And when the godfather speaks, others listen – although they don’t always do what he says.

Perhaps, in fact, under the cover of the dazzling technologies and mathematical formulae, this is ultimately an intergenerational battle?

Born in London in 1947, Prof. Hinton – a Cambridge graduate and scion of a family of scientists – represents a conception according to which the purpose of machines is to serve us and they should be measured solely in terms of their performance. As during the Industrial Revolution, it is pointless to occupy ourselves with questions about the meaning of the machine’s workings, as long as it does its job.

Younger scientists, however, see something rather different in these algorithms, not having been present at the creation, as it were. From their point of view, these tools are meant to work alongside us, but not be subservient to us, as was the case in the 19th and 20th centuries. Accordingly, human standards must be applied to them.

Thus, the neural networks can be directed to help us fight epidemics, but not at the price of ethnic, racial or other inequality. AI will identify faces for us, but not at the price of even unintended discrimination. Even Google, which employs Hinton, has grasped which way the wind is blowing and now offers explainability tools to developers. It is possible that the explanation the machine ultimately provides for its decisions will not be convincing. Maybe it will only be a fairy tale, as Hinton predicts, a bedtime story. But at least we’ll know that it tried to speak our language, and from beyond the wall of code, perhaps it will extend a hand to us.
