Google Brain might save lives, but it threatens privacy at the same time. Should we take the bad with the good?

Industries: Healthcare
Trends: Big Data
  • Google Brain uses patient data to predict outcomes and help care providers
  • Was the project a huge success? We’ll soon know
  • DeepMind processed data without consent from any of the patients
  • The NHS broke privacy laws and Google apologised

For hospitals, getting patient care right is always critical. And to better assess a patient’s needs, healthcare providers can really use an accurate sense of likely outcomes. Will she need to be admitted? Will she get better soon? How long will she stay in hospital? Will she return soon for the same problem? In fact, the range of questions that bear on the allocation of scarce resources like computed tomography (CT) scans or the direct attention of physicians is huge.

In a pilot study in the US, Google Brain tested its artificial intelligence (AI) and machine learning on anonymised patient records. Drawing on patient data collected by two hospitals over 11 years, Google Brain wanted to demonstrate that a state-of-the-art system could provide the predictive analytics hospitals need to better care for their patients. Google reports astounding results, but they still need to be confirmed by third parties. Meanwhile, recent projects in the UK involving Google have privacy advocates deeply concerned. That leads us to the question of whether the risks of using big data in healthcare are really worth the rewards.

Google Brain uses patient data to predict outcomes and help care providers

According to Dave Gershgorn at Quartz, the data was provided by “two hospitals, the University of California San Francisco Medical Center (from 2012-2016) and the University of Chicago Medicine (2009-2016)”. Christina Farr, reporting for CNBC, says that these medical centres “stripped millions of patient medical records of personally identifying data and shared them with Google’s research team”. The goal was to provide Google Brain with the raw data to run predictive analyses.

But assessing this data is a lot more difficult than you might imagine. These records provided nearly 47 billion data points, reports Laurie Beaver for Business Insider. That’s an incredible amount of raw information, far beyond what human beings can handle. So Google Brain used advanced machine learning to deal with this massive volume of facts. Its AI relies on neural networks that act a bit like your brain, sifting and sorting information before passing provisional conclusions to a higher level for further analysis. By essentially mimicking the human mind, these deep learning systems actually… well… learn. And given the monumental task it was attempting, Google Brain chose to use three such neural networks, doubling down on analytical power.
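
To make that concrete, here’s a minimal, purely illustrative sketch of the ensemble idea: three small neural networks trained on the same data, with their predicted probabilities averaged into a single outcome estimate. This is not Google Brain’s actual code; the features and labels below are synthetic stand-ins for real patient records.

```python
# Hypothetical sketch of a three-network ensemble for outcome prediction.
# All data here is synthetic; a real system would train on millions of
# de-identified patient records, not random numbers.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))  # stand-in for numeric patient features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three networks of different shapes, echoing the "three neural networks".
models = [
    MLPClassifier(hidden_layer_sizes=h, max_iter=500, random_state=0).fit(X_train, y_train)
    for h in [(32,), (64,), (32, 16)]
]

# Average the three probability estimates into one combined prediction.
probs = np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
accuracy = ((probs > 0.5).astype(int) == y_test).mean()
print(f"ensemble accuracy on synthetic data: {accuracy:.2f}")
```

Averaging the outputs of several models is a standard way to squeeze out extra accuracy: each network makes slightly different errors, and the combination tends to be more reliable than any single one.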

In this case, to sort what matters from what doesn’t, these machine learning systems assessed the data against known outcomes, working out which details in patient records carry predictive weight. But this introduced a second layer of complexity. Many of these patient files are handwritten notes, and since every doctor and nurse has different handwriting and a different note-taking style, Google Brain’s AI had to learn to decode script, identify the words that matter, and link them to clinical events. “After analyzing thousands of patients”, Gershgorn writes, “the system identified which words and events associated closest with outcomes, and learned to pay less attention to what it determined to be extraneous data”.
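
The behaviour Gershgorn describes, weighting informative words up and routine language down, resembles what machine learning researchers call attention. Below is a toy sketch of that idea only: the relevance scores are invented by hand for illustration, whereas a real system would learn them from millions of notes.

```python
# Toy illustration of attention-style word weighting. The relevance
# scores below are invented for this example; a trained model would
# learn them from data rather than have them hand-assigned.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical relevance a model might assign to tokens in a clinical note.
relevance = {"metastatic": 2.5, "effusion": 1.8, "admitted": 1.2,
             "stable": -0.5, "patient": -2.0, "the": -3.0}

note = "the patient admitted stable metastatic effusion".split()
weights = softmax(np.array([relevance.get(tok, 0.0) for tok in note]))

# High-signal words end up dominating; filler words fade into the background.
for tok, w in sorted(zip(note, weights), key=lambda p: -p[1]):
    print(f"{tok:12s} weight={w:.2f}")
```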

Was the project a huge success? We’ll soon know

In a paper that has yet to be peer-reviewed, Google reports an enormous success. As one account sums it up, “While the results have not been independently validated, Google claims vast improvements over traditional models used today for predicting medical outcomes. Its biggest claim is the ability to predict patient deaths 24-48 hours before current methods, which could allow time for doctors to administer life-saving procedures.” If this result is real, that’s huge. But don’t start your standing ovation just yet.

If you paused for a second when you read that the University of California San Francisco Medical Center and the University of Chicago Medicine shared sensitive patient information, we don’t blame you. This patient data was stripped of personal information, but was it the hospitals’ to share? There are real reasons for concern here, and in the UK, Google and its subsidiary, DeepMind Technologies Ltd., are now embroiled in a controversy centred on this technology and their approach.

DeepMind processed data without consent from any of the patients

In 2015, a deal was struck between Google and the Royal Free London NHS Foundation Trust to develop an app called Streams that would help doctors treat acute kidney injury (AKI), a health problem that kills as many as 40,000 people a year in the UK alone. But in a peer-reviewed paper, published in the journal Health and Technology, Julia Powles and Hal Hodson note that this “involved the transfer of identifiable patient records across the entire Trust, without explicit consent, for the purpose of developing a clinical alert app for kidney injury”. To understand the scope of the controversy, it’s important to realise that Royal Free is “one of the largest healthcare providers in Britain’s publicly funded National Health Service”, and that this data comprised the records of millions of patients in the UK.

And as these researchers caution, the legal restrictions on what Google could do with this data were less than robust: these limitations “appear to have been given little to no legal foundation in Google and DeepMind’s dealings with Royal Free”, they warn. Worse still, patients were never asked to consent to this transfer. As Powles and Hodson observe, “The data that DeepMind processed under the Royal Free project was transferred to it without obtaining explicit consent from — or even giving any notice to — any of the patients in the dataset… [that] included every patient admission, discharge and transfer within constituent hospitals of Royal Free over a more than five-year period (dating back to 2010).”

You don’t need to be a privacy watchdog to be worried by that, and Powles and Hodson make it abundantly clear that these weren’t innocent ‘mistakes’. Millions of records, with identities intact, were transferred by the NHS to Google. What’s more, the scope of the project uncovered by freedom of information requests shows that what Google and the NHS planned went well beyond their public statements. Perhaps most troubling, however, are the unanswered questions. As Powles and Hodson write, “Why DeepMind, an artificial intelligence company wholly owned by data mining and advertising giant Google, was a good choice to build an app that functions primarily as a data-integrating user interface, has never been adequately explained by either DeepMind or Royal Free.”

The NHS broke privacy laws and Google apologised

When the paper was made public, neither the NHS nor Google’s DeepMind admitted any wrongdoing. As BBC News reports, “At the time, Google DeepMind said the report had ‘major errors’ that misrepresented the way it and the Royal Free had used data.” But after the UK’s Information Commissioner’s Office (ICO) ruled that the Royal Free had failed to comply with data protection law, Dominic King, DeepMind’s clinical lead on health, and Mustafa Suleyman, its co-founder, struck a more contrite tone. “We underestimated the complexity of the NHS and of the rules around patient data”, they said in a statement quoted by the BBC, “as well as the potential fears about a well-known tech company working in health… We got that wrong, and we need to do better”. For its part, the Royal Free indicated that it accepted “the ICO’s findings and have already made good progress to address the areas where they have concerns”. But is that enough to satisfy those concerned about data security and privacy?

DeepMind says that it’s now committed to including patients and the public in the process. And as Suleyman sees it, “This is an amazing opportunity for us to prove what we have always believed: that if we get the ethics, accountability and engagement right, then new technology systems can have an incredible positive social impact.” Perhaps this is a historic moment for both patient care and privacy. Time will tell.

If Google’s success is confirmed, it’s a major step forward for patient care, for sure. And shorter stays in hospital, better treatment, and longer, healthier lives are something we all want. But without robust safeguards in place, advances in AI healthcare may threaten our privacy as they save lives. And although Google’s corporate motto is “Don’t be evil”, trusting business to self-regulate is probably unwise.
