It came as no surprise.
After writing about the growing challenges and responsibilities facing medical professionals and patients last week, I happened upon two posts about the burnout rates for the professionals who are charged with promoting health and healing in the rest of us.
The first was a PBS NewsHour segment about the extent of the problem and, more recently, some possible cures. It cited studies from recent years finding that doctors commit suicide at twice the rate of the general population, that 86 percent of the nurses at one hospital met the criteria for burnout syndrome, that 22 percent had symptoms of post-traumatic stress disorder, and that the PTSD numbers for critical care nurses were comparable to those of war veterans returning from Afghanistan and Iraq. The reporter described what is happening as “a public health crisis.” In a small ray of hope, providers have also begun to create outlets, like arts programs, so healthcare workers can “process some of the trauma” they are experiencing on a daily basis and begin to recover.
The second post, in Fast Company, discussed the most stressful jobs being done by women today, including the category “nurses, psychiatric healthcare providers and home health aides.” It noted that registered nurses are 14 to 16 percent more likely to have poor cardiovascular health than the rest of the workforce, “a surprising result” because the job is so physically active and because nurses are more knowledgeable about the risk factors for cardiovascular disease than the workforce in general.
Several of you who work in the health care industry wrote to me this week about your experiences at work, which (sadly) mirror these discouraging reports.
The other follow-up development relates to the data being gathered from patients by the health care industry. Earlier this week, the Wall Street Journal reported that Google had struck “a secret deal” with Ascension, one of the nation’s largest hospital systems, to gather and analyze patient data, including lab results, doctor diagnoses and hospital records. Called Project Nightingale by Google and “Project Nightmare” by others, the data extraction and analysis “amounts to a complete health history, including patient names and dates of birth.” Having all of our medical information instantly available for analysis in one place is clearly a game changer.
The first alarm bells sounded about Project Nightingale involved the privacy of patient data. (Indeed, the day after its initial report, the Journal reported that the government had launched an investigation into Google’s medical data gathering on the basis of these concerns.) Among the privacy-related questions: will access to a patient’s data be restricted to practitioners who are involved in improving that patient’s outcomes? If this data can be used by others, how will it be used and how is the hospital system ensuring that those uses are consistent with that provider’s privacy policies? The governing statute, the Health Insurance Portability and Accountability Act of 1996, provides only the loosest of restrictions today. (Hospitals can share data with business partners without telling patients as long as the information is used “only to help the covered entity carry out its health care functions.”)
On the positive side, the aggregation of patient data can facilitate more accurate diagnoses and more effective patient treatment.
Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning, that zeroes in on individual patients to suggest changes to their care.
More troubling, given medicine’s continued drift from “profession” to “business,” is the prospect that providers will realize more profits from their patients by prescribing more medications, tests and procedures. How can patients distinguish between what they truly need to promote their healing and what is profit-making by the health care provider? As the Journal story also reports:
Ascension, the second-largest health system in the U.S., aims in part to improve patient care. It also hopes to mine data to identify additional tests that could be necessary or other ways in which the system could generate more revenue from patients, documents show.
How will patients be protected from unnecessary interventions and expense, or enabled, as they are not today, by industry reporting about medical outcomes to protect themselves? As I argued last week, the ethical responsibilities of everyone in healthcare, including patients, are shifting in real time.
Earlier this year, I posted (here, here and here) on a similar Google initiative regarding smart cities. In places like Toronto, the company is helping governments gather and “crunch” data that will allow their cities to operate “smarter” and deliver greater benefits to their citizens from the efficiencies that are achieved. As with Project Nightingale, there are privacy concerns that Google is attempting to address. But there are also key differences between this tech giant’s plans for monetizing citizen data in smart cities and its plans for monetizing patient data in the medical system.
In healthcare, your most personal information is being taken and used. This data is far more vital to your personal integrity and survival than information about your local traffic patterns or energy usage.
Moreover, in smart cities there are governments and long-established regulatory bodies that can channel citizen concerns back to government and its tech consultants, like Google. Because these interfaces are largely absent in health care, monitoring and enforcement fall to individual patients or hospital-sponsored patients’ rights committees. In other words, if you (as a patient) aren’t “minding the store,” almost no one will be doing so on your behalf.
To this sort of concern, Google responds, early and often: “Trust us. We’ve got your interests at heart.” But there are many reasons to be skeptical. Another Fast Company article, posted yesterday, documented (with a series of links) some of Google’s recent history of mishandling user data.
Google has gotten in trouble with European lawmakers for failing to disclose how it collects data and with U.S. regulators for sucking up information on children and then advertising to them. The company has exposed the data of some 52 million users thanks to a bug in its Google+ API, a platform that has since been shut down. Even in the field of health, it has already made missteps. In 2017, the U.K.’s Information Commissioner’s Office found the way patient data was shared between the Royal Free Hospital of London and [Google affiliate] DeepMind for a health project to be unlawful. The app involved…has since controversially been moved under the Google Health umbrella. More recently, a lawsuit accused Google, the University of Chicago Medical Center, and the University of Chicago of gross misconduct in handling patient records.
Much of what happens from here depends on trust.
Will health care providers, which suddenly hold the profit-making potential of big data, protect us as patients or see us only as revenue generators?
Can these providers “master the learning curve” quickly enough to prevent sophisticated consultants like Google from exploiting us, or will the fox effectively be guarding the henhouse going forward?
What will Google and the other data-gatherers do to recover trust that seems to be damaged almost daily wherever their revenues depend upon selling our data to advertisers and others who want to influence us?
Is Google’s business model simply incompatible with the business model that is evolving today in health care?
As the future of medicine gets debated, we all have a say in the matter.
This post was adapted from my November 17, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.