David Griesing | Work Life Reward Author | Philadelphia


Good Work Needs a Cup That’s At Least Half Full

January 12, 2020 By David Griesing

A counter-narrative I kept hearing before the New Year was that everything’s “so much better” than I think it is.

I’m fairly certain that’s not true, but my cup is still “half full” (and probably a little more than that) as I start another year.

Since at least the start of the Great Recession in 2008, and maybe as far back as 9/11 and Hurricane Katrina, I’ve been see-sawing between pessimism and optimism. On the downside have been fewer haves, more have-nots (and less opportunity in the country I grew up in), as well as the political and environmental challenges to the ways that we live and work. On the upside, these have also been years of extraordinary innovation, and I’ve gotten corresponding lifts from what our new smartphones, social networks, access to the world’s information, and gathering of “big” data (to explain just about everything) have promised.

As some promises were broken, others were kept. Ten years ago, it would have been cheaper to build new coal- or natural gas-based power generation than wind or solar, but not today. (It’s capitalism’s cost efficiencies, not just government regulations, that are closing coal-fired power plants.) At the same time, more and better data tells us that people around the world are healthier and living longer, and that more of them are escaping dehumanizing poverty. This forward movement only seems to stall when the costs to the environment of more human development are factored in, so I’m back on the see-saw again.

Since any good lawyer can marshal facts to prove the case for either pessimism or optimism, why am I so certain that I’m in positive territory? One part of my answer is self-serving, another part is based on experience, and still another comes from trying to register data points beyond the next alarming headline.

For much of the past 10 years—and for several stretches before that—I’ve been my own boss, which means (among other things) that I have to create my own momentum every day. Because a pessimistic outlook kills my drive, I look for the good news even when I’m overwhelmed by the bad, and I can usually find it lurking in plain sight: steady instead of frantic, modest instead of boastful, less newsworthy but hardly non-existent.

This is more than a mind-game to get me to work every day. Optimism usually has the edge because the good news drowns out enough of the bad to settle me down somewhere above the tipping point.

I’m also helped by my recent experiences.

For example, I visited Baltimore just before Christmas. My home base of Philadelphia is the poorest of America’s 10 largest cities, but even compared with some of the sad neighborhoods that splay out around me here, Baltimore came as a shock.

Because taking public transportation is always a quick read on people and place, we dove right into the buses and trains after we arrived, using them to get to a wedding Emily had to attend and to some side destinations that we had in mind together. I always get lost in a new city’s transit system, quickly needing “the kindness of strangers” to find my way, and that weekend we discovered some of what Baltimore is like today beyond its first impressions.

Strangers asking locals for help on the street are always vulnerable. But the distance between you is reduced by your need, as well as when your eyes meet, when a local demonstrates his or her mastery of a bus color or route, and when you express your gratitude for the help you’ve been given. “Strangeness” shrinks further as such encounters multiply. This place that’s home to them but new to you becomes more familiar as you’re invited in by their hospitality.

What appeared to be the extreme poverty of Baltimore’s public transportation riders was completely forgotten in the generosity that these men and women kept on demonstrating as we learned our way around on those cold, damp and gray December days. Among other places, their aid got us to the City’s art museum and to vivid paintings by Matisse that most of them may never have seen. But somehow “the closeness of home” in Matisse’s colors and forms was a perfect embodiment of the hospitality we’d received as we made our way in their direction.

A couple of days in Baltimore reminded me that a bigger story in America than any news story continues to be the decency and generosity of its people, and how easy it still is to be welcomed into a stranger’s home.

My cup is more filled than empty for another reason too. As Matt Ridley echoed in an essay a couple of weeks ago, “good news is no news” at all, particularly when one’s fight-or-flight instincts are preoccupied by the next uptick in the threat level. Fill in the blanks with whatever calamity is worrying you most today, and Ridley falls back on his data to counter your sense of impending doom:

How can I possibly say that things are getting better, given all that? The answer is: because bad things happen while the world still gets better.

He doesn’t mean that there aren’t storm clouds, even some existential ones. Only, I think, that there are more reserves to weather them—and more forward momentum—than we’re able to recognize when our fields of vision are obscured by our fears.
 
For example, for those who argue (like me, sometimes) that we’re just “using the world up” and leaving nothing for future generations, Ridley refuses to let us lose sight of either our gains or our possibilities.  He argues that we are also producing more economic growth today with fewer of the world’s resources than ever before, that is, with less water, less metal, less land, less of almost everything we once consumed. The situation in Britain (where Ridley is based) and in other “developed countries” does not reflect what’s happening everywhere else, but it’s not irrelevant either.

  • The quantity of all resources consumed per person in Britain (domestic extraction of biomass, metals, minerals and fossil fuels, plus imports minus exports) fell by a third between 2000 and 2017… That’s a faster decline than the increase in the number of people, so it means fewer resources are being consumed overall (see the quick arithmetic after this list);
  • Britain is using 10% less energy today than it was in 1970, even though its population is 20% larger;
  • In the past twenty years, room-size computers have been replaced by smartphones, with formerly standalone calendars, flashlights, maps, radios, CD players, watches and newspapers thrown in for good measure;
  • Widely used LED light bulbs consume a quarter of the electricity that incandescent bulbs do for the same light; and
  • The productivity of agriculture is rising so fast that human needs can be supplied with a shrinking amount of land (although, I’d add, the environmental costs of fertilizers, genetically modified seeds and pesticides must be factored in as well).
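
A quick back-of-the-envelope check of that first bullet, assuming (roughly, from census figures) that Britain’s population grew by about 12% between 2000 and 2017: total consumption is per-person use times population, so a one-third per-person decline leaves 2/3 × 1.12 ≈ 0.75 of the old total. Overall consumption falls by about a quarter even after the additional people are counted.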

So my take on Ridley’s data-fueled sunshine is this: Yes, too many of us are still wallowing in consumption and heedless of the consequences, but there are also templates and practices that we’ve already put in place and can build upon—enough human ingenuity and positive momentum—that we’re not running on empty into the future, but instead have a tank that’s maybe, hopefully, a little more than half full.  
 
Enough for cautious optimism. 
 
Enough to preserve our impetus to act on the sense of urgency that remains.
 
Ridley’s argument builds on the data-driven encouragements of his 2010 book The Rational Optimist (you can read a review of it here) as well as on more recent findings by MIT research scientist Andrew McAfee in his excellent More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources—and What Happens Next (2019).
 
(If you’re interested in delving deeper, here’s a link with an overview of McAfee’s More from Less argument that “capitalism, tech progress, public awareness, and responsive government [these last two aimed at halting environmental degradation in particular] are the four horsemen of the optimist.” I thoroughly enjoyed McAfee’s storytelling in a recent interview on Innovation Hub, and you might appreciate his live take on our future possibilities too.)
 
Since the new work-year really begins tomorrow, I wanted to make one more New Year’s argument for what can be accomplished in our jobs as tech users, citizens and custodians of a fragile planet, as long as we have enough hope.

It’s not just a theoretical hope that we’ll need, but one that’s confirmed whenever you ground your aims in other people at, say, a Baltimore bus stop. It’s the hope you feel when you have “the full-body experience” of dissenting while trying “to raise the consciousness level” of everyone who’s watching or listening, as I argued here last week in Finding a Better Home Through Action.
 
There are ways to be at home (alone with your work) and not “let the world turn in on you,” just as there are ways to be at home with the life force of others, either where you live or in a strange city.
 
It’s a matter of inhabiting the jobs you’re trying to do by finding “just enough hope.”

This post was adapted from my January 5, 2020 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Daily Preparation Tagged With: Andrew McAfee, drive, hope, Matt Ridley, motivation, optimism

Finding the Will to Protect Our Humanity

December 16, 2019 By David Griesing

I want to share with you a short, jarring essay I read in the New York Times this week, but first a little background. 
 
For some time now, I’ve been worrying about how tech companies (and the technologies they employ) harm us when they exploit the very qualities that make us human, like our curiosity and pleasure-seeking. Of course, one of the outrages here is that companies like Google and Facebook are also monetizing our data while they’re addicting us to their platforms. But it’s the addiction end of this unfortunate deal (and not the property we’re giving away) that bothers me most, because it cuts so close to the bone. When they exploit us, these companies are reducing our autonomy, or the freedom to act that each of us embodies.
 
Today, it’s advertising dollars from our clicking on their ads, but tomorrow it’s mind control or distraction addiction: the alternate (and equally terrible) futures that George Orwell and Aldous Huxley were worried about 80 years ago, as depicted in the cartoon essay I shared with you a couple of weeks ago.
 
In “These Tech Platforms Threaten Our Freedom,” a post from exactly a year ago, I tried to argue that the price for exchanging our personal data for “free” search engines, social networks and home deliveries is giving up more and more control over our thoughts and willpower. Instead of responding “mindlessly” to tech company come-ons, we could pause, close our eyes, and re-think our knee-jerk reactions before clicking, scrolling, buying and losing track of what we should really want. 
 
But is this mind-check even close to enough?
 
After considering the addictive properties of on-line games (particularly for adolescent boys) in a post last March, my reply was a pretty emphatic “No!” Games like Fortnite are using the behavioral information they siphon from young players to reduce their ability to exit the game and start eating, sleeping, doing homework, going outside or interacting (live and in person) with friends and family.
 
But until this week, I never thought that maybe our human brains aren’t wired to resist the distracting, addicting and autonomy-sapping power of these technologies. 
 
Maybe we’re at the tipping point where our “fight or flight” instincts are finally over-matched.
 
Maybe we are already inhabiting Orwell’s and Huxley’s science fiction. 
 
(Like with global warming, I guess I still believed that there was time for us to avoid technology’s harshest consequences.)
 
When I read Tristan Harris’s essay “Our Brains Are No Match for Our Technology” this week, I wanted to know the science, instead of the science fiction, behind its title. But Harris begins with more of a conclusion than a proof, quoting one of the late 20th Century’s most creative minds, Edward O. Wilson. When asked a decade ago whether the human race would be able to overcome the crises that will confront us over the next hundred years, Wilson said:

Yes, if we are honest and smart. [But] the real problem of humanity is [that] we have Paleolithic emotions, medieval institutions and godlike technology.

Somehow, we have to find a way to reduce this three-part dissonance, Harris argues. But in the meantime, we need to acknowledge that “the natural capacities of our brains are being overwhelmed” by technologies like smartphones and social networks.

Even if we could solve the data privacy problem, these technologies would still reduce humanity to distraction by encouraging our self-centered pleasures and stoking our fears. Echoing Huxley in Brave New World, Harris argues that “[o]ur addiction to social validation and bursts of ‘likes’ would continue to destroy our attention spans.” Echoing Orwell in 1984, Harris is equally convinced that “[c]ontent algorithms would continue to drive us down rabbit holes toward extremism and conspiracy theories.”

While technology’s distractions reduce our ability to act as autonomous beings, its impact on our primitive brains also “compromises our ability to take collective action” with others.

[O]ur Paleolithic brains aren’t built for omniscient awareness of the world’s suffering. Our online news feeds aggregate all the world’s pain and cruelty, dragging our brains into a kind of learned helplessness. Technology that provides us with near complete knowledge without a commensurate level of agency isn’t humane….Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges….The attention [or distraction] economy has turned us into a civilization maladapted for its own survival.

Harris argues that we’re overwhelmed by 24/7 genocide, oppression, environmental catastrophe and political chaos; we feel “helpless” in the face of the overload; and our technology leaves us high and dry instead of providing us with the means (or the “agency”) to feel that we could ever make a difference.
 
Harris’s essay describes technology’s assault on our autonomy—on our free will to act—but he never describes or provides scientific support for why our brain wiring is unable to resist that assault in the first place. It left me wondering: are all humans susceptible to distraction and manipulation from online technologies or just some of us, to some extent, some of the time? 
 
Harris heads an organization called the Center for Humane Technology, but its website (“Our mission is to reverse human downgrading by realigning technology with our humanity”) only scratches the surface of that question.
 
For example, it links to a University of Chicago study involving the distraction that’s caused by smartphones we carry with us, even when they’re turned off. These particular researchers theorized that having these devices nearby “can reduce cognitive capacity by taxing the attentional resources that reside at the core of both working memory and fluid intelligence.”  In other words, we’re so preoccupied when our smartphones are around that our brain’s ability to process information is reduced. 
 
I couldn’t find additional research on the site, but I’m certain there was a broad body of knowledge fueling Edward O. Wilson’s concern, ten years ago, about the misalignment of our emotions, institutions and technology. It’s the state of today’s knowledge that could justify Harris’s alarm about what is happening when “our Paleolithic brains” confront “our godlike technologies,” and I’m sure he’s familiar with these findings. But that research needs to be mustered and conclusions drawn from it so we can understand, as an impacted community, the risks that “our brains” actually face, and then determine together how to protect ourselves from them.

To enable us to reach this capable place, science needs to rally (as it did in an open letter about artificial intelligence and has been doing on a daily basis to confront global warming) and make its best case about technology’s assault on human autonomy. 
 
If our civilization is truly “maladapted to its own survival,” we need to find our “agency” now before any more of it is lost. But we can only move beyond resignation when our sense of urgency arises from a well-understood (and much chewed-upon) base of knowledge. 

This post was adapted from my December 15, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Daily Preparation Tagged With: agency, Aldous Huxley, autonomy, distraction, free will, George Orwell, human tech, humane technology, instincts, on-line addiction, technology, Tristan Harris

The Good Work of Getting What We Need As Patients

December 2, 2019 By David Griesing

Since my recent posts here and here about work in healthcare—discussing burnout among health professionals, concerns about misuse of patient data, and questions about who is policing our rapidly changing health system—I’ve continued to follow developments in the field.
 
Over the past few weeks, some of you have also shared your troubled reactions about how work in healthcare has been evolving.
 
The net of these developments is that while there are grounds for alarm about the uses of our health data, its proliferation presents some extraordinary opportunities too. Concepts like “precision medicine” become more realistic as the amount and quality of the data improves. More and better data will also help us to live longer and healthier lives. On the other hand, whether AI and other data-related technologies can enable us to improve the quality of the work experience for millions of healthcare professionals is a stubbornly open question.
 
In this last healthcare-related post for a while, I offer two practical rules of thumb that might give us a greater sense of control over how our healthcare data is being used, as well as a couple of ways in which more and better health-related information is already producing better patient outcomes.
 
The good work of getting the healthcare that we need as patients (both for ourselves and for others that we’re caring for) requires healthy doses of optimism as well as pessimism, together with understanding as much as we can about when excitement or alarm are warranted.

Two Rules of Thumb To Inhibit Misuse of Our Medical Data

The first rule of thumb involves insisting upon as much transparency as possible around the uses of our medical information. That includes knowing who is using it (beyond the healthcare provider) and minimizing the risks of anyone’s misuse of it.

Unfortunately, more of this burden falls on patients today. As health systems increasingly look to their bottom lines, they may be less incentivized to protect our personal data streams. And even when our interests are aligned, doctors and hospitals may not be able to protect our data adequately. As I wondered here a couple of weeks ago: “Can these providers ‘master the learning curve’ [of big data-related technologies] quickly enough to prevent sophisticated consultants like Google from exploiting us, or will the fox effectively be running the chicken coop going forward?”

An article last weekend in the Wall Street Journal called “Your Medical Data Isn’t As Safe As You Think It Is” raised a couple of additional issues. As patients, we may be lulled into complacency by the fact that much of our data is rendered “anonymous” (or stripped of our personal identifiers) before it is shared in big databases. But as this article describes at length, “de-identified” data in the hands of one of the tech companies can easily be “triangulated” with other data they already have on you to track your medical information back to you. That means they remain able to target you personally in ways you can imagine and some you cannot.
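
To make “triangulation” concrete, here is a minimal sketch of a so-called linkage attack, in Python, with entirely hypothetical names and records: a few quasi-identifiers that survive de-identification (zip code, birth year, sex) are matched against an auxiliary dataset a company might already hold.

    # De-identified medical records: names stripped, quasi-identifiers intact.
    # (All records here are hypothetical, for illustration only.)
    deidentified_records = [
        {"zip": "19103", "birth_year": 1972, "sex": "M", "diagnosis": "type 2 diabetes"},
        {"zip": "21201", "birth_year": 1988, "sex": "F", "diagnosis": "asthma"},
    ]

    # Auxiliary data a platform might hold from ad profiles or public records.
    auxiliary_profiles = [
        {"name": "J. Doe", "zip": "19103", "birth_year": 1972, "sex": "M"},
    ]

    def reidentify(medical, profiles):
        """Link any records that agree on every quasi-identifier."""
        quasi_identifiers = ("zip", "birth_year", "sex")
        return [
            (person["name"], record["diagnosis"])
            for record in medical
            for person in profiles
            if all(record[k] == person[k] for k in quasi_identifiers)
        ]

    print(reidentify(deidentified_records, auxiliary_profiles))
    # -> [('J. Doe', 'type 2 diabetes')]

The more data points an aggregator already holds, the more of these matches become unique, which is why “anonymous” is a weaker promise than it sounds.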

Moreover, even if it remains anonymous, your medical data “in a stranger’s hands” may still come back to haunt you. As one expert in data sharing observed, companies that monetize personal data currently provide very little information about their operations. That means we know some of the risks to us but are in the dark about others. Among the known risks of data dispersal: you may suddenly find yourself paying higher health-related insurance premiums or be barred from obtaining any coverage at all:

Google will be in a good position to start selling actuarial tables to insurance companies—like predictions on when a white male in his 40s with certain characteristics might be likely to get sick and expensive. When it comes to life and disability insurance, antidiscrimination laws are weak, he says. ‘That’s what creates the risk of having one entity having a really godlike view of you as a person that can use it against you in ways you wouldn’t even know.’

Our first rule of thumb as customers in the health system is to insist upon transparency around how our providers are sharing our medical information, along with the right to prevent it from being shared if we are concerned about how it will be used or who will be using it.
 
The second rule of thumb has always existed in healthcare, but may be more important now than ever. You should always be asking: is my medical information going to be used in a way that’s good for me?  If it’s being used solely to maximize Google’s revenues, the answer is clearly “No.” But if your information is headed for a health researcher’s big data set, you should ask some additional questions: “Was someone like me considered as the study was being constructed so the study’s results are likely to be relevant to me?”  “Will I be updated on the findings so my ongoing treatment can benefit from them?” (More questions about informed consent before sharing your medical data were set forth in another article this past week.) 

Of course, understanding “the benefits to you beforehand” can also help you determine whether a test, drug or treatment program is really necessary, that is, if it’s possible to assess the pros and cons with your doctor in the limited time that you have before he or she orders it.
 
With medical practitioners becoming profit (or loss) centers for health systems that operate more like businesses, the good work of protecting yourself and your loved ones from misuse of your data requires both attention and vigilance at a time when you’re likely to be pre-occupied by a range of other issues.

More and Better Data Is a Cause for Excitement Too

There is an outfit called Singularity University that holds an annual conference with speakers who discuss recent innovations in a range of fields. Its staff also posts weekly about the most exciting developments in technology on a platform called Singularity Hub. One of its recent posts and one of the speakers at its conference in September highlight why more and better medical data is also a cause for excitement.
 
To understand the promise of today’s medical data gathering, it helps to recall what medical information looked like until very recently. Most patient information stayed in medical offices and was never shared with anyone. When groups of patients were studied, the research results varied widely in quality and were not always reconciled with similar patient studies. Medicine advanced through peer reviewed papers and debates over relatively small datasets in scholarly journals. Big data is upending that system today.
 
For us as patients, the most exciting development is that more high quality data will give us greater control over our own health and longevity. This plays out in (at least) two ways.
 
In the first instance, big data will give each of us “better baselines” than we have today about our current health and future prospects. According to the Singularity Hub post, companies as well as government agencies are already involved in large-scale projects to:

measure baseline physiological factors from thousands of people of different ages, races, genders, and socio-economic backgrounds. The goal is to slowly build a database that paints a comprehensive picture of what a healthy person looks like for a given demographic…These baselines can then be used to develop more personalized treatments, based on a particular patient.

Although it sounds like science fiction, the goal is essentially “to build a digital twin of every patient,” using it in simulations to optimize diagnoses, prevention and treatments. It is one way in which we will have personalized treatment plans that are grounded in far more accurate baseline information than has ever been available before.
 
The second breakthrough will involve changes in what we measure, moving organized medicine from treatment of our illnesses to avoidance of most illnesses altogether and the greater longevity that comes with improved health. As these developments play out, it could become commonplace for more of us to live well beyond a hundred years.
 
At Singularity University’s conference two months ago, Dr. David Karow spoke about the data we should be collecting today to treat a broad spectrum of medical problems in their early stages and increase our life expectancy. He argues that his start-up, Human Longevity Inc., has a role to play in that future.
 
Four years ago, Karow conducted a trial involving nearly 1,200 presumably healthy individuals. In the course of giving them comprehensive medical checkups, he utilized several cutting-edge diagnostic technologies, including whole-genome and microbiome sequencing, various biochemical measurements and advanced imaging. By analyzing the data, his team found a surprisingly large number of early-stage tumors, brain aneurysms and cases of heart disease that could be treated before they produced any lasting consequences. In another 14% of the trial participants, significant, previously undetected conditions that required immediate treatment were discovered.
 
Karow’s argument is that we’re “not measuring what matters” today and that we should be “hacking longevity” with more pre-symptomatic diagnoses. For example, if testing indicates that you have the risk factors for developing dementia, you can minimize at least some of those risks now “because a third of the risks are modifiable.”
 
Every start-up needs its evangelists, and Karow is selling “a fountain of youth” that “starts with a fountain of data.” This kind of personal data gathering is expensive today and not widely available, but it gestures towards a future where this sort of “deep testing” may be far more affordable and commonplace.
 
We need these promises of more personalized and preventative medicine—the hope of a better future—to have the stamina to confront the current risks of our medical data being monetized and misused long before we ever get there. As with so many other things, we need to hold optimism in one hand, pessimism in the other, and the ability to shuttle between them.

This post was adapted from my December 1, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: baseline health measures, big data, control of your data, data, ethics, health and longevity, health care industry, healthcare, misuse of patient data, pre-symptomatic diagnoses, work, work of being a patient

An Unhappy Healthcare System

November 19, 2019 By David Griesing

It came as no surprise. 

After writing about the growing challenges and responsibilities facing medical professionals and patients last week, I happened upon two posts about the burnout rates for the professionals who are charged with promoting health and healing in the rest of us.

The first was a PBS NewsHour segment about the extent of the problem and, more recently, some possible cures. It cited studies from recent years showing that doctors commit suicide at twice the rate of the general population, that 86 percent of the nurses at one hospital met the criteria for burnout syndrome, that 22 percent had symptoms of post-traumatic stress disorder, and that the PTSD numbers for critical-care nurses were comparable to those of war veterans returning from Afghanistan and Iraq. The reporter described what is happening as “a public health crisis.” In a small ray of hope, providers have also begun to create outlets—like arts programs—so healthcare workers can “process some of the trauma” they are experiencing on a daily basis and begin to recover.

The second post, in Fast Company, discussed the most stressful jobs that are being done by women today, including the category “nurses, psychiatric healthcare providers and home health aides.” It noted that registered nurses are 14-16% more likely to have poor cardiovascular health than the rest of the workforce, “a surprising result” because the job is so physically active and nurses are more knowledgeable about the risk factors for cardiovascular disease than the workforce in general.

Several of you who work in the health care industry wrote to me this week about your experiences at work, which (sadly) mirror these discouraging reports.

The other follow-up development relates to the data that is being gathered from patients by the health care industry. Earlier this week, the Wall Street Journal reported that Google had struck “a secret deal” with Ascension, one of the nation’s largest hospital systems, to gather and analyze patient data including lab results, doctor diagnoses and hospital records. Called Project Nightingale by Google and “Project Nightmare” by others, the data extraction and analysis “amounts to a complete health history, including patient names and dates of birth.” Having all of our medical information instantly available for analysis in one place is clearly a game changer.

The first alarm bells over Project Nightingale involved the privacy of patient data. (Indeed, the day after its initial report, the Journal reported that the government had launched an investigation into Google’s medical data gathering on the basis of these concerns.) Among the privacy-related questions: will access to a patient’s data be restricted to practitioners who are involved in improving that patient’s outcomes? If this data can be used by others, how will it be used, and how is the hospital system ensuring that those uses are consistent with that provider’s privacy policies? The governing statute, the Health Insurance Portability and Accountability Act of 1996, provides only the loosest of restrictions today. (Hospitals can share data with business partners without telling patients as long as the information is used “only to help the covered entity carry out its health care functions.”)

On the positive side, the aggregation of patient data can facilitate more accurate diagnoses and more effective patient treatment.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning that zeroes in on individual patients to suggest changes to their care.

More troubling, given medicine’s continued drift from “profession” to “business,” is how providers can realize more profits from their patients by prescribing more medications, tests and procedures. How can patients distinguish between what they truly need to promote their healing and what is profit-making by the health care provider? As the Journal story also reports:

Ascension, the second-largest health system in the U.S., aims in part to improve patient care. It also hopes to mine data to identify additional tests that could be necessary or other ways in which the system could generate more revenue from patients, documents show.

How will patients be protected from unnecessary interventions and expense, or, unlike today, be enabled by industry reporting about medical outcomes to protect themselves? As I argued last week, the ethical responsibilities for everyone in healthcare–including for patients–are shifting in real time.
 
Earlier this year, I posted (here, here and here) on a similar Google initiative regarding smart cities. In places like Toronto, the company is helping governments gather and “crunch” data that will allow their cities to operate “smarter” and provide greater benefits for their citizens from the efficiencies that are achieved. As with Project Nightingale, there are privacy concerns that Google is attempting to address. But there are also key differences between this tech giant’s plans for monetizing citizen data in smart cities and its plans for monetizing patient data in the medical system.
 
In healthcare, your most personal information is being taken and used. This data is far more vital to your personal integrity and survival than information about your local traffic patterns or energy usage.

Moreover, in smart cities there are governments and long-established regulatory bodies that can channel citizen concerns back to government and its tech consultants, like Google. Because these interfaces are largely absent in health care, monitoring and enforcement are up to individual patients or hospital-sponsored patients’ rights committees. In other words, if you (as a patient) aren’t “watching the store,” almost no one will be doing so on your behalf.
 
To this sort of concern, Google responds both early and often, “Trust us. We’ve got your interests at heart,” but there are many reasons to be skeptical. Another Fast Company article that was posted yesterday documented (with a series of links) some of Google’s recent history of mishandling user data.

Google has gotten in trouble with European lawmakers for failing to disclose how it collects data and U.S. regulators for sucking up information on children and then advertising to them. The company has exposed the data of some 52 million users thanks to a bug in its Google+ API, a platform that has been shut down. Even in the field of health, it has already made missteps. In 2017, the U.K.’s Information Commissioner’s Office found the way patient data was shared between the Royal Free Hospital of London and [Google affiliate] DeepMind for a health project to be unlawful. The app involved…has since controversially been moved under the Google Health umbrella. More recently, a lawsuit accused Google, the University of Chicago Medical Center, and the University of Chicago of gross misconduct in handling patient records.

Much of moving forward here depends on trust.

Will health care providers, which suddenly have the profit-making potential of big data, protect us as patients or see us only as revenue generators?

Can these providers “master the learning curve” quickly enough to prevent sophisticated consultants like Google from exploiting us, or will the fox effectively be running the chicken coop going forward?

What will Google and the other data-gatherers do to recover trust that seems to be damaged almost daily wherever their revenues depend upon selling our data to advertisers and others who want to influence us?

Is Google’s business model simply incompatible with the business model that is evolving today in health care?

As the future of medicine gets debated, we all have a say in the matter.

This post was adapted from my November 17, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: Ascension Health System, big data, Google, health care industry, medical professional burnout, nurse burnout, patient due diligence, patient privacy rights, Project Nightingale, provider profit motives, PTSD in medical profession, unnecessary medical treatments

The Roles of Doctor and Patient Are Changing

November 11, 2019 By David Griesing

On a train to New York City, I found myself sitting next to a doctor from Johns Hopkins. Mid-career. Confident. As it turned out, he was also from a family of doctors. 

In his career, he said, he’d alternated between research and seeing patients, and I asked him if he was getting what he’d hoped out of it. He said he had at the beginning, when he could practice more the way his dad had, taking the time he needed to treat his patients. But more recently, demands from the government and insurance providers were requiring him to spend more and more patient time gathering information and creating medical records about their visits.

It gave him “an awful choice,” he said. “I can either spend much of my patient time looking down at my pad or tablet and taking notes or I can look them in the eye. I went into medicine to establish healing relationships, it’s how I saw my dad practice, but now this beast has to be fed every day.”

“What beast?” I asked. “Because I’ve chosen to keep talking to my patients,” he responded, “I still have to record all their medical information before I forget what we talked about, so almost every night I spend between 9 p.m. and midnight ‘feeding the data beast’ because, of course, my wife and kids get to see me for an hour or so once I get home.” “The volume of it is grinding me down,” he continued, “but our insurance system requires it. What I looked forward to as a doctor every day is getting harder to come by.”

I’ve noticed this from the other side too. When I go to a specialist or for my regular check-ups, I’m faced by my doctor as well as “a record keeper” with a touch screen. I’m always asked whether “I mind” having record keepers there and can always ask them to leave if I want to talk “one-on-one,” but it changes the entire dynamic in the room. Is this visit about me or my medical information?

It’s not whether electronic record keeping is working as intended, or is actually helping to manage medical costs, that caught my eye this week. Instead, it’s how the generation and use of patient data is placing more obligations (with fairly profound ethical implications) on the so-called healing arts, and how far those obligations extend beyond data privacy and confidentiality. Among other things, it got me wondering whether even our best doctors and medical caregivers are treating us as collections of data points instead of as “whole patients” in the grind of it all.

For centuries, a doctor’s ethical obligations have been set forth in the Hippocratic Oath, with its standards being tailored to current understandings about health and healing.  For example, to reflect our growing environmental awareness, a current version of the Oath widens the focus of care from the individual patient to the health of the community and the planet itself:

I swear to fulfill, to the best of my ability and judgment, this covenant:
 
I will respect the hard-won scientific gains of those physicians in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.
 
I will apply, for the benefit of the sick, all measures [that] are required, avoiding those twin traps of overtreatment and therapeutic nihilism.
 
I will remember that there is art to medicine as well as science, and that warmth, sympathy, and understanding may outweigh the surgeon’s knife or the chemist’s drug.
 
I will not be ashamed to say “I know not,” nor will I fail to call in my colleagues when the skills of another are needed for a patient’s recovery.
 
I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in matters of life and death. If it is given me to save a life, all thanks. But it may also be within my power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty. Above all, I must not play at God.
 
I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person’s family and economic stability. 
 
My responsibility [also] includes these related problems, if I am to care adequately for the sick:
 
I will prevent disease whenever I can, for prevention is preferable to cure.
 
I will protect the environment which sustains us, in the knowledge that the continuing health of ourselves and our societies is dependent on a healthy planet.
 
I will remember that I remain a member of society, with special obligations to all my fellow human beings, those [who are] sound of mind and body as well as the infirm.
 
If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of healing those who seek my help.

In the light of today’s Hippocratic Oath, it was easy to find several of its shadows.

Can our doctors provide “warmth, sympathy and understanding” while they are also filling in the blanks in their paperwork during the few minutes they are allotted to spend with us?  
 
When it comes to my data, how is it being used, who is using it, and how exactly is “my privacy” being protected?
 
Is this data collection primarily designed to make “the business of medicine” more cost-effective and efficient, or does it also promote my health and healing?
 
What is my responsibility as a patient, not only as a collaborator in my medical outcomes but also regarding  “the multiple lives” of the data I’m providing?
 
In these regards, some food for thought this week came in the form of a new Hippocratic Oath that has been proposed by West Coast doctor Jordan Shlain. I think you’ll agree that in some ways his proposed Oath makes our jobs as patients and our doctors’ (and other medical professionals’) jobs as healers even more fraught than they were already. 
 
Here’s Dr. Shlain’s proposed Oath, with my initial impressions [in brackets] following each of its statements.
 
1. I shall endeavor to understand what matters to the patient and actively engage them in shared decision making. I do not ‘own’ the patient nor their data. I am a trusted custodian.
         
[Instead of doctors doing and patients receiving, the emphasis on joint decision-making shares the health and healing burden more equitably. Unanswered is whether patients should own their medical data.]
 
2. I shall focus on good patient care and experience to make my profits. If I can’t do well by doing good and prove it, I don’t belong in the field of the healing arts.
 
3. I shall be transparent and interoperable. I shall allow my outcomes to be peer-reviewed.
 
[Both 2 and 3 confront “the business of medicine” squarely in the Oath, acknowledging that care should be delivered with greater transparency around a doctor’s outcomes for patients, which the data now allows. As the business of medicine publicly proves its worth, patients will become more like shoppers in a marketplace. What this new reality means in terms of accessibility or quality of care is, of course, uncertain.]
 
4. I shall enable my patients the opportunity to opt in and opt out of all data sharing with non-essential medical providers at every instance.
 
[Recognizing a patient’s interest in his/her data, information will need to be disclosed about essential and non-essential users of that data and about each patient’s ability to limit how it is shared.]
 
5. I shall endeavor to change the language I use to make healthcare more understandable; less Latin, less paternal language; I shall cease using acronyms. 
 
6. I shall make all decisions as though the patient was in the room with me and I had to justify my decision to them.
 
7. I shall make technology, including artificial intelligence algorithms that assist clinicians in medical decision-making, peer-reviewable.
 
[As AI and augmented intelligence programs become more common in medicine, protecting proprietary business information should not inhibit validation of the tools a doctor is using to treat us by his or her professional peers.]
 
8. I believe that health is affected by social determinants. I shall incorporate them into my strategy.
 
[This one goes further into the community behind the patient. As Dr. Shlain argues: “Someone’s zip code can tell you more about their health than their genetic code.”]
 
9. I shall deputize everyone in my organization to surface any violations of this oath without penalty. I shall use open-source artificial intelligence as the transparency tool to monitor this oath.
 
[With doctors working until midnight to feed the data beast and stressed about market competition from other practice groups, their willingness to open themselves to these kinds of ethical challenges from within their organizations seems almost utopian, but at the same time, this part of the proposed Oath acknowledges that patient/consumers alone won’t be able to police this rapidly evolving profession.]  
  
Increasing reliance on data collection and algorithm-driven automation is changing the medical profession into a business. It also changes our jobs as patients. Where once we were passive recipients of “the healing arts,” we are now being called upon to become more engaged consumers, with rights to more information about our care and additional options in the marketplace. Moreover, we should be as concerned about the uses of our medical information as we are about how our other personal data is being used (or misused) by Google, Facebook or governmental bodies like the police and the IRS.
 
At the same time that doctors should be anticipating more changes to the Hippocratic Oath, the job of being a patient and the responsibilities that come with it are also becoming more burdensome. It’s not doctor “up here” (with all the responsibility) and patient “down there” (with almost none of it) any more. We’re confronting an uncertain future together now.

This post was adapted from my November 10, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Proud of Your Work, Work & Life Rewards Tagged With: AI and augmented reality in medicine, doctors, electronic medical records, ethical obligations, Hippocratic Oath, medical data collection, medical professionals, medical work ethic, patient care, patient responsibilities, proposed changes to Hippocratic Oath
