David Griesing | Work Life Reward Author | Philadelphia


Finding the Will to Protect Our Humanity

December 16, 2019 By David Griesing

I want to share with you a short, jarring essay I read in the New York Times this week, but first a little background. 
 
For some time now, I’ve been worrying about how tech companies (and the technologies they employ) harm us when they exploit the very qualities that make us human, like our curiosity and pleasure-seeking. Of course, one of the outrages here is that companies like Google and Facebook are also monetizing our data while they’re addicting us to their platforms. But it’s the addiction-end of this unfortunate deal (and not the property we’re giving away) that bothers me most, because it cuts so close to the bone. When they exploit us, these companies are reducing our autonomy–or the freedom to act that each of us embodies. 
 
Today, it’s advertising dollars from our clicking on their ads, but tomorrow, it’s mind control or distraction addiction: the alternate (and equally terrible) futures that George Orwell and Aldous Huxley were worried about 80 years ago, and that were revisited in the cartoon essay I shared with you a couple of weeks ago.
 
In “These Tech Platforms Threaten Our Freedom,” a post from exactly a year ago, I tried to argue that the price for exchanging our personal data for “free” search engines, social networks and home deliveries is giving up more and more control over our thoughts and willpower. Instead of responding “mindlessly” to tech company come-ons, we could pause, close our eyes, and re-think our knee-jerk reactions before clicking, scrolling, buying and losing track of what we should really want. 
 
But is this mind-check even close to enough?
 
After I considered the addictive properties of on-line games (particularly for adolescent boys) in a post last March, my answer was a pretty emphatic “No!”  Games like Fortnite use the behavioral information they siphon from young players to reduce their ability to exit the game and start eating, sleeping, doing homework, going outside or interacting (live and in person) with friends and family.
 
But until this week, I never thought that maybe our human brains aren’t wired to resist the distracting, addicting and autonomy-sapping power of these technologies. 
 
Maybe we’re at the tipping point where our “fight or flight” instincts are finally over-matched.
 
Maybe we are already inhabiting Orwell’s and Huxley’s science fiction. 
 
(Like with global warming, I guess I still believed that there was time for us to avoid technology’s harshest consequences.)
 
When I read Tristan Harris’s essay “Our Brains Are No Match for Our Technology” this week, I wanted to know the science, instead of the science fiction, behind its title. But Harris begins with more of a conclusion than a proof, quoting one of the late 20th Century’s most creative minds, Edward O. Wilson. When asked a decade ago whether the human race would be able to overcome the crises that will confront us over the next hundred years, Wilson said:

Yes, if we are honest and smart. [But] the real problem of humanity is [that] we have Paleolithic emotions, medieval institutions and godlike technology.

Somehow, we have to find a way to reduce this three-part dissonance, Harris argues. But in the meantime, we need to acknowledge that “the natural capacities of our brains are being overwhelmed” by technologies like smartphones and social networks.

Even if we could solve the data privacy problem, these technologies would still reduce humanity to distraction by encouraging our self-centered pleasures and stoking our fears. Echoing Huxley in Brave New World, Harris argues that “[o]ur addiction to social validation and bursts of ‘likes’ would continue to destroy our attention spans.” Echoing Orwell in 1984, Harris is equally convinced that “[c]ontent algorithms would continue to drive us down rabbit holes toward extremism and conspiracy theories.”

While technology’s distractions reduce our ability to act as autonomous beings, its impact on our primitive brains also “compromises our ability to take collective action” with others.

[O]ur Paleolithic brains aren’t built for omniscient awareness of the world’s suffering. Our online news feeds aggregate all the world’s pain and cruelty, dragging our brains into a kind of learned helplessness. Technology that provides us with near complete knowledge without a commensurate level of agency isn’t humane….Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges….The attention [or distraction] economy has turned us into a civilization maladapted for its own survival.

Harris argues that we’re overwhelmed by 24/7 genocide, oppression, environmental catastrophe and political chaos; we feel “helpless” in the face of the overload; and our technology leaves us high and dry instead of providing us with the means (or the “agency”) to feel that we could ever make a difference.
 
Harris’s essay describes technology’s assault on our autonomy—on our free will to act—but he never describes or provides scientific support for why our brain wiring is unable to resist that assault in the first place. It left me wondering: are all humans susceptible to distraction and manipulation from online technologies or just some of us, to some extent, some of the time? 
 
Harris heads an organization called the Center for Humane Technology, but its website (“Our mission is to reverse human downgrading by realigning technology with our humanity”) only scratches the surface of that question.
 
For example, it links to a University of Chicago study involving the distraction that’s caused by smartphones we carry with us, even when they’re turned off. These particular researchers theorized that having these devices nearby “can reduce cognitive capacity by taxing the attentional resources that reside at the core of both working memory and fluid intelligence.”  In other words, we’re so preoccupied when our smartphones are around that our brain’s ability to process information is reduced. 
 
I couldn’t find additional research on the site, but I’m certain there was a broad body of knowledge fueling Edward O. Wilson’s concern, ten years ago, about the misalignment of our emotions, institutions and technology. It’s the state of today’s knowledge that could justify Harris’s alarm about what is happening when “our Paleolithic brains” confront “our godlike technologies,” and I’m sure he’s familiar with these findings.  But that research needs to be mustered and conclusions drawn from it so we can understand, as an impacted community, the risks that “our brains” actually face, and then determine together how to protect ourselves from them.

To enable us to reach this capable place, science needs to rally (as it did in an open letter about artificial intelligence and has been doing on a daily basis to confront global warming) and make its best case about technology’s assault on human autonomy. 
 
If our civilization is truly “maladapted to its own survival,” we need to find our “agency” now before any more of it is lost. But we can only move beyond resignation when our sense of urgency arises from a well-understood (and much chewed-upon) base of knowledge. 

This post was adapted from my December 15, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Daily Preparation Tagged With: agency, Aldous Huxley, autonomy, distraction, free will, George Orwell, human tech, humane technology, instincts, on-line addiction, technology, Tristan Harris

The Good Work of Getting What We Need As Patients

December 2, 2019 By David Griesing

Since writing recent posts here and here about work in healthcare—discussing burnout among health professionals, concerns about misuse of patient data, and questions about who is policing our rapidly changing health system—I’ve continued to follow developments in the field.
 
Over the past few weeks, some of you have also shared your troubled reactions about how work in healthcare has been evolving.
 
The net of these developments is that while there are grounds for alarm about the uses of our health data, its proliferation presents some extraordinary opportunities too. Concepts like “precision medicine” become more realistic as the amount and quality of the data improves. More and better data will also help us to live longer and healthier lives. On the other hand, whether AI and other data-related technologies can enable us to improve the quality of the work experience for millions of healthcare professionals is a stubbornly open question.
 
In this last healthcare-related post for a while, there are two practical rules of thumb that might give us a greater sense of control over how our healthcare data is being used, as well as a couple of ways in which more and better health-related information is already producing better patient outcomes.
 
The good work of getting the healthcare that we need as patients (both for ourselves and for others that we’re caring for) requires healthy doses of optimism as well as pessimism, together with understanding as much as we can about when excitement or alarm are warranted.

Two Rules of Thumb To Inhibit Misuse of Our Medical Data

The first rule of thumb involves insisting upon as much transparency as possible around the uses of our medical information. That includes knowing who is using it (beyond the healthcare provider) and minimizing the risks of anyone’s misuse of it.

Unfortunately, more of this burden falls on patients today. As health systems increasingly look to their bottom lines, they may be less incentivized to protect our personal data streams. And even when our interests are aligned, doctors and hospitals may not be able to protect our data adequately. As I wondered here a couple of weeks ago: “Can these providers ‘master the learning curve’ [of big data-related technologies] quickly enough to prevent sophisticated consultants like Google from exploiting us, or will the fox effectively be running the chicken coop going forward?”

An article last weekend in the Wall Street Journal called “Your Medical Data Isn’t As Safe As You Think It Is” raised a couple of additional issues. As patients, we may be lulled into complacency by the fact that much of our data is rendered “anonymous” (or stripped of our personal identifiers) before it is shared in big databases. But as this article describes at length, “de-identified” data in the hands of one of the tech companies can easily be “triangulated” with other data they already have on you to track your medical information back to you. That means they remain able to target you personally in ways you can imagine and some you cannot.
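Privacy researchers call this sort of “triangulation” a linkage attack. Here is a minimal sketch of one in Python; the two datasets, the column names and every record below are hypothetical, invented only to show the mechanics.

```python
# A minimal sketch of a "linkage attack": joining "de-identified"
# medical records to an auxiliary, named dataset on quasi-identifiers.
# All records, fields and names here are hypothetical.

# "Anonymous" medical data: names stripped, but quasi-identifiers
# (zip code, birth year, sex) left in place.
deidentified_records = [
    {"zip": "19103", "birth_year": 1972, "sex": "M", "diagnosis": "type 2 diabetes"},
    {"zip": "19104", "birth_year": 1988, "sex": "F", "diagnosis": "asthma"},
]

# Auxiliary data a platform might already hold, with names attached.
ad_profiles = [
    {"name": "J. Smith", "zip": "19103", "birth_year": 1972, "sex": "M"},
    {"name": "R. Jones", "zip": "19104", "birth_year": 1988, "sex": "F"},
]

def quasi_id(record):
    """The combination of attributes used to link the two datasets."""
    return (record["zip"], record["birth_year"], record["sex"])

# Index the named profiles by quasi-identifier, then re-identify.
profiles_by_qid = {quasi_id(p): p for p in ad_profiles}
for rec in deidentified_records:
    match = profiles_by_qid.get(quasi_id(rec))
    if match:
        print(f'{match["name"]} -> {rec["diagnosis"]}')
```

In combination, a few mundane attributes like these are often distinctive enough to pick out a single person, which is why stripping names alone offers such thin protection.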

Moreover, even if it remains anonymous, your medical data “in a stranger’s hands” may still come back to haunt you. As one expert in data sharing observed, companies that monetize personal data currently provide very little information about their operations. That means we know some of the risks to us but are in the dark about others. Among the known risks of data dispersal: you may suddenly find yourself paying higher health-related insurance premiums or barred from obtaining any coverage at all:

Google will be in a good position to start selling actuarial tables to insurance companies—like predictions on when a white male in his 40s with certain characteristics might be likely to get sick and expensive. When it comes to life and disability insurance, antidiscrimination laws are weak, he says. ‘That’s what creates the risk of having one entity having a really godlike view of you as a person that can use it against you in ways you wouldn’t even know.’

Our first rule of thumb as customers in the health system is to insist upon transparency around how our providers are sharing our medical information, along with the right to prevent it from being shared if we are concerned about how it will be used or who will be using it.
 
The second rule of thumb has always existed in healthcare, but may be more important now than ever. You should always be asking: is my medical information going to be used in a way that’s good for me?  If it’s being used solely to maximize Google’s revenues, the answer is clearly “No.” But if your information is headed for a health researcher’s big data set, you should ask some additional questions: “Was someone like me considered as the study was being constructed so the study’s results are likely to be relevant to me?”  “Will I be updated on the findings so my ongoing treatment can benefit from them?” (More questions about informed consent before sharing your medical data were set forth in another article this past week.) 

Of course, understanding “the benefits to you beforehand” can also help you determine whether a test, drug or treatment program is really necessary, assuming it’s possible to assess the pros and cons with your doctor in the limited time that you have before he or she orders it.
 
With medical practitioners becoming profit (or loss) centers for health systems that operate more like businesses, the good work of protecting yourself and your loved ones from misuse of your data requires both attention and vigilance at a time when you’re likely to be preoccupied by a range of other issues.

More and Better Data Is a Cause for Excitement Too

There is an outfit called Singularity University that holds an annual conference with speakers who discuss recent innovations in a range of fields. Its staff also posts weekly about the most exciting developments in technology on a platform called Singularity Hub. One of its recent posts and one of the speakers at its conference in September highlight why more and better medical data is also a cause for excitement.
 
To understand the promise of today’s medical data gathering, it helps to recall what medical information looked like until very recently. Most patient information stayed in medical offices and was never shared with anyone. When groups of patients were studied, the research results varied widely in quality and were not always reconciled with similar patient studies. Medicine advanced through peer-reviewed papers and debates over relatively small datasets in scholarly journals. Big data is upending that system today.
 
For us as patients, the most exciting development is that more high quality data will give us greater control over our own health and longevity. This plays out in (at least) two ways.
 
In the first instance, big data will give each of us “better baselines” than we have today about our current health and future prospects. According to the Singularity Hub post, companies as well as government agencies are already involved in large-scale projects to:

measure baseline physiological factors from thousands of people of different ages, races, genders, and socio-economic backgrounds. The goal is to slowly build a database that paints a comprehensive picture of what a healthy person looks like for a given demographic…These baselines can then be used to develop more personalized treatments, based on a particular patient.

Although it sounds like science fiction, the goal is essentially “to build a digital twin of every patient,” using it in simulations to optimize diagnoses, prevention and treatments. It is one way in which we will have personalized treatment plans that are grounded in far more accurate baseline information than has ever been available before.
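To make the “better baselines” idea concrete, here is a toy sketch in Python of how a demographic baseline might flag a patient’s measurements for follow-up. The cohort table, the numbers and the two-standard-deviation threshold are invented assumptions for illustration, not data or methods from any actual project.

```python
# Toy sketch: compare a patient's measurements against a demographic
# baseline and flag large deviations. The baseline values and the
# threshold are invented; a real system would be far richer.

# Hypothetical baseline: mean and standard deviation of two vitals
# for one demographic cohort.
BASELINES = {
    ("female", "40-49"): {
        "resting_hr":  {"mean": 68.0, "sd": 8.0},
        "systolic_bp": {"mean": 118.0, "sd": 12.0},
    },
}

def flag_outliers(cohort, measurements, threshold=2.0):
    """Return measurements more than `threshold` standard deviations
    from the cohort baseline -- candidates for a closer look."""
    flags = {}
    for name, value in measurements.items():
        base = BASELINES[cohort][name]
        z = (value - base["mean"]) / base["sd"]
        if abs(z) > threshold:
            flags[name] = round(z, 2)
    return flags

patient = {"resting_hr": 95.0, "systolic_bp": 121.0}
print(flag_outliers(("female", "40-49"), patient))
# -> {'resting_hr': 3.38}: elevated relative to demographic peers.
```

The richer and more representative the underlying database, the more comparisons like this one start to resemble the “digital twin” simulations described above.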
 
The second breakthrough will involve changes in what we measure, moving organized medicine from treatment of our illnesses to avoidance of most illnesses altogether and the greater longevity that comes with improved health. As these developments play out, it could become commonplace for more of us to live well beyond a hundred years.
 
At Singularity University’s conference two months ago, Dr. David Karow spoke about the data we should be collecting today to treat a broad spectrum of medical problems in their early stages and increase our life expectancy. He argues that his start-up, Human Longevity Inc., has a role to play in that future.
 
Four years ago, Karow conducted a trial involving nearly 1,200 presumably healthy individuals. In the course of giving them comprehensive medical checkups, he utilized several cutting-edge diagnostic technologies, including whole-genome and microbiome sequencing, various biochemical measurements and advanced imaging. By analyzing the data, his team found a surprisingly large number of early-stage tumors, brain aneurysms and cases of heart disease that could be treated before they produced any lasting consequences. In another 14% of the trial participants, significant, previously undetected conditions that required immediate treatment were discovered.
 
Karow’s argument is that we’re “not measuring what matters” today and that we should be “hacking longevity” with more pre-symptomatic diagnoses. For example, if testing indicates that you have the risk factors for developing dementia, you can minimize at least some of those risks now “because a third of the risks are modifiable.”
 
Every start-up needs its evangelists, and Karow is selling “a fountain of youth” that “starts with a fountain of data.”  This kind of personal data gathering is expensive today and not widely available, but it gestures towards a future where this sort of “deep testing” may be far more affordable and commonplace.
 
We need these promises of more personalized and preventative medicine—the hope of a better future—to have the stamina to confront the current risks of our medical data being monetized and misused long before we ever get there. As with so many other things, we need to hold optimism in one hand, pessimism in the other, and the ability to shuttle between them.

This post was adapted from my December 1, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: baseline health measures, big data, control of your data, data, ethics, health and longevity, health care industry, healthcare, misuse of patient data, pre-symptomatic diagnoses, work, work of being a patient

An Unhappy Healthcare System

November 19, 2019 By David Griesing

It came as no surprise. 

After writing about the growing challenges and responsibilities facing medical professionals and patients last week, I happened upon two posts about the burnout rates for the professionals who are charged with promoting health and healing in the rest of us.

The first was a PBS Newshour segment about the extent of the problem and, more recently, some possible cures. It cited studies from recent years finding that doctors commit suicide at twice the rate of the general population, that 86 percent of the nurses at one hospital met the criteria for burnout syndrome, that 22 percent had symptoms of post-traumatic stress disorder, and that the PTSD numbers for critical care nurses were comparable to those of war veterans returning from Afghanistan and Iraq. The reporter described what is happening as “a public health crisis.” In a small ray of hope, providers have also begun to create outlets—like arts programs—so healthcare workers can “process some of the trauma” they are experiencing on a daily basis and begin to recover.

The second post, in Fast Company, discussed the most stressful jobs that are being done by women today, including the category “nurses, psychiatric healthcare providers and home health aides.” It noted that registered nurses are 14-16% more likely to have poor cardiovascular health than the rest of the workforce, “a surprising result” because the job is so physically active and nurses are more knowledgeable about the risk factors for cardiovascular disease than the workforce in general.

Several of you who work in the health care industry wrote to me this week about your experiences at work, which (sadly) mirror these discouraging reports.

The other follow-up development relates to the data that is being gathered from patients by the health care industry. Earlier this week, the Wall Street Journal reported that Google had struck “a secret deal” with Ascension, one of the nation’s largest hospital systems, to gather and analyze patient data including lab results, doctor diagnoses and hospital records. Called Project Nightingale by Google and “Project Nightmare” by others, the data extraction and analysis “amounts to a complete health history, including patient names and dates of birth.” Having all of our medical information instantly available for analysis in one place is clearly a game changer.

The first alarm bells sounded about Project Nightingale involved the privacy of patient data. (Indeed, the day after its initial report, the Journal reported that the government had launched an investigation into Google’s medical data gathering on the basis of these concerns.) Among the privacy-related questions: will access to a patient’s data be restricted to practitioners who are involved in improving that patient’s outcomes? If this data can be used by others, how will it be used and how is the hospital system ensuring that those uses are consistent with that provider’s privacy policies? The governing statute, the Health Insurance Portability and Accountability Act of 1996, provides only the loosest of restrictions today. (Hospitals can share data with business partners without telling patients as long as the information is used “only to help the covered entity carry out its health care functions.”) 

On the positive side, the aggregation of patient data can facilitate more accurate diagnoses and more effective patient treatment.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning that zeroes in on individual patients to suggest changes to their care.

More troubling, given medicine’s continued drift from “profession” to “business,” is how providers can realize more profits from their patients by prescribing more medications, tests and procedures. How can patients distinguish between what they truly need to promote their healing and what is profit-making by the health care provider? As the Journal story also reports:

Ascension, the second-largest health system in the U.S., aims in part to improve patient care. It also hopes to mine data to identify additional tests that could be necessary or other ways in which the system could generate more revenue from patients, documents show.

How will patients be protected from unnecessary interventions and expense, or, unlike today, be enabled by industry reporting about medical outcomes to protect themselves? As I argued last week, the ethical responsibilities for everyone in healthcare–including for patients–are shifting in real time.
 
Earlier this year, I posted (here, here and here) on a similar Google initiative regarding smart cities. In places like Toronto, the company is helping governments to gather and “crunch” data that will allow their cities to operate “smarter” and provide greater benefits for their citizens from the efficiencies that are achieved. As with Project Nightingale, there are privacy concerns that Google is attempting to address. But there are also key differences between this tech giant’s plans for monetizing citizen data in smart cities and its plans for monetizing patient data in the medical system.
 
In healthcare, your most personal information is being taken and used. This data is far more vital to your personal integrity and survival than information about your local traffic patterns or energy usage.

Moreover, in smart cities there are governments and long-established regulatory bodies that can channel citizen concerns back to government and its tech consultants, like Google. Because these interfaces are largely absent in health care, monitoring and enforcement are up to individual patients or hospital-sponsored patients’ rights committees. In other words, if you (as a patient) aren’t “watching the store,” almost no one will be doing so on your behalf.
 
To this sort of concern, Google responds both early and often, “Trust us. We’ve got your interests at heart,” but there are many reasons to be skeptical. Another Fast Company article that was posted yesterday documented (with a series of links) some of Google’s recent history of mishandling user data.

Google has gotten in trouble with European lawmakers for failing to disclose how it collects data and U.S. regulators for sucking up information on children and then advertising to them. The company has exposed the data of some 52 million users thanks to a bug in its Google+ API, a platform that has been shut down. Even in the field of health, it has already made missteps. In 2017, the U.K.’s Information Commissioner’s Office found the way patient data was shared between the Royal Free Hospital of London and [Google affiliate] DeepMind for a health project to be unlawful. The app involved…has since controversially been moved under the Google Health umbrella. More recently, a lawsuit accused Google, the University of Chicago Medical Center, and the University of Chicago of gross misconduct in handling patient records.

Much of moving forward here depends on trust.

Will health care providers that suddenly have the profit-making potential of big data protect us as patients, or see us only as revenue generators?

Can these providers “master the learning curve” quickly enough to prevent sophisticated consultants like Google from exploiting us, or will the fox effectively be running the chicken coop going forward?

What will Google and the other data-gatherers do to recover trust that seems to be damaged almost daily wherever their revenues depend upon selling our data to advertisers and others who want to influence us?

Is Google’s business model simply incompatible with the business model that is evolving today in health care?

As the future of medicine gets debated, we all have a say in the matter.

This post was adapted from my November 17, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: Ascension Health System, big data, Google, health care industry, medical professional burnout, nurse burnout, patient due diligence, patient privacy rights, Project Nightingale, provider profit motives, PTSD in medical profession, unnecessary medical treatments

The Roles of Doctor and Patient Are Changing

November 11, 2019 By David Griesing

On a train to New York City, I found myself sitting next to a doctor from Johns Hopkins. Mid-career. Confident. As it turned out, he was also from a family of doctors. 

In his career, he said he’d alternated between research and seeing patients, and I asked him if he was getting what he’d hoped out of it. He said he had at the beginning, when he could practice more the way his dad had, like taking the time he needed to treat his patients. But more recently, demands from the government and insurance providers were requiring him to spend more and more patient time gathering information and creating medical records about their visits.

It gave him “an awful choice,” he said. “I can either spend much of my patient time looking down at my pad or tablet and taking notes or I can look them in the eye. I went into medicine to establish healing relationships, it’s how I saw my dad practice, but now this beast has to be fed every day.”

“What beast?” I asked. “Because I’ve chosen to keep talking to my patients,” he responded, “I still have to record all their medical information before I forget what we talked about, so almost every night I spend between 9 p.m. and midnight ‘feeding the data beast’ because, of course, my wife and kids get to see me for an hour or so once I get home.”  “The volume of it is grinding me down,” he continued, “but our insurance system requires it. What I looked forward to as a doctor every day is getting harder to come by.”

I’ve noticed this from the other side too. When I go to a specialist or for my regular check-ups I’m faced by my doctor as well as “a record keeper” with a touch screen. I’m always asked whether “I mind” having record keepers there and can always ask them to leave if I want to talk “one-on-one,” but it changes the entire dynamic in the room. Is this visit about me or my medical information?

It’s not whether electronic record keeping is working as intended, or is actually helping to manage medical costs, that caught my eye this week. Instead, it’s how the generation and use of patient data is placing more obligations (with fairly profound ethical implications) on the so-called healing arts, and how far those obligations extend beyond data privacy and confidentiality.  Among other things, it got me wondering whether even our best doctors and medical caregivers are treating us as collections of data points instead of “as whole patients” in the grind of it all.

For centuries, a doctor’s ethical obligations have been set forth in the Hippocratic Oath, with its standards being tailored to current understandings about health and healing.  For example, to reflect our growing environmental awareness, a current version of the Oath widens the focus of care from the individual patient to the health of the community and the planet itself:

I swear to fulfill, to the best of my ability and judgment, this covenant:
 
I will respect the hard-won scientific gains of those physicians in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.
 
I will apply, for the benefit of the sick, all measures [that] are required, avoiding those twin traps of overtreatment and therapeutic nihilism.
 
I will remember that there is art to medicine as well as science, and that warmth, sympathy, and understanding may outweigh the surgeon’s knife or the chemist’s drug.
 
I will not be ashamed to say “I know not,” nor will I fail to call in my colleagues when the skills of another are needed for a patient’s recovery.
 
I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in matters of life and death. If it is given me to save a life, all thanks. But it may also be within my power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty. Above all, I must not play at God.
 
I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person’s family and economic stability. 
 
My responsibility [also] includes these related problems, if I am to care adequately for the sick:
 
I will prevent disease whenever I can, for prevention is preferable to cure.
 
I will protect the environment which sustains us, in the knowledge that the continuing health of ourselves and our societies is dependent on a healthy planet.
 
I will remember that I remain a member of society, with special obligations to all my fellow human beings, those [who are] sound of mind and body as well as the infirm.
 
If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of healing those who seek my help.

In the light of today’s Hippocratic Oath, it was easy to find several of its shadows.

Can our doctors provide “warmth, sympathy and understanding” while they are also filling in the blanks in their paperwork during the few minutes they are allotted to spend with us?  
 
When it comes to my data, how is it being used, who is using it, and how exactly is “my privacy” being protected?
 
Is this data collection primarily designed to make “the business of medicine’” more cost effective and efficient, or does it also promote my health and healing?  
 
What is my responsibility as a patient, not only as a collaborator in my medical outcomes but also regarding  “the multiple lives” of the data I’m providing?
 
In these regards, some food for thought this week came in the form of a new Hippocratic Oath that has been proposed by West Coast doctor Jordan Shlain. I think you’ll agree that in some ways his proposed Oath makes our jobs as patients and our doctors’ (and other medical professionals’) jobs as healers even more fraught than they were already. 
 
Here’s Dr. Shlain’s proposed Oath, with my initial impressions [in brackets] following each of its statements.
 
1. I shall endeavor to understand what matters to the patient and actively engage them in shared decision making. I do not ‘own’ the patient nor their data. I am a trusted custodian.
         
[Instead of doctors doing and patients receiving, the emphasis on joint decision-making shares the health and healing burden more equitably. Unanswered is whether patients should own their medical data.]
 
2. I shall focus on good patient care and experience to make my profits. If I can’t do well by doing good and prove it, I don’t belong in the field of the healing arts.
 
3. I shall be transparent and interoperable. I shall allow my outcomes to be peer-reviewed.
 
[Both 2 and 3 confront “the business of medicine” squarely in the Oath, acknowledging that care should be delivered with greater transparency around a doctor’s outcomes for patients, which the data now allows. As the business of medicine publicly proves its worth, patients will become more like shoppers in a marketplace. What this new reality means in terms of accessibility or quality of care is, of course, uncertain.]
 
4. I shall enable my patients the opportunity to opt in and opt out of all data sharing with non-essential medical providers at every instance.
 
[Recognizing a patient’s interest in his/her data, information will need to be disclosed about essential and non-essential users of that data and about each patient’s ability to limit how it is shared.]
 
5. I shall endeavor to change the language I use to make healthcare more understandable; less Latin, less paternal language; I shall cease using acronyms. 
 
6. I shall make all decisions as though the patient was in the room with me and I had to justify my decision to them.
 
7. I shall make technology, including artificial intelligence algorithms that assist clinicians in medical decision-making, peer-reviewable.
 
[As AI and augmented intelligence programs become more common in medicine, protecting proprietary business information should not prevent a doctor’s professional peers from validating the tools he or she is using to treat us.]
 
8. I believe that health is affected by social determinants. I shall incorporate them into my strategy.
 
[This one goes further into the community behind the patient. As Dr. Shlain argues: “Someone’s zip code can tell you more about their health than their genetic code.”]
 
9. I shall deputize everyone in my organization to surface any violations of this oath without penalty. I shall use open-source artificial intelligence as the transparency tool to monitor this oath.
 
[With doctors working until midnight to feed the data beast and stressed about market competition from other practice groups, their willingness to open themselves to these kinds of ethical challenges from within their organizations seems almost utopian, but at the same time, this part of the proposed Oath acknowledges that patient/consumers alone won’t be able to police this rapidly evolving profession.]  
  
Increasing reliance on data collection and algorithm-driven automation is changing the medical profession into a business. It also changes our jobs as patients. Where once we were passive recipients of “the healing arts,” we are now being called upon to become more engaged consumers, with rights to more information about our care and additional options in the marketplace. Moreover, we should be as concerned about the uses of our medical information as we are about how our other personal data is being used (or misused) by Google, Facebook or governmental bodies like the police and IRS.
 
At the same time that doctors should be anticipating more changes to the Hippocratic Oath, the job of being a patient and the responsibilities that come with it are also becoming more burdensome. It’s not doctor “up here” (with all the responsibility) and patient “down there” (with almost none of it) any more. We’re confronting an uncertain future together now.

This post was adapted from my November 10, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Proud of Your Work, Work & Life Rewards Tagged With: AI and augmented reality in medicine, doctors, electronic medical records, ethical obligations, Hippocratic Oath, medical data collection, medical professionals, medical work ethic, patient care, patient responsibilities, proposed changes to Hippocratic Oath

Nostalgia Can Help Us Build a Better Future

October 29, 2019 By David Griesing

There is a widespread consensus that we’re on the cusp of a workplace revolution that will automate millions of jobs and replace millions of workers. 

Among the many questions is whether technologies that are on the rise, like augmented and artificial intelligence, will spawn enough new jobs and a new prosperity for these displaced workers to keep supporting themselves.

Those fearing that far more jobs will be eliminated than created have argued for fixes like a universal basic income that would place a minimum financial floor under every adult while ensuring that society doesn’t dissolve into chaos. How this safety net would be paid for and administered has always been far less clear in these proposals.

Others are arguing that the automation revolution will usher in a new era of flourishing, with some new jobs maintaining and safeguarding the new automated systems, and many others that we can’t even imagine yet. However, these new programming and maintenance jobs won’t be plentiful enough to replace the “manual” jobs that will be lost in our offices, factories and transportation systems. Other “replacement jobs” might also be scarce. In a post last January, I cited John Hagel’s argument that most new jobs will bunch towards the innovative, the most highly skilled, what he called “the scaling edge” of the job spectrum.

On the other hand, analysts at McKinsey Global Institute who have considered the automation revolution noted in a July 2019 report that automation will also produce a burst of productivity and profitability within companies, and that employees will be able to work more efficiently and reduce their time working (5-hour days or 4-day work weeks) while gaining more leisure time. With more routine tasks being automated, McKinsey estimates that the growing need to customize products and services for consumers with more time on their hands will create new companies and an avalanche of new jobs to serve them. At the same time, demands for more customization of existing products and services will create new jobs that require “people skills” in offices and on factory floors.

As we stand here today, it is difficult to know whether we should share Hagel’s concern or McKinsey’s optimism.

Predicting the likely impacts at the beginning of a workplace revolution is hardly an exact science. To the extent that history is a teacher, those with less education, fewer high-level skills and difficulties adapting to changing circumstances will be harmed the most. Far less certain are the impacts on the rest of us, whose education, skill levels and adaptability are greater but who may be less comfortable at the “scaling” edges of our industries.

Then there’s the brighter side. Will we be paid the same (or more) as we are today given the greater efficiency and productivity that automation will provide?  Will we work less but still have enough disposable income to support all of the new companies and workers who are eager to serve our leisure time pursuits?  Maybe.

It is also possible to imagine scenarios where millions of people lose their livelihoods and government programs become “the last resort” for maintaining living standards. Will vast new bureaucracies administer the social safety nets that will be required? Will the taxes on an increasingly productive business sector (with its slimmed-down payrolls) be enough to support these programs? Will those who want to work have sufficient opportunities for re-training to fill the new jobs that are created?  And even more fundamentally, will we be able to accommodate the shift from free enterprise to something that looks a lot more like a welfare state?

While most of us have been preoccupied by the daily tremors and upheavals in politics, there are also daily tremors and upheavals that are changing how we work and even whether we’ll be able to work for “a livable wage” if we want to.

As I argued recently in The Next Crisis Will Be a Terrible Thing to Waste, the chance to realize your priorities improves significantly during times of disruption, as long as you’re clear about your objectives and have done some tactical planning in advance. As you know, I also believe in the confidence that comes with hope, or the conviction that you can change things for the better if you believe enough in the future that you’re ready to act on its behalf.

Beyond finding and continuing to do “good work” in this new economy, I listed my key priorities in that post: policies that support thriving workers, families and communities and not just successful companies; jobs that assume greater environmental stewardship as essential to their productivity; and expanding the notion of what it means for a company “to be profitable” for all of its stakeholders.

From this morning’s perspective—and assuming that the future of work holds at least as much opportunity as misfortune—I’ve been thinking not only about those priorities but also about things I miss today that seemed to exist in the past. In other words, a period of rapid change like this is also a time for what Harvard’s Svetlana Boym once called “reflective nostalgia.” The question is how this singular mindset can fuel our passion for the objectives we want—motivate us to take more risks for the sake of change—in the turbulent days ahead.

Nostalgia isn’t about specific memories. Instead, it’s about a sense of loss, an emptiness today that you feel had once been filled in your life or work.

Unlike the kind of nostalgia that attempts to recreate a lost world from the ruins of the past, reflective nostalgia acknowledges your loss but also the impossibility of your ever recovering that former time. By establishing a healthy distance from an idealized past, reflective nostalgia liberates you to find new ways to gain something that you still need in the very different circumstances of the future that you want.

Because the urge to fill unsatisfied needs is a powerful motivator, I’ve been thinking about needs of mine that once were met, aren’t being met today, but could be satisfied again “if I always keep them in mind” while pursuing my priorities in the future. As you mull over my short list of “nostalgias” and think about yours, please feel free to drop me a line about losses you’d like to recoup in a world that’s on the cusp of reinvention.

MY SHORT LIST OF LOSSES:

– I miss a time when strangers (from marketers to the government) knew less about my susceptibilities and hot buttons. Today, given the on-line breadcrumbs I leave in my wake, strangers can track me, discover dimensions of my life that once were mine alone, and use that information to influence my decisions or just look over my shoulder. Re-building and protecting my private space is at the core of my ability to thrive.

I want to own my personal data, to sell it or not as I choose, instead of having it taken from me whenever I’m on-line or face a surveillance camera in a public space. I want a right to privacy that’s created by law, shielded from technology and protected by the authorities. The rapid advance of artificial intelligence at work and outside of it gives the creation of this right particular urgency as the world shifts and the boundaries around life and work are re-drawn.

– I miss a time when I didn’t think my organized world would fall apart if my technology failed, my battery went dead, the electricity was cut off or the internet was no longer available. I miss my self-reliance and resent my dependency on machines. 

If I do have “more free time” in the future of work, I’ll push for more tech that I can fix when it breaks down and more resources that can help me to do so. I’ll advocate for more “fail-safe” back-up systems to reduce my vulnerability when my tech goes down. There is also the matter of my autonomy. I need to have greater understanding and control over the limits and possibilities of the tech tools that I use every day because, to some degree, I am already “a prisoner of my incompetence,” as one recent article puts it.

One possibility is that turning over [more] decisions and actions to an AI assistant creates a “nanny world” that makes us less and less able to act on our own. It’s what one writer has called the ‘Jeeves effect’ after the P.G. Wodehouse butler character who is so capable that Bertie Wooster, his employer, can get by being completely incompetent.

My real-life analogy is this. Even though I’ve had access to a calculator for most of my life, it’s still valuable for me to know how to add, subtract, multiply and divide without one. As tech moves farther beyond my ability to understand it or perform its critical functions manually, I need to maintain (or recover) more of that capability. Related to my first nostalgia, I’d meet this need by actively seeking “a healthier relationship” with my technology in my future jobs.
 
– I remember a time when I was not afraid that my lifestyle and consumption patterns were helping to degrade the world around me faster than the world’s ability to repair itself. At the same time, I know today that my absence of concern during much of my work life had more to do with my ignorance than with the maintenance of a truly healthy balance between what nature was giving and what humankind (including me) was taking.

As a result, I need greater confidence that my part in restoring that balance is a core requirement of any jobs that I’ll do in the future. With my sense of loss in mind, I can encourage more sustainable ways to work (and live) to evolve.
 
– Finally, I miss a time when a company’s success included caring for the welfare of workers, families and communities instead of merely its shareholders’ profits, a model that was not uncommon from the end of World War II through the 1970s. I miss a time, not so long ago, when workers bargained collectively and successfully for their rights and benefits on the job. I miss a time when good jobs with adequate pay and benefits along with safe working conditions were protected by carefully crafted trade protections instead of being easily eliminated as “too expensive” or “inefficient.”
 
While this post-War period can never be recovered, a leading group of corporate executives (The Business Roundtable) recently committed their companies to serving not only their shareholders but also their other “stakeholders,” including their employees and the communities where they’re located. As millions of jobs are lost to automation and new jobs are created in the disruption that follows, I’ll have multiple opportunities as a part of “this new economy workforce” to challenge companies I work for (and with) to embrace the broader standard of profitability that I miss.

+ + +

Instead of being mired in the past, reflective nostalgia provides the freedom to seek opportunities to fill real needs that have never gone away. With this motivating mindset, the future of work won’t just happen to me. It becomes a set of possibilities that I can actually shape.

This post was adapted from my October 27, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Work & Life Rewards Tagged With: artificial intelligence, augmented intelligence, automation, future of work, making the most of a crisis, reflective nostalgia, relationship with technology, sustainability, Svetlana Boym, workforce disruption
