David Griesing | Work Life Reward Author | Philadelphia


Facing Risks, Finding Control

November 12, 2018 By David Griesing

Alex Honnold’s Free Solo Climb

Introducing some risk into your life and work can remind you what it’s like to feel alive. Not that we’re sleepwalking exactly, but if “personal comfort” trumps most other considerations, you have probably insulated yourself from anything more serious than inconvenience—and there’s a price for that.

What we do every day can easily fall into grooves of predictability, where there are few occasions to be confronted with anything surprising, let alone alarming. But if we deprive ourselves of occasions when we need to find some courage and “fall back on” ourselves to overcome our fears, what used to be called “one’s constitution” begins to slip away.

Ask yourself: “What would I do if all I had to rely upon were my wits, if I suddenly had to decide between two uncertain outcomes, if none of my insulations were there to protect me—and my only choices were either to crumble or persevere?” I’d argue that it’s good to put ourselves “on the line” from time to time and find out. It gives us a chance to get in touch with “our elemental selves,” to store up some fortitude for the next time, and to recall our bravery and resourcefulness when we could use some inspiration.

Taking some risks, facing your fears, and learning something new about yourself and others have been newsletter themes before. As you know, I’m an off-the-beaten-track traveler who encountered some sketchy characters in Rome (“What’s Best Is Never Free”) and a genuinely menacing one in New Orleans (“Risk Taking, Opportunity Seeking”). The reward each time was to discover something about these cities and their people that I could not have found out any other way. On the spot, I felt more alive. And where I could have responded better, I thought about how I’d do things differently the next time I leave my comfort zone.

The upside of taking risks also drove the migration from Asia that settled the Western Hemisphere 15,000 years ago. These new Americans didn’t stop in the first fertile valley they discovered. Instead, they pushed to the edges of nearly every corner of North, Central and South America with astonishing speed. It was insatiable curiosity and the thrill of conquest that drove them on, despite their having to confront megafauna (really big animals with razor-sharp claws and teeth), the challenges of wilderness travel with children and elders, and a total absence of convenience stores. In his book about the migration, Craig Childs cited research for the proposition that an appetite for risk is hardwired into our DNA, giving rise to human progress and the rush of adventure that quickly follows.

Two news stories this week provide additional food for thought about our psychological risk profiles, and a literally “ground-breaking” documentary delves into the motivations behind Alex Honnold’s “free solo” climb up the rock face of El Capitan. I hope they’ll contribute to your thinking about staying confident, willful and alive.

El Capitan

Two recent pieces in the Wall Street Journal consider fear-inducing situations from opposite directions. One, called “Using Fear to Break Out of a Funk” argues that you can raise your spirits by confronting something that scares you and building a record for bravery. The other, “Travel Mistakes That Hurt,” is about foolishly throwing caution to the wind when you’re in a vacation state of mind. Taken together, they provide something of a template for healthy risk taking.

It’s amazing what fools we can sometimes be when we’re traveling: incapacity from drinking too much alcohol or not enough water, injuries from mopeds and other unfamiliar vehicles, assuming wild animals are “cute,” hiking or climbing beyond our physical limits, and falling off cliffs or into traffic while taking pictures of ourselves. The “Travel Mistakes” article features an interview with Tim Daniel of International SOS, an organization whose travel coverage includes rescuing people from every kind of harm. Daniel says travel is disorienting for almost everyone, and that when we’re inundated with all that new information we can end up focusing on the wrong things and making poor choices.

Some of us go with the first thing we’re told instead of testing its reliability. Other times we’re susceptible to “the bandwagon effect”: if others are jumping off a cliff and into the water, then it must be safe for us to jump in too. We may cling to our preconceptions (this neighborhood was safe 20 years ago) despite whatever evidence there is to the contrary today. Daniel argues that our blind spots always become more pronounced when we travel.

These blind spots are one reason it’s helpful to travel with companions who know you well enough to warn you about yours before it’s too late. Or if you’re traveling alone, it helps to think about your worst inclinations in advance and to keep them in mind before they get you in trouble. Navigating the unfamiliar (including its risks) makes travel exhilarating, but to maximize the potential gains and minimize the possible losses, it helps to know the baggage that you’ve brought along with you.

On a more positive note, it turns out that “amping up the adrenaline to get out of an emotional rut” is also a prescription with some science behind it. This is the kind of “funk” we’re often trying to leave behind when we seek a break from our daily routines. Sociologist Margee Kerr has written about what happens when we face our fears about loss of control in challenging situations.

When we’re terrified, our sympathetic nervous system, which is in charge of the fight-or-flight response, floods the body with adrenaline and the brain with neurotransmitters such as dopamine and norepinephrine. Our blood vessels constrict to preserve blood for muscles and organs that might need it if we decide to run. And our mind focuses on the present. The physical response lasts a few hours, but the memory is what we draw strength from.

The woman who wrote “Using Fear to Break Out of a Funk” is also a scuba diver. She explored the theory’s immediate and long-term benefits by choosing a particularly demanding dive in Iceland, between the continental plates that separate North America from Eurasia. During the dive, she confronted her fears multiple times “but pushed through by refusing to acknowledge that quitting was an option.” As soon as she did so, she felt “strong, brave and happy.” Moreover, the memory of that experience was even stronger. Whenever she’s struggling to get through a bad day, she says: “I go back to that place where I can do anything.”

Finding your control when risks give rise to fear is exhilarating at the time and empowering for as long as you can relive your resourcefulness.

Alex Climbing Up

This photo, along with the shot that tops this post, shows Alex Honnold climbing the sheer rock face of El Capitan in Yosemite National Park without ropes or safety gear. Over 3,000 feet of granite and thousands of hand- and footholds: the climb took him 3 hours and 56 minutes. Known as a “free solo,” his ascent was a first in the annals of rock climbing, and it is the subject of a documentary that’s in theaters today.

I’m not good with heights and so far have been afraid to see it. But somebody named John Baylies was brave enough, and he described his experience this way in an on-line forum:

I judge this the scariest movie I’ve ever seen. Impossible not to get personally involved. Two big questions loom. What disease does this man suffer, that he has no fear? And what the hell were the guys in animal costumes doing 1,000 feet into the climb? If this were fiction, it was perfect comic relief for what was the tensest 20 minutes on film.

However curious I am about the animal costumes, I may just have to read about them. But the buzz around his climb got me interested in Honnold, so I tracked down a TED talk he gave, along with an extended interview he did on Joe Rogan’s podcast after the documentary came out. I think you’ll enjoy them too.

The highly informal Honnold-Rogan exchange provides several glimpses into the type of person who would train for 20 years with the goal of finding control while facing a succession of nearly overwhelming risks to his personal safety. Watching and listening to Honnold talk was fascinating. Humble. Direct. Thoughtful. Articulate. The farthest thing from a daredevil, Honnold revealed much of what drives him in answering Rogan’s question about all those people he must have inspired to follow in his footsteps. He says simply that he guesses he would be pleased to inspire people if it were “to live an intentional life” like he has: knowing what he wants and working to achieve it.

Honnold’s TED talk elaborates on what living that way means for him. In it, he contrasts a free solo climb he completed at Half Dome (also in Yosemite), which proved unsatisfying, with his encore at El Capitan, which he describes as “quite simply the best day of my life.”

At Half Dome in 2012, he never practiced beforehand and had the cocky over-confidence that he would somehow “rise to the occasion” and make it to the summit. Then he reached a point in his climb, almost 2,000 feet up, where he could not find his next handhold or toehold. Honnold knew what he had to do (a tricky maneuver) but was overcome with fear that he’d execute the move incorrectly and would likely die. After much deliberation, he did manage the move successfully and reached the top safely, but he vowed that he’d never be that reckless again.

Five years later at El Capitan, Honnold worked for months on its rock face finding and memorizing every hand and foothold so there would be no surprises on the day of his climb. He removed loose rocks along his path, carrying them down in a backpack. He anticipated everything that was likely to happen and how he would respond to it in what became a highly choreographed dance.

The way that Honnold managed his fear was to leave “no room for doubt to creep in.” Because he always knew his next move, his mental and physical preparation made the actual climb feel “as comfortable and natural as taking a walk in the park.” Why did he succeed at El Capitan when he felt so much less successful at Half Dome? “I didn’t want to be a lucky climber, I wanted to be a great climber,” he said.

+ + +

Finding the calm and mastery of control in the face of risks—as big as Honnold’s or as small as any of ours might be—is always a function of preparation. To extend yourself and overcome a new challenge takes planning and visualizing what you’re likely to encounter along with understanding yourself, the mistakes you are prone to make, and the strategies you’ll employ to avoid them. In Honnold’s words, “it takes intentionality” beforehand. You have to want to do it in the right way.

The upside in taking risks and pushing your envelope isn’t found in the speculation that you’ll be able to handle whatever comes your way. You may end up being lucky, but just as likely a group like International SOS may be coming to your rescue. On the other hand, when you’re ready to assume the risks, the rewards are feeling fully alive in the moment you face them and recalling your bravery and resourcefulness whenever your confidence flags.

This post is adapted from my November 11, 2018 newsletter.

Filed Under: *All Posts, Being Proud of Your Work, Daily Preparation, Heroes & Other Role Models Tagged With: Alex Honnold, comfort zone, control fear, fear, free solo, mastery, mental preparation, risk and reward, visualizing

Choosing a Future For Self-Driving Cars

November 5, 2018 By David Griesing

It looks pretty fine, doesn’t it?

You’ll no longer need a car of your own because this cozy little pod will come whenever you need it. All you’ll have to do is to buy a 30- or 60-ride plan and press “Come on over and get me” on your phone.

You can’t believe how happy you’ll be to have the buying or leasing, gas, insurance and repair bills behind you, or to no longer have a monstrosity taking up space while waiting for those limited times when you’ll actually be driving it. Now you’ll be able to get where you need to go without any of the “sunk costs,” because it’ll be “pay as you go.”

And go you will. This little pod will transport you to work or the store, or to pick up your daughter or your dog, while you kick back in comfort. It will be an always-on-call servant that lets you stay home all weekend while it delivers your groceries and take-out orders, or brings you a library book or your lawn mower from the repair shop. You’ll be the equivalent of an Amazon Prime customer who can have nearly every material need met wherever you are—but instead of “same day service,” you might have products and services at your fingertips in minutes, because one of these little pods will always be hovering around to bring them to you. Talk about immediate gratification!

They will also drive you to work and be there whenever you need a ride home. In a fantasy commute, you’ll have time to unwind in comfort with your favorite music or by taking a nap. Having one of these pods whenever you want one will enable you to work from different locations, to have co-workers join you when you’re working from home, and to work while traveling instead of paying attention to the road. You can’t believe how much your workday will change.

Doesn’t all this money saving, comfort, convenience and freedom sound too good to be true? Well, let’s step back for a minute.

We thought Facebook’s free social network and Amazon’s cheap and convenient take on shopping were unbelievably wonderful too—and many of us still do. So wonderful that we built them into the fabric of our lives in no time. In fact, we quickly claim new comforts, conveniences and cost savings as if we’d been entitled to them all along. It’s only as the dazzle of these new technology platforms begins to fade into “taking them for granted” that we also begin to wonder (in whiffs of nostalgia and regret? in concerns for their unintended consequences?) about what we might have given up by accepting them in the first place.

Could it be:

-the loss of chunks of our privacy to advertisers and data-brokers who are getting better all the time at manipulating our behavior as consumers and citizens;

-the gutting of our Main Streets of brick & mortar retail, like book and hardware stores, and the attendant loss of centers-of-gravity for social interaction and commerce within communities; or

-the elimination of entry-level and lower-skilled jobs and of entire job-markets to automation and consolidation, the jobs you had as a teenager or might do again as you’re winding down, with no comparable work opportunities to replace them?

Were the efficiency, comfort and convenience of these platforms as “cost-free” as they were cracked up to be? Is Facebook’s and Amazon’s damage already done and largely beyond repair? Have tech companies like them been defining our future or have we?

Many of us already depend on ride-sharing companies like Uber and Lyft. They are the harbingers of a self-driving vehicle industry that promises to disrupt our lives and work in at least the following ways. They will largely eliminate the need to own a car. They will transform our transportation systems, impacting public transit, highways and bridges. They will streamline how goods and services are moved in terms of logistics and delivery. And in the process, they will change how the entire “built environment” of urban centers, suburbs, and outer-ring communities will look and function, including where we’ll live and how we’ll work. Because we are in many ways “a car-driven culture,” self-driving vehicles will impact almost everything that we currently experience on a daily basis.

That’s why it’s worth all of us thinking about this future before it arrives.

Our Future Highways

One way to help determine what the future should look like and how it should operate is to ask people—lots of them—what they’d like to see and what they’re concerned about. In fact, it’s an essential way to get public buy-in to new technology before some tech company’s idea of that future is looking us in the eye, seducing us with its charms, and hoping we won’t notice its uglier parts.

When it comes to self-driving cars, one group of researchers is seeking informed buy-in by using input from the public to influence the drafting of the decision-making algorithms behind these vehicles. In the so-called Moral Machine Experiment, these researchers asked people around the world for their preferences regarding the moral choices that autonomous cars will be called upon to make, so that this new technology can match human values as well as its developers’ profit motives. In an article that just appeared in the journal Nature, the following remarks describe their ambitious objective.

With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge we deployed the Moral Machine, an on-line experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions [involving individual moral preferences] in ten languages from millions of people in 233 different countries and territories. Here we describe the results of this experiment…

Never in the history of humanity have we allowed a machine to autonomously decide who shall live and who shall die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theater of military operations; it will happen in the most mundane aspect of our lives, everyday transportation.  Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers who will regulate them.

For a sense of the moral guidance the Experiment was seeking, think of an autonomous car that is about to crash but cannot save everyone in its path. Which pre-programmed trajectory should it choose? One which injures (or kills) two elderly people while sparing a child? One which spares a pedestrian who is waiting to cross safely while injuring (or killing) a jaywalker? You see the kinds of moral decisions we will be asking these cars to make. If peoples’ moral preferences can be taken into account beforehand, the public might be able to recognize “the human face” in a new technology from the beginning instead of having to attempt damage control once that technology is in use.

Strong Preferences, Weaker Preferences

To collect its data, the Moral Machine Experiment asked millions of global volunteers to consider accident scenarios that involved 9 different moral preferences: sparing humans (versus pets); staying on course (versus swerving); sparing passengers (versus pedestrians); sparing more lives (versus fewer lives); sparing men (versus women); sparing the young (versus the old); sparing pedestrians who cross legally (versus jaywalkers); sparing the fit (versus the less fit); and sparing those with higher social status (versus lower social status).

The challenges behind the Experiment were daunting and much of the article is about how the researchers conducted their statistical analysis. Notwithstanding these complexities, three “strong” moral preferences emerged globally, while certain “weaker” but statistically relevant preferences suggest the need for modifications in algorithmic programming among the three different “country clusters” that the Experiment identified.

The vast majority of participants in the Experiment expressed “strong” moral preferences for sparing human lives over animals, for saving as many lives as possible if an accident is imminent, and for sparing young lives wherever possible.

Among “weaker” preferences, there were variations among countries that clustered in the Northern (Europe and North America), Eastern (most of Asia) and Southern (including Latin America) Hemispheres. For example, the preference for sparing young (as opposed to old) lives is much less pronounced in countries in the Eastern cluster and much higher among the Southern cluster. Countries that are poorer and have weaker enforcement institutions are more tolerant than richer and more law abiding countries of people who cross the street illegally. Differences between hemispheres might result in adjustments to the decision-making algorithms of self-driving cars that are operated there.

When companies have data about what people view as “good” or “bad”, “better” or “worse” while a new technology is being developed, these preferences can improve the likelihood that moral harms will be identified and minimized beforehand.
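To make this concrete, here is a deliberately toy sketch of how surveyed preferences like these could be encoded as weights inside a crash-trajectory algorithm. It is my own illustration, not code from the Moral Machine Experiment; every weight, attribute name and scenario below is hypothetical.

```python
# Hypothetical weights reflecting "spare" preferences; magnitudes are invented.
PREFERENCE_WEIGHTS = {
    "human": 4.0,   # spare humans (versus pets)
    "young": 2.0,   # spare the young (versus the old)
    "lawful": 1.0,  # spare legal crossers (versus jaywalkers)
}

def harm_score(person):
    """The weighted moral cost of harming this person (higher is worse)."""
    return sum(weight for attr, weight in PREFERENCE_WEIGHTS.items()
               if person.get(attr, False))

def choose_trajectory(trajectories):
    """Pick the trajectory whose victims carry the lowest total moral cost."""
    return min(trajectories,
               key=lambda t: sum(harm_score(p) for p in t["harms"]))

# One of the quandaries from the text: swerve into a jaywalking adult,
# or stay on course toward a child who is crossing legally?
swerve = {"name": "swerve", "harms": [{"human": True}]}
stay = {"name": "stay", "harms": [{"human": True, "young": True, "lawful": True}]}

print(choose_trajectory([swerve, stay])["name"])  # prints "swerve"
```

Under these invented weights the car swerves, because the jaywalking adult carries a lower total cost (4.0) than the lawfully crossing child (7.0). The Experiment’s point is that numbers like these should come from broad public input, and they might differ across the country clusters described above.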

Gridlock

Another way to help determine what the future should look like and how new technologies should operate is to listen to what today’s Cassandras are saying. Following their commentary and grappling with their concerns removes some of the dazzle in our hopes and grounds them more firmly in reality early on.

It lets us consider how, say, an autonomous car will fit into the ways that we live, work and interact with one another today—what we will lose as well as what we are likely to gain. For example, what industries will they change? How will our cities be different than they are now? Will a proliferation of these vehicles improve the quality of our interactions with one another or simply reinforce how isolated many of us are already in a car-dominated culture?

The Atlantic magazine hosts a regular podcast called “Crazy Genius” that asks “big questions” and draws “provocative conclusions about technology and culture.” (Many thanks to reader Matt K. for telling me about it!) The podcasts are free and can be easily accessed through services like iTunes and Spotify.

A Crazy Genius episode from September called “How Self-Driving Cars Could Ruin the American City” included interviews with two experts who are looking into the future of autonomous vehicles and are alarmed for reasons beyond these vehicles’ decision-making abilities. One is Robin Chase, the co-founder of Zipcar. The “hellscape” she forecasts involves everyone using self-driving cars as they become cheaper than current alternatives to do our errands, provide 10-minute deliveries and produce even more sedentary lifestyles than we have already, while clogging our roadways with traffic.

Without smart urban planning, the result will be infernal congestion, choking every city and requiring local governments to lay ever-more pavement down to service American automania.

Eric Avila is an historian at UCLA who sees self-driving cars in some of the same ways that he views the introduction of the interstate highway system in the 1950s. While these new highways provided autonomous access to parts of America that had not been accessible before, there was also a dark side. 48,000 miles of new highway stimulated interstate trade and expanded development but they also gutted urban neighborhoods, allowing the richest to take their tax revenues with them as they fled to the suburbs. “Mass transit systems [and] streetcar systems were systematically dismantled. There was national protest in diverse urban neighborhoods throughout the entire nation,” Avila recalls, and a similar urban upheaval may follow the explosion of autonomous vehicles.

Like highways, self-driving cars are not only cars; they are also infrastructure. According to Avila, if we want to avoid past mistakes, all of the stakeholders in this new technology will need to think about how to make downtown areas more livable for humans instead of simply more efficient for these new machines. To reduce congestion, this may involve taxing autonomous vehicle use during certain times of day, limiting the number of vehicles in heavily traveled areas, regulating companies that operate fleets of self-driving cars, and capping private car ownership. Otherwise, the proliferation of cars and traffic could make most of our cities unlivable.

Once concerns like Chase’s and Avila’s are publicized, data about the public’s preferences (what’s better, what’s worse?) in these regards can be gathered just as they were in the Moral Machine Experiment. Earlier in my career, I ran a civic organization that attempted to improve the quality of Philadelphia city government by polling citizens anonymously about their priorities and concerns. While the organization did not survive the election of a reform-minded administration, information about the public’s preferences is always available when we champion the value of collecting it. All that’s necessary is sharing the potential problems and concerns that have been raised and asking people in a reliable and transparent manner how they’d prefer to address them.

In order to avoid the harms from technology platforms that we are facing today, the tech companies that are bringing us their marvels need to know far more about their intended users’ moral preferences than they seem interested in learning today. With the right tools to be heard at our fingertips, we can all be involved in defining our futures.

This post is adapted from my November 4, 2018 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: autonomous vehicles, Crazy Genius podcast, ethics, future, future shock, machine ethics, Moral Machine Experiment, moral preferences, priorities, smart cars, tech, technology, values

Looking Out For the Human Side of Technology

October 28, 2018 By David Griesing

Maintaining human priorities in the face of new technologies always feels like “a rearguard action.” You struggle to prevent something bad from happening even when it seems like it may be too late.

The promise of the next tool or system intoxicates us. Smartphones, social networks, gene splicing. It’s the super-computer at our fingertips, the comfort of a boundless circle of friends, the ability to process massive amounts of data quickly or to short-cut labor-intensive tasks, the opportunity to correct genetic mutations and cure disease. We’ve already accepted these promises before we pause to consider their costs—so it always feels like we’re catching up and may not have done so in time.

When you’re dazzled by possibility and the sun is in your eyes, who’s thinking “maybe I should build a fence?”

The future that’s been promised by tech giants like Facebook is not “the win-win” that we thought it was. Their primary objectives are to serve their financial interests—those of their founder-owners and other shareholders—by offering efficiency benefits like convenience and low cost to the rest of us. But as we’ve belatedly learned, they’ve taken no responsibility for the harms they’ve also caused along the way, including exploitation of our personal information, the proliferation of fake news and jeopardy to democratic processes, as I argued here last week.

Technologies that are not associated with particular companies also run with their own promise until someone gets around to checking them—a technology like artificial intelligence, or AI, for example. From an ethical perspective, we are usually playing catch-up ball with them too. If there’s a buck to be made or a world to transform, the discipline to ask “but should we?” always seems like getting in the way of progress.

Because our lives and work are increasingly impacted, the stories this week throw additional light on the technology juggernaut that threatens to overwhelm us and on our “rearguard” attempts to tame it with our human concerns.

To gain a fuller appreciation of the problem regarding Facebook, a two-part Frontline documentary will be broadcast this week that is devoted to what one reviewer calls “the amorality” of the company’s relentless focus on adding users and compounding ad revenues while claiming to create the on-line “community” that all of us should want in the future. (The show airs tomorrow, October 29 at 9 p.m. and on Tuesday, October 30 at 10 p.m. EST on PBS.)

Frontline’s reporting covers Russian interference in past and current election cycles, Facebook’s role in whipping Myanmar’s Buddhists into a frenzy over the country’s Rohingya minority, and how strongmen like Rodrigo Duterte in the Philippines have been manipulating the site to achieve their political objectives. Facebook CEO Mark Zuckerberg’s limitations as a leader are explored from a number of directions, but none as compelling as his off-screen impact on the five Facebook executives who were “given” to James Jacoby (the documentary’s director, writer and producer) to answer his questions. For the reviewer:

That they come off like deer in Mr. Jacoby’s headlights is revealing in itself. Their answers are mealy-mouthed at best, and the defensive posture they assume, and their evident fear, indicate a company unable to cope with, or confront, the corruption that has accompanied its absolute power in the social media marketplace.

You can judge for yourself. You can also ponder whether this is like holding a gun manufacturer liable when one of its guns is used to kill somebody.  I’ll be watching “The Facebook Dilemma” for what it has to say about a technology whose benefits have obscured its harms in the public mind for longer than it probably should have. But then I remember that Facebook barely existed ten years ago. The most important lesson from these Frontline episodes may be how quickly we need to get the stars out of our eyes after meeting these powerful new technologies if we are to have any hope of avoiding their most significant fallout.

Proceed With Caution

I was also struck this week by Apple CEO Tim Cook’s explosive testimony at a privacy conference organized by the European Union.

Not only was Cook bolstering his own company’s reputation for protecting Apple users’ personal information, he was also taking aim at competitors like Google and Facebook for implementing a far more harmful business plan, namely, selling user information to advertisers, reaping billions in ad dollar revenues in exchange, and claiming the bargain is providing their search engine or social network to users for “free.” This is some of what Cook had to say to European regulators this week:

Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.

These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.

Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.

“Weaponized” technology delivered with “military efficiency.” “A data-industrial complex.” One of the benefits of competition is that rivals call you out, while directing unwanted attention away from themselves. One of my problems with tech giant Amazon, for example, is that it lacks a neck-and-neck rival to police its business practices, so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.

If data privacy were as paramount to Apple as it sounded this week, it would be impossible to reconcile Apple’s receiving more than $5 billion a year from Google to make it the default search engine on all Apple devices. However complicit Apple may be in today’s tech bargains, it pushed its rivals pretty hard this week to modify their business models and become less cynical about their use of our personal data as the focus on regulatory oversight moves from Europe to the U.S.

Keeping Humans in the Tech Equation

Technologies that aren’t proprietary to a particular company but are instead used across industries require getting over additional hurdles to ensure that they are meeting human needs and avoiding technology-specific harms for users and the rest of us. This week, I was reading up on a positive development regarding artificial intelligence (AI) that only came about because serious concerns were raised about the transparency of AI’s inner workings.

AI’s ability to solve problems (from processing big data sets to automating steps in a manufacturing process or tailoring a social program for a particular market) is only as good as the algorithms it uses. Given concern about personal identity markers such as race, gender and sexual preference, you may already know that an early criticism of artificial intelligence was that the author of an algorithm could unwittingly build her own biases into it, leading to discriminatory and other anti-social results. As a result, various countermeasures are being undertaken to minimize grounding these kinds of biases in AI code. With that in mind, I read a story this week about another systemic issue with AI: its “explainability.”

It’s the so-called “black box” problem. If users of systems that depend on AI don’t know how they work, they won’t trust them. Unfortunately, one of the prime advantages of AI is that it solves problems that are not easily understood by users, which presents the quandary that AI-based systems might need to be “dumbed-down” so that the humans using them can understand and then trust them. Of course, no one is happy with that result.

A recent article in Forbes describes the trust problem that users of machine-learning systems experience (“interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control”) along with some of the experts who have been feeling that anxiety (cancer specialists who agreed with a “Watson for Oncology” system when it confirmed their judgments but thought it was wrong when it failed to do so because they couldn’t understand how it worked).

In a positive development, a U.S. Department of Defense agency called DARPA (or Defense Advanced Research Projects Agency) is grappling with the explainability problem. Says David Gunning, a DARPA program manager:

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.

In other words, these systems will get better at explaining themselves to their users, thereby overcoming at least some of the trust issue.

DARPA is investing $2 billion in what it calls “third-wave AI systems…where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” according to Gunning. At least with the future of warfare at stake, a problem like “trust” in the human interface appears to have stimulated a solution. At some point, all machine-learning systems will likely be explaining themselves to the humans who are trying to keep up with them.

Moving beyond AI, I’d argue that there is often as much “at stake” as successfully waging war when a specific technology is turned into a consumer product that we use in our workplaces and homes.

While there is heightened awareness today about the problems that Facebook poses, few were raising these concerns even a year ago despite their toxic effects. With other consumer-oriented technologies, there are a range of potential harms where little public dissent is being voiced despite serious warnings from within and around the tech industry. For example:

– how much is our time spent on social networks—in particular, how these networks reinforce or discourage certain of our behaviors—literally changing who we are?  
 
– since our kids may be spending more time with their smart phones than with their peers or family members, how is their personal development impacted, and what can we do to put this rabbit even partially back in the hat now that smart phone use seems to be a part of every child’s rite of passage into adulthood?
 
– will privacy and surveillance concerns become more prevalent when we’re even more surrounded than we are now by “the internet of things” and as our cars continue to morph into monitoring devices—or will there be more of an outcry for reasonable safeguards beforehand? 
 
– what are employers learning about us from our use of technology (theirs as well as ours) in the workplace and how are they using this information?

The technologies that we use demand that we understand their harms as well as their benefits. I’d argue that we need to become more proactive about voicing our concerns and using the tools at our disposal (including the political process) to insist that company profit and consumer convenience are not the only measures of a technology’s impact.

Since the invention of the printing press half a millennium ago, it has always been hard but necessary to catch up with technology and to try and tame its excesses as quickly as we can.

This post was adapted from my October 28, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning Tagged With: Amazon, Apple, ethics, explainability, facebook, Google, practical ethics, privacy, social network harms, tech, technology, technology safeguards, the data industrial complex, workplace ethics

Confronting the Future of Work Together

October 21, 2018 By David Griesing Leave a Comment

Some of us believe that entrepreneurs can lead us to a better future through their drive and innovation. Steve Jobs, Bill Gates and, at least until recently, Elon Musk fill these bubbles of belief. We’ve also come to believe that these masters of business and the organizations they lead can bring us into the warmer glow of what’s good for us—and much of the rest of the world believes in this kind of progress too. Amazon brings us the near perfect shopping experience, Google a world of information at our fingertips, Uber a ride whenever we want one, Instagram pictures that capture what everyone is seeing, the Gates Foundation the end of disease as we know it…

In the process, we’ve also become a little (if not a lot) more individualist and entrepreneurial ourselves, with some of that mindset coming from the American frontier. We’re more likely to want to “go it alone” today, criticize those who lack the initiative to solve their own problems, be suspicious of the government’s helping hand, and turn away from the hard work of building communities that can problem-solve together. In the meantime, Silicon Valley billionaires will attend to the social ills we are no longer able to address through the political process with their insight and genius.

In this entrepreneurial age, has politics become little more than a self-serving proposition that gives us tax breaks and deregulation or is it still a viable way to pursue “what all of us want and need” in order to thrive in a democratic society?

Should we meet our daily challenges by emulating our tech titans while allowing them to improve our lives in the ways they see fit, or should we instead be strengthening our communities and solving a different set of problems that we all share together?

In many ways, the quality of our lives and our work in the future depends on the answers to these questions, and I’ve been reading two books this week that approach them from different angles, one that came out last year (“Earning the Rockies: How Geography Shapes America’s Role in the World” by Robert D. Kaplan) and the other a few weeks ago (“Winners Take All: The Elite Charade of Changing the World” by Anand Giridharadas). I recommend both of them.

There are too many pleasures in Kaplan’s “Earning the Rockies” to do justice to them here, but he makes a couple of observations that provide a useful frame for looking into our future as Americans.  As a boy, Kaplan traveled across the continent with his dad and gained an appreciation for the sheer volume of this land and how it formed the American people that has driven his commentary ever since.  In 2015, he took that road trip again, and these observations follow what he saw between the East (where he got in his car) and the West (when his trip was done):

Frontiers [like America’s] test ideologies like nothing else. There is no time for the theoretical. That, ultimately, is why America has not been friendly to communism, fascism, or other, more benign forms of utopianism. Idealized concepts have rarely taken firm root in America and so intellectuals have had to look to Europe for inspiration. People here are too busy making money—an extension, of course, of the frontier ethos, with its emphasis on practical initiative…[A]long this icy, unforgiving frontier, the Enlightenment encountered reality and was ground down to an applied wisdom of ‘commonsense’ and ‘self evidence.’ In Europe an ideal could be beautiful or liberating all on its own, in frontier America it first had to show measurable results.

[A]ll across America I rarely hear anyone discussing politics per se, even as CNN and Fox News blare on monitors above the racks of whisky bottles at the local bar…An essay on the online magazine Politico captured the same mood that I found on the same day in April, 2015 that the United States initiated an historic nuclear accord with Iran, the reporter could not find a single person at an Indianapolis mall who knew about it or cared much…This is all in marked contrast to Massachusetts where I live, cluttered with fine restaurants where New Yorkers who own second homes regularly discuss national and foreign issues….Americans [between the coasts] don’t want another 9/11 and they don’t want another Iraq War. It may be no more complex than that. Their Jacksonian tradition means they expect the government to keep them safe and hunt down and kill anyone who threatens their safety…Inside these extremes, don’t bother them with details.

Moreover, practical individualism that’s more concerned about living day to day than in making a pie-in-the-sky world is not just in the vast fly-over parts of America, but also well-represented on the coasts and (at least according to this map) in as much as half of Florida.

What do Kaplan’s heartland Americans think of entrepreneurs with their visions of social responsibility who also have the practical airs of frontier-conquering individualism?

What do the coastal elites who are farther from that frontier and more inclined towards ideologies for changing the world think about these technocrats, their companies and their solutions to our problems?

What should any of us think about these Silicon Valley pathfinders and their insistence on using their wealth and awesome technologies to “do good” for all of our sakes–even though we’ve never asked them to?

Who should be making our brave new world, us or them?

These tech chieftains and their increasingly dominant companies all live in their own self-serving bubbles, according to Giridharadas in “Winners Take All.” (The quotes below are from an interview he gave about it last summer and an online book review that appeared in Knowledge@Wharton this week).

Giridharadas first delivered his critique a couple of years ago when he spoke as a fellow at the Aspen Institute, a regular gathering of America’s intellectual elite. He argued that these technology companies believe that all the world’s problems can be solved by their entrepreneurial brand of “corporate social responsibility,” and that their zeal for their brands and for those they want to help can be a “win-win” for both. In other words, what’s good for Facebook (Google, Amazon, Twitter, Uber, AirBnB, etc.) is also good for everyone else. The problem, said Giridharadas in his interview, is that while these companies are always taking credit for the efficiencies and other benefits they have brought, they take no responsibility whatsoever for the harms:

Mark Zuckerberg talks all the time about changing the world. He seldom calls Facebook a company — he calls it a “community.” They do these things like trying to figure out how to fly drones over Africa and beam free internet to people. And in various other ways, they talk about themselves as building the new commons of the 21st century. What all that does is create this moral glow. And under the haze created by that glow, they’re able to create a probable monopoly that has harmed the most sacred thing in America, which is our electoral process, while gutting the other most sacred thing in America, our free press.

Other harms pit our interests against theirs, even when we don’t fully realize it. Unlike a democratic government that is charged with serving every citizen’s interest, “these platform monopolists allow everyone to be part of their platform but reap the majority of benefits for themselves, and make major decisions without input from those it will affect.” According to Giridharadas, the tech giants are essentially “Leviathan princes” who treat their users like so many “medieval peasants.”

In their exercise of corporate social responsibility, there is also a mismatch between the solutions that the tech entrepreneurs can and want to bring and the problems we have that need to be solved. “Tending to the public welfare is not an efficiency problem,” Giridharadas says in his interview. “The work of governing a society is tending to everybody. It’s figuring out universal rules and norms and programs that express the value of the whole and take care of the common welfare.” By contrast, the tech industry sees the world more narrowly. For example, the fake news controversy led Facebook not to a comprehensive solution for providing reliable information but to what Giridharadas calls “the Trying-to-Solve-the-Problem-with-the-Tools-that-Caused-It” quandary.

The Tech Entrepreneur Bubble

Notwithstanding these realities, ambitious corporate philanthropy provides the tech giants with useful cover—a rationale for us “liking” them however much they are also causing us harm. Giridharadas describes their two-step like this:

What I started to realize was that giving had become the wingman of taking. Generosity had become the wingman of injustice. “Changing the world” had become the wingman of rigging the system…[L]ook at Andrew Carnegie’s essay “Wealth”. We’re now living in a world created by the intellectual framework he laid out: extreme taking, followed by and justified by extreme giving.

Ironically, the heroic model of the benevolent entrepreneur is sustained by our comfort with elites “who always seem to know better” on the right and left coasts of America and with rugged individualists who have managed to make the most money in America’s heartland. These leaders and their companies combine utopian visions based on business efficiency with the aura of success that comes with creating opportunities on the technological frontier. Unfortunately, their approach to social change also tends to undermine the political debate that is necessary for the many problems they are not attempting to solve.

In Giridharadas’ mind, there is no question that these social responsibility initiatives “crowd out the public sector, further reducing both its legitimacy and its efficacy, and replace civic goals with narrower concerns about efficiency and markets.” We get not only the Bezos, Musk or Gates vision of social progress but also the further sidelining of public institutions like Congress, and our state and local governments. A far better way to create the lives and work that we want in the future is by reinvigorating our politics.

* * *

Robert Kaplan took another hard look at the land that has sustained America’s spirit until now. Anand Giridharadas challenged the tech elites that are intent on solving our problems in ways that serve their own interests. One sees an opportunity, the other an obstacle to the future that they want. I don’t know exactly how the many threads exposed by these two books will come together and help us to confront the daunting array of challenges we are facing today, including environmental change, job loss through automation, and the failure to understand new technologies (like social media platforms, artificial intelligence and genetic engineering) before they start harming us. Still, I think at least two of their ideas will be critical in the days ahead.

The first is our need to be skeptical of the bubbles that limit every elite’s perspective, becoming more knowledgeable as individuals and citizens about the problems we face and their possible solutions. It is resisting the temptation to give over that basic responsibility to people or companies that keep telling us they are smarter, wiser or more successful than we are and that all we have to do is trust them given all of the wonderful things they are doing for us. We need to identify our shared dreams and figure out how to realize them instead of giving that job to somebody else.

The second idea involves harnessing America’s frontier spirit one more time. There is something about us “as a people” that is motivated by the pursuit of practical, one-foot-in-front-of-the-other objectives (instead of ideological ones) and that trusts in our ability to claim the future that we want. Given Kaplan’s insights and Giridharadas’ concerns, we need political problem-solving on the technological frontier in the same way that we once came together to tame the American West. It’s where rugged individualism joins forces with other Americans who are confronting similar challenges.

I hope that you’ll get an opportunity to dig into these books, that you enjoy them as much as I have, and that you’ll let me know what you think about them when you do.

This post was adapted from my October 21, 2018 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Entrepreneurship Tagged With: Anand Giridharadas, frontier, future of work, Robert D. Kaplan, rugged individualism, Silicon Valley, tech, tech entrepreneurs, technology

How Stepping Back and Really Noticing Can Change Everything

October 14, 2018 By David Griesing Leave a Comment

Pieter Bruegel’s The Battle Between Carnival and Lent

I’m frequently reminded about how oblivious I am, but I had a particularly strong reminder recently. I was in a room with around 30 other people watching a documentary that we’d be discussing when it was over. Because we’d all have a chance to share our strongest impressions and it was a group I cared about, I paid particularly close attention. I even jotted down notes from time to time as something hit me. After the highly emotional end, I led off with my four strongest reactions and then listened for the next half hour while the others described what excited or troubled them. Most startling was how many of their observations I’d missed altogether.

Some of the differences were understandable; it’s why single “eyewitness” accounts are often unreliable and why we want at least 8 or 12 people on a jury sharing their observations during deliberations. No one catches everything, even when you’re watching closely and trying to be insightful later on. Still, I thought I was better at this.

Missing key details and reaching the wrong (or woefully incomplete) conclusions affects much of our work and many of our relationships outside of it. Emotion blinds us. Fear inhibits us from looking long and hard enough. Bias makes us see what we want to see instead of what’s truly there. To get better at noticing involves acknowledging each of these tendencies and making the effort to override them. In other words, it involves putting as little interference as possible between us and what’s staring us in the face.

As luck would have it, a couple of interactive challenges involving our perceptive abilities crossed my transom this week. Given how much I missed in the documentary, I decided to play with both of them to see if looking without prior agendas or other distractions actually improved my ability to notice what’s in front of me. It was also a nice way to take a break from our 24-7 free-for-all in politics. As I sat down to write to you, I thought you might enjoy a brief escape into “how much you’re noticing” too.

The Pieter Bruegel painting above–called “The Battle Between Carnival and Lent”–is currently part of the largest-ever exhibition of the artist’s work at Vienna’s Kunsthistorisches Museum. Bruegel is a giant among Northern Renaissance painters but most of his canvases are in Europe so too few of us have actually seen one, and when we have, they’ve been in books where it’s all but impossible to see what’s actually going on in them. As it turns out, we’ve been missing quite a lot.

Conveniently, the current survey of the artist’s work includes a website that’s devoted to “taking a closer look,” including how Bruegel viewed one of the great moral divides of his time:  between the anything goes spirit of Carnival (the traditional festival for ending the winter and welcoming the spring) and the tie-everything-down season of Lent (the interval of Christian fasting and penance before Good Friday and Easter). “The Battle Between Carnival and Lent” is a feast for noticing, and we’ll savor some of the highlights on its menu below.

First, though, before this week I’d never heard of people known as “super recognizers.” They’re a very small group of men and women who can see a face (or a photo of one) and, even years later, pick that face out of a crowd with startling speed and accuracy. It’s not extraordinary memory but an entirely different way of reading and later recognizing a stranger’s face.

I heard one of these super recognizers being interviewed this week about his time tracking down suspects and missing persons for Scotland Yard. His pride at bringing a remarkable skill to a valuable use was palpable–the pure joy of finding needles in a succession of haystacks. His interviewer also talked about a link to an online exercise for listeners to discover whether they too might be super recognizers. In other words, you can find out how good you are “with faces” and how well you stack up with your peers at recognizing them later on by testing your noticing skills here. Please let me know whether I’ve helped you to find a new and, from all indications, highly rewarding career. (The test’s administrators will be following up with you if you make the grade.)

Now back to Bruegel.

You can locate this central scene in “The Battle Between Carnival and Lent” in the lower middle range of the painting. Zooming in on it also reveals Bruegel’s greatest innovation as a painter. He gives us a bird’s-eye view of the full pageant of life that embraces his theme. It’s not the entire picture of “what it was like” in a Flemish town 500 years ago, but viewers had never been able to get this close to “that much of it” before.

It’s also a canvas populated by peasants and merchants as opposed to saints and nobles. They are alone or in small groups, engaged in their own distinct activities while seemingly ignoring everyone else. In the profusion of life, it’s as if we dropped into the center of any city during lunch hour to eavesdrop.

The painting’s details show a figure representing Carnival on the left. He’s fat, riding a beer barrel and wearing a meat pie as a headdress. Clearly a butcher—from the profession that enabled much of the festival’s feasting—he holds a long spit with a roasted pig as his weapon for the battle to come. Lent, on the other hand, is a grim and gaunt male figure dressed like a nun, sitting on a cart drawn by a monk and a real nun. The wagon holds traditional Lenten foods like pretzels, waffles and mussels, and Lent’s weapon of choice is an oven paddle holding a couple of fish, an apparent allusion to the parable of Jesus multiplying the loaves and the fishes for a hungry crowd. On one level then, the fight is over what we should eat at this time of year.

As the eye wanders beyond the comic joust, Carnival’s vicinity includes a tavern filled with revelers, onlookers watching a popular farce called “The Dirty Bride” (that’s surely worth a closer look!) and a procession of lepers led by a bagpiper. On the other hand, Lent’s immediate orbit shows townsfolk drawing water from the well, giving alms to the poor and going to church (their airs of generosity equally worthy of closer attention).

Not unlike our divided society today, Bruegel painted while the Reformation’s battle for souls was ongoing, but instead of taking sides he seems to take equal opportunity to mock hypocrisy, greed and gluttony wherever he found them, making this and others of his paintings among the first images of social protest since Romans scrawled graffiti on public walls 1200 years before. While earlier paintings by other artists carefully disguised any humor, Bruegel wants you to laugh with him at this spectacle of human folly.

It’s been argued that Bruegel also brings a more serious purpose to his lightheartedness, criticizing the common folk by personifying them as a married couple guided by a fool with a burning torch—an image that can be found almost in the exact center of the painting. The way they are being led suggests that they follow their distractions and baser instincts instead of reason and good judgment. Reinforcing the message is a rutting pig immediately below them (you can find more of him later), symbolizing the destruction that oblivious distraction can leave in its wake.

Everywhere else Bruegel invites his viewers to draw their own conclusions. You can follow this link and notice for yourself the remarkable details of this painting along with others by the artist.  Navigate the way that you would on a Google Map, by clicking the magnifying glass (+) or (-) to zoom in and out, while dragging your cursor to move around the canvas. Be sure to let me know whether you happen upon any of the following during your exploration (the circle dance, the strangely-clad gamblers with their edible game board, the man emptying a bucket on the head of a drunk) and whether you think Carnival or Lent seems to have won the battle.

Before wishing you a good week, I have a final recommendation that brings what we notice (say in a work of art) back to what we notice or fail to notice about one another every day.

The movie Museum Hours is about the relationship that develops between an older man and woman shortly after they meet. Johann used to be a road manager for a hard-rock band but now is a security guard at the same museum in Vienna that houses the Bruegel paintings. Anne has traveled from Canada to visit a cousin who’s been hospitalized and meets Johann as she traverses a strange city. During her visit, he becomes her interpreter, advocate for her cousin’s medical care, and eventually her tour guide. But just as he finds “the spectacle of spectatorship” at the museum “endlessly interesting” as he takes it in every day, they both find the observations that they make about one another in the city’s coffee shops and bistros surprising and comforting.

Museum Hours is a movie about the rich details that are often overlooked in our exchanges with one another and that a super observer like Bruegel brings to his examination of everyday life. One of the film’s many reveals takes place in a scene between a tour guide at the museum (who is full of her own insights) and a group of visitors with their unvarnished interpretations in front of  “The Battle Between Carnival and Lent” and other Bruegel paintings. You can view that film clip here, and ask yourself whether the guide is helping the visitors to see what is in front of them or diverting their attention away from it.

As we shuttle between two adults in deepening conversation and very different kinds of exchanges across Vienna, Museum Hours asks several questions, including what any of us hopes to gain from looking at famous paintings on the walls of a museum. As one of the movie’s reviewers wondered:

“Is it to look at fancy paintings and feel cultured, or is it to experience something more direct: to dare to unsheathe oneself of one’s expectations and inhibitions, and truly embrace what a work of art can offer? And then, how could one carry that open mindset to embrace all of life itself? With patient attention and quiet devotion, these are challenges that this film dares to tackle.”

That much open-mindedness is a heady prescription, and probably impossible to manage. But sometimes it’s good to be reminded about how much we’re missing, to remove at least some of our blinders, and to discover what we can still manage to notice when we try.

Note: this post was adapted from my October 14, 2018 Newsletter.

Filed Under: *All Posts, Continuous Learning, Daily Preparation, Using Humor Effectively, Work & Life Rewards Tagged With: bias, Bruegel, distraction, Museum Hours, noticing, perception, seeing clearly, skill of noticing, super recognizers, the Battle Between Carnival and Lent


Copyright © 2026 David Griesing. All Rights Reserved.
