David Griesing | Work Life Reward Author | Philadelphia


Choosing a Future For Self-Driving Cars

November 5, 2018 By David Griesing

It looks pretty fine, doesn’t it?

You’ll no longer need a car of your own because this cozy little pod will come whenever you need it. All you’ll have to do is to buy a 30- or 60-ride plan and press “Come on over and get me” on your phone.

You can’t believe how happy you’ll be to have those buying or leasing, gas, insurance and repair bills behind you, or to no longer have a monstrosity taking up space and waiting for those limited times when you’ll actually be driving it. Now you’ll be able to get where you need to go without any of the “sunk costs” because it’ll be “pay as you go.”

And go you will. This little pod will transport you to work or the store, to pick up your daughter or your dog while you kick back in comfort. It will be an always on-call servant that lets you stay home all weekend, while it delivers your groceries and take-out orders, or brings you a library book or your lawn mower from the repair shop. You’ll be the equivalent of an Amazon Prime customer who can have nearly every material need met wherever you are—but instead of “same day service,” you might have products and services at your fingertips in minutes because one of these little pods will always be hovering around to bring them to you. Talk about immediate gratification!

They will also drive you to work and be there whenever you need a ride home. In a fantasy commute, you’ll have time to unwind in comfort with your favorite music or by taking a nap. Having one of these pods whenever you want one will enable you to work from different locations, to have co-workers join you when you’re working from home, and to work while traveling instead of paying attention to the road. You can’t believe how much your workday will change.

Doesn’t all this money saving, comfort, convenience and freedom sound too good to be true? Well, let’s step back for a minute.

We thought Facebook’s free social network and Amazon’s cheap and convenient take on shopping were unbelievably wonderful too—and many of us still do. So wonderful that we built them into the fabric of our lives in no time. In fact, we quickly claim new comforts, conveniences and cost savings as if we’ve been entitled to them all along. It’s only as the dazzle of these new technology platforms begins to fade into “taking them for granted” that we also begin to wonder (in whiffs of nostalgia and regret? in concerns about their unintended consequences?) about what we might have given up by accepting them in the first place.

Could it be:

-the loss of chunks of our privacy to advertisers and data-brokers who are getting better all the time at manipulating our behavior as consumers and citizens;

-the gutting of our Main Streets of brick & mortar retail, like book and hardware stores, and the attendant loss of centers-of-gravity for social interaction and commerce within communities; or

-the elimination of entry-level and lower-skilled jobs and of entire job-markets to automation and consolidation, the jobs you had as a teenager or might do again as you’re winding down, with no comparable work opportunities to replace them?

Were the efficiency, comfort and convenience of these platforms as “cost-free” as they were cracked up to be? Is Facebook’s and Amazon’s damage already done and largely beyond repair? Have tech companies like them been defining our future or have we?

Many of us already depend on ride-sharing companies like Uber and Lyft. They are the harbingers of a self-driving vehicle industry that promises to disrupt our lives and work in at least the following ways. They will largely eliminate the need to own a car. They will transform our transportation systems, impacting public transit, highways and bridges. They will streamline how goods and services are moved in terms of logistics and delivery. And in the process, they will change how the entire “built environment” of urban centers, suburbs, and outer-ring communities will look and function, including where we’ll live and how we’ll work. Because we are in many ways “a car-driven culture,” self-driving vehicles will impact almost everything that we currently experience on a daily basis.

That’s why it is worth all of us thinking about this future before it arrives.

Our Future Highways

One way to help determine what the future should look like and how it should operate is to ask people—lots of them—what they’d like to see and what they’re concerned about. In fact, it’s an essential way to get public buy-in to new technology before some tech company’s idea of that future is looking us in the eye, seducing us with its charms, and hoping we won’t notice its uglier parts.

When it comes to self-driving cars, one group of researchers is seeking informed buy-in by using input from the public to influence the drafting of the decision-making algorithms behind these vehicles. In the so-called Moral Machine Experiment, these researchers asked people around the world for their preferences regarding the moral choices that autonomous cars will be called upon to make, so that this new technology can match human values as well as its developers’ profit motives. In an article that just appeared in the journal Nature, the following remarks describe their ambitious objective.

With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge we deployed the Moral Machine, an on-line experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions [involving individual moral preferences] in ten languages from millions of people in 233 different countries and territories. Here we describe the results of this experiment…

Never in the history of humanity have we allowed a machine to autonomously decide who shall live and who shall die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theater of military operations; it will happen in the most mundane aspect of our lives, everyday transportation.  Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers who will regulate them.

For a sense of the moral guidance the Experiment was seeking, think of an autonomous car that is about to crash but cannot save everyone in its path. Which pre-programmed trajectory should it choose? One which injures (or kills) two elderly people while sparing a child? One which spares a pedestrian who is waiting to cross safely while injuring (or killing) a jaywalker? You see the kinds of moral quandaries we will be asking these cars to resolve. If people’s moral preferences can be taken into account beforehand, the public might be able to recognize “the human face” in a new technology from the beginning instead of having to attempt damage control once that technology is in use.

Strong Preferences, Weaker Preferences

To collect its data, the Moral Machine Experiment asked millions of global volunteers to consider accident scenarios that involved 9 different moral preferences: sparing humans (versus pets); staying on course (versus swerving); sparing passengers (versus pedestrians); sparing more lives (versus fewer lives); sparing men (versus women); sparing the young (versus the old); sparing pedestrians who cross legally (versus jaywalkers); sparing the fit (versus the less fit); and sparing those with higher social status (versus lower social status).
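To make these trade-offs concrete, you can imagine the surveyed preferences as weights in a scoring function that ranks a car’s possible trajectories. The sketch below is purely illustrative: the weights, the scenario fields and the scoring rule are invented here, and the statistical model in the Nature paper is far more sophisticated.

```python
# Purely illustrative sketch: rank crash trajectories by weighted moral
# preferences. The weights and scenario fields are invented for this
# example; they are not taken from the Moral Machine Experiment itself.

# Hypothetical weights echoing the Experiment's "strong" global preferences:
WEIGHTS = {
    "humans_spared": 10.0,  # sparing humans (versus pets)
    "lives_spared": 5.0,    # sparing more lives (versus fewer)
    "young_spared": 3.0,    # sparing the young (versus the old)
}

def score(trajectory):
    """Higher score = more consistent with the surveyed preferences."""
    return sum(weight * trajectory.get(name, 0)
               for name, weight in WEIGHTS.items())

def choose(trajectories):
    """Pick the pre-programmed trajectory that best matches the weights."""
    return max(trajectories, key=score)

# Two hypothetical options in an imminent-accident scenario:
swerve = {"name": "swerve", "humans_spared": 1, "lives_spared": 1, "young_spared": 1}
stay = {"name": "stay on course", "humans_spared": 1, "lives_spared": 0, "young_spared": 0}

best = choose([swerve, stay])
print(best["name"])  # the option sparing more (and younger) lives wins
```

Even this toy version shows why public input matters: change the weights (say, by country or culture) and the car’s “choice” changes with them.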

The challenges behind the Experiment were daunting and much of the article is about how the researchers conducted their statistical analysis. Notwithstanding these complexities, three “strong” moral preferences emerged globally, while certain “weaker” but statistically relevant preferences suggest the need for modifications in algorithmic programming among the three different “country clusters” that the Experiment identified.

The vast majority of participants in the Experiment expressed “strong” moral preferences for sparing a life rather than staying on course, for saving as many lives as possible if an accident is imminent, and for saving young lives wherever possible.

Among “weaker” preferences, there were variations among countries that clustered in the Northern (Europe and North America), Eastern (most of Asia) and Southern (including Latin America) Hemispheres. For example, the preference for sparing young (as opposed to old) lives is much less pronounced in the Eastern cluster and much stronger in the Southern cluster. Countries that are poorer and have weaker enforcement institutions are also more tolerant of people who cross the street illegally than richer, more law-abiding countries. Differences between clusters might result in adjustments to the decision-making algorithms of self-driving cars operated there.

When companies have data about what people view as “good” or “bad”, “better” or “worse” while a new technology is being developed, these preferences can improve the likelihood that moral harms will be identified and minimized beforehand.

Gridlock

Another way to help determine what the future should look like and how new technologies should operate is to listen to what today’s Cassandras are saying. Following their commentary and grappling with their concerns removes some of the dazzle in our hopes and grounds them more firmly in reality early on.

It lets us consider how, say, an autonomous car will fit into the ways that we live, work and interact with one another today—what we will lose as well as what we are likely to gain. For example, what industries will they change? How will our cities be different than they are now? Will a proliferation of these vehicles improve the quality of our interactions with one another or simply reinforce how isolated many of us are already in a car-dominated culture?

The Atlantic magazine hosts a regular podcast called “Crazy Genius” that asks “big questions” and draws “provocative conclusions about technology and culture.” (Many thanks to reader Matt K for telling me about it!) The podcasts are free and can be easily accessed through services like iTunes and Spotify.

A Crazy Genius episode from September called “How Self-Driving Cars Could Ruin the American City” included interviews with two experts who are looking into the future of autonomous vehicles and are alarmed for reasons beyond these vehicles’ decision-making abilities. One is Robin Chase, the co-founder of Zipcar. The “hellscape” she forecasts arrives as self-driving cars become cheaper than today’s alternatives: everyone will use them to run errands and take 10-minute deliveries, lifestyles will become even more sedentary than they are already, and our roadways will clog with traffic.

Without smart urban planning, the result will be infernal congestion, choking every city and requiring local governments to lay ever-more pavement down to service American automania.

Eric Avila is an historian at UCLA who sees self-driving cars in some of the same ways that he views the introduction of the interstate highway system in the 1950s. While these new highways provided automobile access to parts of America that had not been accessible before, there was also a dark side. 48,000 miles of new highway stimulated interstate trade and expanded development, but they also gutted urban neighborhoods, allowing the richest to take their tax revenues with them as they fled to the suburbs. “Mass transit systems [and] streetcar systems were systematically dismantled. There was national protest in diverse urban neighborhoods throughout the entire nation,” Avila recalls, and a similar urban upheaval may follow the explosion of autonomous vehicles.

Like highways, self-driving cars are not only cars; they are also infrastructure. According to Avila, if we want to avoid past mistakes, all of the stakeholders in this new technology will need to think about how they can make downtown areas more livable for humans instead of simply more efficient for these new machines. To reduce congestion, this may involve taxing autonomous vehicle use during certain times of day, limiting the number of vehicles in heavily traveled areas, regulating companies that operate fleets of self-driving cars, and capping private car ownership. Otherwise, the proliferation of cars and traffic will make most of our cities unlivable.

Once concerns like Chase’s and Avila’s are publicized, data about the public’s preferences (what’s better, what’s worse?) in these regards can be gathered just as they were in the Moral Machine Experiment. Earlier in my career, I ran a civic organization that attempted to improve the quality of Philadelphia city government by polling citizens anonymously about their priorities and concerns. While the organization did not survive the election of a reform-minded administration, information about the public’s preferences is always available when we champion the value of collecting it. All that’s necessary is sharing the potential problems and concerns that have been raised and asking people in a reliable and transparent manner how they’d prefer to address them.

In order to avoid the harms from technology platforms that we are facing today, the tech companies that are bringing us their marvels need to know far more about their intended users’ moral preferences than they seem interested in learning about today. With the right tools to be heard at our fingertips, we can all be involved in defining our futures.

This post is adapted from my November 4, 2018 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: autonomous vehicles, Crazy Genius podcast, ethics, future, future shock, machine ethics, Moral Machine Experiment, moral preferences, priorities, smart cars, tech, technology, values

Looking Out For the Human Side of Technology

October 28, 2018 By David Griesing

Maintaining human priorities in the face of new technologies always feels like “a rearguard action.” You struggle to prevent something bad from happening even when it seems like it may be too late.

The promise of the next tool or system intoxicates us. Smart phones, social networks, gene splicing.  It’s the super-computer at our fingertips, the comfort of a boundless circle of friends, the ability to process massive amounts of data quickly or to short-cut labor intensive tasks, the opportunity to correct genetic mutations and cure disease. We’ve already accepted these promises before we pause to consider their costs—so it always feels like we’re catching up and may not have done so in time.

When you’re dazzled by possibility and the sun is in your eyes, who’s thinking “maybe I should build a fence?”

The future that’s been promised by tech giants like Facebook is not “the win-win” that we thought it was. Their primary objectives are to serve their financial interests—those of their founder-owners and other shareholders—by offering efficiency benefits like convenience and low cost to the rest of us. But as we’ve belatedly learned, they’ve taken no responsibility for the harms they’ve also caused along the way, including exploitation of our personal information, the proliferation of fake news and jeopardy to democratic processes, as I argued here last week.

Technologies that are not associated with particular companies also run with their own promise until someone gets around to checking them–a technology like artificial intelligence or AI for example. From an ethical perspective, we are usually playing catch up ball with them too. If there’s a buck to be made or a world to transform, the discipline to ask “but should we?” always seems like getting in the way of progress.

Because our lives and work are increasingly impacted, this week’s stories throw additional light on the technology juggernaut that threatens to overwhelm us and on our “rearguard” attempts to tame it with our human concerns.

To gain a fuller appreciation of the problem regarding Facebook, a two-part Frontline documentary will be broadcast this week that is devoted to what one reviewer calls “the amorality” of the company’s relentless focus on adding users and compounding ad revenues while claiming to create the on-line “community” that all of us should want in the future. (The show airs tomorrow, October 29 at 9 p.m. and on Tuesday, October 30 at 10 p.m. EST on PBS.)

Frontline’s reporting covers Russian interference in past and current election cycles, Facebook’s role in whipping Myanmar’s Buddhists into a frenzy over its Rohingya minority, and how strongmen like Rodrigo Duterte in the Philippines have been manipulating the site to achieve their political objectives. Facebook CEO Mark Zuckerberg’s limitations as a leader are explored from a number of directions, but none as compelling as his off-screen impact on the five Facebook executives who were “given” to James Jacoby (the documentary’s director, writer and producer) to answer his questions. For the reviewer:

That they come off like deer in Mr. Jacoby’s headlights is revealing in itself. Their answers are mealy-mouthed at best, and the defensive posture they assume, and their evident fear, indicates a company unable to cope with, or confront, the corruption that has accompanied its absolute power in the social media marketplace.

You can judge for yourself. You can also ponder whether this is like holding a gun manufacturer liable when one of its guns is used to kill somebody.  I’ll be watching “The Facebook Dilemma” for what it has to say about a technology whose benefits have obscured its harms in the public mind for longer than it probably should have. But then I remember that Facebook barely existed ten years ago. The most important lesson from these Frontline episodes may be how quickly we need to get the stars out of our eyes after meeting these powerful new technologies if we are to have any hope of avoiding their most significant fallout.

Proceed With Caution

I was also struck this week by Apple CEO Tim Cook’s explosive testimony at a privacy conference organized by the European Union.

Not only was Cook bolstering his own company’s reputation for protecting Apple users’ personal information, he was also taking aim at competitors like Google and Facebook for implementing a far more harmful business plan, namely, selling user information to advertisers, reaping billions in ad dollar revenues in exchange, and claiming the bargain is providing their search engine or social network to users for “free.” This is some of what Cook had to say to European regulators this week:

Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.

These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.

Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.

“Weaponized” technology delivered with “military efficiency.” “A data-industrial complex.” One of the benefits of competition is that rivals call you out, while directing unwanted attention away from themselves. One of my problems with tech giant Amazon, for example, is that it lacks a neck-and-neck rival to police its business practices, so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.

If data privacy were as paramount to Apple as it sounded this week, it would be impossible to reconcile Apple’s receiving more than $5 billion a year from Google to make it the default search engine on all Apple devices. However complicit Apple may be in today’s tech bargains, it pushed its rivals pretty hard this week to modify their business models and become less cynical about their use of our personal data as the focus of regulatory oversight moves from Europe to the U.S.

Keeping Humans in the Tech Equation

Technologies that aren’t proprietary to a particular company but are instead used across industries require getting over additional hurdles to ensure that they are meeting human needs and avoiding technology-specific harms for users and the rest of us. This week, I was reading up on a positive development regarding artificial intelligence (AI) that only came about because serious concerns were raised about the transparency of AI’s inner workings.

AI’s ability to solve problems (from processing big data sets to automating steps in a manufacturing process or tailoring a social program for a particular market) is only as good as the algorithms it uses. Given concern about personal identity markers such as race, gender and sexual preference, you may already know that an early criticism of artificial intelligence was that the author of an algorithm could unwittingly build her own biases into it, leading to discriminatory and other anti-social results. As a result, various countermeasures are being undertaken to keep these kinds of biases out of AI code. With that in mind, I read a story this week about another systemic issue with AI: its “explainability.”

It’s the so-called “black box” problem. If users of systems that depend on AI don’t know how they work, they won’t trust them. Unfortunately, one of the prime advantages of AI is that it solves problems that are not easily understood by users, which presents the quandary that AI-based systems might need to be “dumbed-down” so that the humans using them can understand and then trust them. Of course, no one is happy with that result.
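One family of responses to the “black box” problem probes a model from the outside: nudge each input and report which inputs move the output most. The sketch below is illustrative only; the “model,” its feature names and the probing rule are all invented here, and this is not how Watson for Oncology or any particular commercial system actually works.

```python
# Illustrative sketch of one simple "explainability" technique: perturb each
# input to an opaque model and rank inputs by how much they move the output.
# The model and feature names below are invented for this example.

def black_box(features):
    # Stand-in for an opaque machine-learning model a user can't inspect.
    return 0.7 * features["tumor_size"] + 0.1 * features["age"]

def explain(model, features, delta=1.0):
    """Rank features by how much a small nudge shifts the model's output."""
    base = model(features)
    impact = {}
    for name in features:
        nudged = dict(features)
        nudged[name] += delta
        impact[name] = abs(model(nudged) - base)
    return sorted(impact, key=impact.get, reverse=True)

patient = {"tumor_size": 3.0, "age": 61.0}
print(explain(black_box, patient))  # tumor size dominates this prediction
```

A ranking like this doesn’t open the box, but it gives users something to check against their own judgment, which is the beginning of trust.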

A recent article in Forbes describes the trust problem that users of machine-learning systems experience (“interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control”) along with some of the experts who have been feeling that anxiety (cancer specialists who agreed with a “Watson for Oncology” system when it confirmed their judgments but thought it was wrong when it failed to do so because they couldn’t understand how it worked).

In a positive development, a U.S. Department of Defense agency called DARPA (or Defense Advanced Research Projects Agency) is grappling with the explainability problem. Says David Gunning, a DARPA program manager:

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.

In other words, these systems will get better at explaining themselves to their users, thereby overcoming at least some of the trust issue.

DARPA is investing $2 billion in what it calls “third-wave AI systems…where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” according to Gunning. At least with the future of warfare at stake, a problem like “trust” in the human interface appears to have stimulated a solution. At some point, all machine-learning systems will likely be explaining themselves to the humans who are trying to keep up with them.

Moving beyond AI, I’d argue that there is often as much “at stake” as successfully waging war when a specific technology is turned into a consumer product that we use in our workplaces and homes.

While there is heightened awareness today about the problems that Facebook poses, few were raising these concerns even a year ago despite their toxic effects. With other consumer-oriented technologies, there are a range of potential harms where little public dissent is being voiced despite serious warnings from within and around the tech industry. For example:

– how much is our time spent on social networks—in particular, how these networks reinforce or discourage certain of our behaviors—literally changing who we are?  
 
– since our kids may be spending more time with their smart phones than with their peers or family members, how is their personal development impacted, and what can we do to put this rabbit even partially back in the hat now that smart phone use seems to be a part of every child’s rite of passage into adulthood?
 
– will privacy and surveillance concerns become more prevalent when we’re even more surrounded than we are now by “the internet of things” and as our cars continue to morph into monitoring devices—or will there be more of an outcry for reasonable safeguards beforehand? 
 
– what are employers learning about us from our use of technology (theirs as well as ours) in the workplace and how are they using this information?

The technologies that we use demand that we understand their harms as well as their benefits. I’d argue that we need to become more proactive about voicing our concerns and using the tools at our disposal (including the political process) to insist that company profit and consumer convenience are not the only measures of a technology’s impact.

Since the invention of the printing press half a millennium ago, it’s always been hard but necessary to catch up with technology and to try to tame its excesses as quickly as we can.

This post was adapted from my October 28, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning Tagged With: Amazon, Apple, ethics, explainability, facebook, Google, practical ethics, privacy, social network harms, tech, technology, technology safeguards, the data industrial complex, workplace ethics

Confronting the Future of Work Together

October 21, 2018 By David Griesing

Some of us believe that entrepreneurs can lead us to a better future through their drive and innovation. Steve Jobs, Bill Gates and, at least until recently, Elon Musk fill these bubbles of belief. We’ve also come to believe that these masters of business and the organizations they lead can bring us into the warmer glow of what’s good for us—and much of the rest of the world believes in this kind of progress too. Amazon brings us the near perfect shopping experience, Google a world of information at our fingertips, Uber a ride whenever we want one, Instagram pictures that capture what everyone is seeing, the Gates Foundation the end of disease as we know it…

In the process, we’ve also become a little (if not a lot) more individualist and entrepreneurial ourselves, with some of that mindset coming from the American frontier. We’re more likely to want to “go it alone” today, criticize those who lack the initiative to solve their own problems, be suspicious of the government’s helping hand, and turn away from the hard work of building communities that can problem-solve together. In the meantime, Silicon Valley billionaires will attend to the social ills we are no longer able to address through the political process with their insight and genius.

In this entrepreneurial age, has politics become little more than a self-serving proposition that gives us tax breaks and deregulation or is it still a viable way to pursue “what all of us want and need” in order to thrive in a democratic society?

Should we meet our daily challenges by emulating our tech titans while allowing them to improve our lives in the ways they see fit, or should we instead be strengthening our communities and solving a different set of problems that we all share together?

In many ways, the quality of our lives and our work in the future depends on the answer to these questions, and I’ve been reading two books this week that approach them from different angles, one that came out last year (“Earning the Rockies: How Geography Shapes America’s Role in the World” by Robert D. Kaplan) and the other a few weeks ago (“Winners Take All: The Elite Charade of Changing the World” by Anand Giridharadas). I recommend both of them.

There are too many pleasures in Kaplan’s “Earning the Rockies” to do justice to them here, but he makes a couple of observations that provide a useful frame for looking into our future as Americans. As a boy, Kaplan traveled across the continent with his dad and gained an appreciation for the sheer volume of this land and for how it formed the American people, an appreciation that has driven his commentary ever since. In 2015, he took that road trip again, and these observations follow what he saw between the East (where he got in his car) and the West (when his trip was done):

Frontiers [like America’s] test ideologies like nothing else. There is no time for the theoretical. That, ultimately, is why America has not been friendly to communism, fascism, or other, more benign forms of utopianism. Idealized concepts have rarely taken firm root in America and so intellectuals have had to look to Europe for inspiration. People here are too busy making money—an extension, of course, of the frontier ethos, with its emphasis on practical initiative…[A]long this icy, unforgiving frontier, the Enlightenment encountered reality and was ground down to an applied wisdom of ‘commonsense’ and ‘self evidence.’ In Europe an ideal could be beautiful or liberating all on its own, in frontier America it first had to show measurable results.

[A]ll across America I rarely hear anyone discussing politics per se, even as CNN and Fox News blare on monitors above the racks of whisky bottles at the local bar…An essay on the online magazine Politico captured the same mood that I found on the same day in April, 2015 that the United States initiated an historic nuclear accord with Iran, the reporter could not find a single person at an Indianapolis mall who knew about it or cared much…This is all in marked contrast to Massachusetts where I live, cluttered with fine restaurants where New Yorkers who own second homes regularly discuss national and foreign issues….Americans [between the coasts] don’t want another 9/11 and they don’t want another Iraq War. It may be no more complex than that. Their Jacksonian tradition means they expect the government to keep them safe and hunt down and kill anyone who threatens their safety…Inside these extremes, don’t bother them with details.

Moreover, practical individualism that’s more concerned with living day to day than with making a pie-in-the-sky world is not just in the vast fly-over parts of America; it’s also well-represented on the coasts and (at least according to this map) in as much as half of Florida.

What do Kaplan’s heartland Americans think of entrepreneurs with their visions of social responsibility who also have the practical airs of frontier-conquering individualism?

What do the coastal elites who are farther from that frontier and more inclined towards ideologies for changing the world think about these technocrats, their companies and their solutions to our problems?

What should any of us think about these Silicon Valley pathfinders and their insistence on using their wealth and awesome technologies to “do good” for all of our sakes–even though we’ve never asked them to?

Who should be making our brave new world, us or them?

These tech chieftains and their increasingly dominant companies all live in their own self-serving bubbles, according to Giridharadas in “Winners Take All.” (The quotes below are from an interview he gave about it last summer and an online book review that appeared in Knowledge@Wharton this week.)

Giridharadas first delivered his critique a couple of years ago when he spoke as a fellow at the Aspen Institute, a regular gathering of America’s intellectual elite. He argued that these technology companies believe that all the world’s problems can be solved by their entrepreneurial brand of “corporate social responsibility,” and that their zeal for their brands and for those they want to help can be a “win-win” for both. In other words, what’s good for Facebook (Google, Amazon, Twitter, Uber, AirBnB, etc.) is also good for everyone else. The problem, said Giridharadas in his interview, is that while these companies are always taking credit for the efficiencies and other benefits they have brought, they take no responsibility whatsoever for the harms:

Mark Zuckerberg talks all the time about changing the world. He seldom calls Facebook a company — he calls it a “community.” They do these things like trying to figure out how to fly drones over Africa and beam free internet to people. And in various other ways, they talk about themselves as building the new commons of the 21st century. What all that does is create this moral glow. And under the haze created by that glow, they’re able to create a probable monopoly that has harmed the most sacred thing in America, which is our electoral process, while gutting the other most sacred thing in America, our free press.

Other harms pit our interests against theirs, even when we don’t fully realize it. Unlike a democratic government that is charged with serving every citizen’s interest, “these platform monopolists allow everyone to be part of their platform but reap the majority of benefits for themselves, and make major decisions without input from those it will affect.” According to Giridharadas, the tech giants are essentially “Leviathan princes” who treat their users like so many “medieval peasants.”

In their exercise of corporate social responsibility, there is also a mismatch between the solutions that the tech entrepreneurs can and want to bring and the problems we have that need to be solved. “Tending to the public welfare is not an efficiency problem,” Giridharadas says in his interview. “The work of governing a society is tending to everybody. It’s figuring out universal rules and norms and programs that express the value of the whole and take care of the common welfare.” By contrast, the tech industry sees the world more narrowly. For example, the fake news controversy led Facebook not to a comprehensive solution for providing reliable information but to what Giridharadas calls “the Trying-to-Solve-the-Problem-with-the-Tools-that-Caused-It” quandary.

The Tech Entrepreneur Bubble

Notwithstanding these realities, ambitious corporate philanthropy provides the tech giants with useful cover—a rationale for us “liking” them however much they are also causing us harm. Giridharadas describes their two-step like this:

What I started to realize was that giving had become the wingman of taking. Generosity had become the wingman of injustice. “Changing the world” had become the wingman of rigging the system…[L]ook at Andrew Carnegie’s essay “Wealth”. We’re now living in a world created by the intellectual framework he laid out: extreme taking, followed by and justified by extreme giving.

Ironically, the heroic model of the benevolent entrepreneur is sustained by our comfort with elites “who always seem to know better” on the right and left coasts of America and with rugged individualists who have managed to make the most money in America’s heartland. These leaders and their companies combine utopian visions based on business efficiency with the aura of success that comes with creating opportunities on the technological frontier. Unfortunately, their approach to social change also tends to undermine the political debate that is necessary for the many problems they are not attempting to solve.

In Giridharadas’ mind, there is no question that these social responsibility initiatives “crowd out the public sector, further reducing both its legitimacy and its efficacy, and replace civic goals with narrower concerns about efficiency and markets.” We get not only the Bezos, Musk or Gates vision of social progress but also the further sidelining of public institutions like Congress, and our state and local governments. A far better way to create the lives and work that we want in the future is by reinvigorating our politics.

* * *

Robert Kaplan took another hard look at the land that has sustained America’s spirit until now. Anand Giridharadas challenged the tech elites that are intent on solving our problems in ways that serve their own interests. One sees an opportunity, the other an obstacle to the future that they want. I don’t know exactly how the many threads exposed by these two books will come together to help us confront the daunting array of challenges we face today, from environmental change and job loss through automation to our failure to understand the harms of new technologies (like social media platforms, artificial intelligence and genetic engineering) before they arrive. Still, I think at least two of their ideas will be critical in the days ahead.

The first is our need to be skeptical of the bubbles that limit every elite’s perspective, and to become more knowledgeable as individuals and citizens about the problems we face and their possible solutions. That means resisting the temptation to hand that basic responsibility over to people or companies who keep telling us they are smarter, wiser or more successful than we are, and that all we have to do is trust them given all of the wonderful things they are doing for us. We need to identify our shared dreams and figure out how to realize them instead of giving that job to somebody else.

The second idea involves harnessing America’s frontier spirit one more time. There is something about us “as a people” that is motivated by the pursuit of practical, one-foot-in-front-of-the-other objectives (instead of ideological ones) and that trusts in our ability to claim the future that we want. Given Kaplan’s insights and Giridharadas’ concerns, we need political problem-solving on the technological frontier in the same way that we once came together to tame the American West. It’s where rugged individualists can join forces with other Americans who are confronting similar challenges.

I hope that you’ll get an opportunity to dig into these books, that you’ll enjoy them as much as I have, and that you’ll let me know what you think about them when you do.

This post was adapted from my October 21, 2018 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Entrepreneurship Tagged With: Anand Giridharadas, frontier, future of work, Robert D. Kaplan, rugged individualism, Silicon Valley, tech, tech entrepreneurs, technology

How Stepping Back and Really Noticing Can Change Everything

October 14, 2018 By David Griesing Leave a Comment

Pieter Bruegel’s The Battle Between Carnival and Lent

I’m frequently reminded about how oblivious I am, but I had a particularly strong reminder recently. I was in a room with around 30 other people watching a documentary that we’d be discussing when it was over. Because we’d all have a chance to share our strongest impressions and it was a group I cared about, I paid particularly close attention. I even jotted down notes from time to time as something hit me. After the highly emotional end, I led off with my four strongest reactions and then listened for the next half hour while the others described what excited or troubled them. Most startling was how many of their observations I’d missed altogether.

Some of the differences were understandable; they’re why single “eyewitness accounts” are often unreliable and why we want 8 or 12 people on a jury sharing their observations during deliberations. No one catches everything, even when you’re watching closely and trying to be insightful later on. Still, I thought I was better at this.

Missing key details and reaching the wrong (or woefully incomplete) conclusions affects much of our work and many of our relationships outside of it. Emotion blinds us. Fear inhibits us from looking long and hard enough. Bias makes us see what we want to see instead of what’s truly there. Getting better at noticing involves acknowledging each of these tendencies and making the effort to override them. In other words, it involves putting as little interference as possible between us and what’s staring us in the face.

As luck would have it, a couple of interactive challenges involving our perceptive abilities crossed my transom this week. Given how much I missed in the documentary, I decided to play with both of them to see if looking without prior agendas or other distractions actually improved my ability to notice what’s in front of me. It was also a nice way to take a break from our 24-7 free-for-all in politics. As I sat down to write to you, I thought you might enjoy a brief escape into “how much you’re noticing” too.

The Pieter Bruegel painting above–called “The Battle Between Carnival and Lent”–is currently part of the largest-ever exhibition of the artist’s work at Vienna’s Kunsthistorisches Museum. Bruegel is a giant among Northern Renaissance painters, but most of his canvases are in Europe, so few of us have seen one in person; when we’ve seen them at all, it’s been in books, where it’s all but impossible to tell what’s actually going on in them. As it turns out, we’ve been missing quite a lot.

Conveniently, the current survey of the artist’s work includes a website that’s devoted to “taking a closer look,” including how Bruegel viewed one of the great moral divides of his time: between the anything-goes spirit of Carnival (the traditional festival for ending the winter and welcoming the spring) and the tie-everything-down season of Lent (the interval of Christian fasting and penance before Good Friday and Easter). “The Battle Between Carnival and Lent” is a feast for noticing, and we’ll savor some of the highlights on its menu below.

First, though, before this week I’d never heard of people known as “super recognizers.” They’re a very small group of men and women who can see a face (or a photo of one) and, even years later, pick that face out of a crowd with startling speed and accuracy. It’s not extraordinary memory but an entirely different way of reading and later recognizing a stranger’s face.

I heard one of these super recognizers being interviewed this week about his time tracking down suspects and missing persons for Scotland Yard. His pride at bringing a remarkable skill to a valuable use was palpable–the pure joy of finding needles in a succession of haystacks. His interviewer also mentioned a link to an online exercise for listeners to discover whether they too might be super recognizers. In other words, you can find out how good you are “with faces” and how well you stack up against your peers at recognizing them later on by testing your noticing skills here. Please let me know whether I’ve helped you to find a new and, from all indications, highly rewarding career. (The test’s administrators will be following up with you if you make the grade.)

Now back to Bruegel.

You can locate this central scene in “The Battle Between Carnival and Lent” in the lower middle range of the painting. Zooming in on it also reveals Bruegel’s greatest innovation as a painter. He gives us a bird’s-eye view of the full pageant of life that embraces his theme. It’s not the entire picture of “what it was like” in a Flemish town 500 years ago, but viewers had never been able to get this close to “that much of it” before.

It’s also a canvas populated by peasants and merchants as opposed to saints and nobles. They are alone or in small groups, engaged in their own distinct activities while seemingly ignoring everyone else. In the profusion of life, it’s as if we dropped into the center of any city during lunch hour to eavesdrop.

The painting’s details show a figure representing Carnival on the left. He’s fat, riding a beer barrel and wearing a meat pie as a headdress. Clearly a butcher—from the profession that enabled much of the festival’s feasting—he holds a long spit with a roasted pig as his weapon for the battle to come. Lent, on the other hand, is a grim and gaunt male figure dressed like a nun, sitting on a cart drawn by a monk and a real nun. The wagon holds traditional Lenten foods like pretzels, waffles and mussels, and Lent’s weapon of choice is an oven paddle holding a couple of fish, an apparent allusion to the parable of Jesus multiplying the loaves and the fishes for a hungry crowd. On one level then, the fight is over what we should eat at this time of year.

As the eye wanders beyond the comic joust, Carnival’s vicinity includes a tavern filled with revelers, onlookers watching a popular farce called “The Dirty Bride” (that’s surely worth a closer look!) and a procession of lepers led by a bagpiper. Meanwhile, Lent’s immediate orbit shows townsfolk drawing water from the well, giving alms to the poor and going to church (their airs of generosity equally worthy of closer attention).

Not unlike our divided society today, Bruegel painted while the Reformation’s battle for souls was ongoing. But instead of taking sides, he uses this painting to mock hypocrisy, greed and gluttony wherever he found them, making it and others of his paintings among the first images of social protest since Romans scrawled graffiti on public walls 1200 years before. While earlier artists carefully disguised any humor in their paintings, Bruegel wants you to laugh with him at this spectacle of human folly.

It’s been argued that Bruegel also brings a more serious purpose to his light-heartedness, criticizing the common folk by personifying them as a married couple guided by a fool with a burning torch—an image that can be found almost in the exact center of the painting. The way they are being led suggests that they follow their distractions and baser instincts instead of reason and good judgment. Reinforcing the message is a rutting pig immediately below them (you can find more of him later), symbolizing the destruction that oblivious distraction can leave in its wake.

Everywhere else Bruegel invites his viewers to draw their own conclusions. You can follow this link and notice for yourself the remarkable details of this painting along with others by the artist.  Navigate the way that you would on a Google Map, by clicking the magnifying glass (+) or (-) to zoom in and out, while dragging your cursor to move around the canvas. Be sure to let me know whether you happen upon any of the following during your exploration (the circle dance, the strangely-clad gamblers with their edible game board, the man emptying a bucket on the head of a drunk) and whether you think Carnival or Lent seems to have won the battle.

Before wishing you a good week, I have a final recommendation that brings what we notice (say in a work of art) back to what we notice or fail to notice about one another every day.

The movie Museum Hours is about the relationship that develops between an older man and woman shortly after they meet. Johann used to be a road manager for a hard-rock band but now is a security guard at the same museum in Vienna that houses the Bruegel paintings. Anne has traveled from Canada to visit a cousin who’s been hospitalized and meets Johann as she traverses a strange city. During her visit, he becomes her interpreter, advocate for her cousin’s medical care, and eventually her tour guide. But just as he finds “the spectacle of spectatorship” at the museum “endlessly interesting” as he takes it in every day, they both find the observations that they make about one another in the city’s coffee shops and bistros surprising and comforting.

Museum Hours is a movie about the rich details that are often overlooked in our exchanges with one another and that a super observer like Bruegel brings to his examination of everyday life. One of the film’s many reveals takes place in a scene between a tour guide at the museum (who is full of her own insights) and a group of visitors with their unvarnished interpretations in front of  “The Battle Between Carnival and Lent” and other Bruegel paintings. You can view that film clip here, and ask yourself whether the guide is helping the visitors to see what is in front of them or diverting their attention away from it.

As we shuttle between two adults in deepening conversation and very different kinds of exchanges across Vienna, Museum Hours asks several questions, including what any of us hopes to gain from looking at famous paintings on the walls of a museum. As one of the movie’s reviewers wondered:

“Is it to look at fancy paintings and feel cultured, or is it to experience something more direct: to dare to unsheathe oneself of one’s expectations and inhibitions, and truly embrace what a work of art can offer? And then, how could one carry that open mindset to embrace all of life itself? With patient attention and quiet devotion, these are challenges that this film dares to tackle.”

That much open-mindedness is a heady prescription, and probably impossible to manage. But sometimes it’s good to be reminded about how much we’re missing, to remove at least some of our blinders, and to discover what we can still manage to notice when we try.

Note: this post was adapted from my October 14, 2018 newsletter.

Filed Under: *All Posts, Continuous Learning, Daily Preparation, Using Humor Effectively, Work & Life Rewards Tagged With: bias, Bruegel, distraction, Museum Hours, noticing, perception, seeing clearly, skill of noticing, super recognizers, the Battle Between Carnival and Lent

There’s Bodily Harm in Our Moral Divisions

October 7, 2018 By David Griesing 2 Comments

Matisse cutouts as cover art for the Verve Review

On Wednesday, I met with my group of 10- and 11-year-olds who have recently lost parents or other caregivers. We gathered in small groups, huddling around a sheet that the staff provided. It was a simple page with the outline of a person on it. The kids’ aim was simple too: to write words associated with grief (like sad, angry, relieved, frustrated, lonely) on the part of the body where they experience those feelings most strongly. Surprisingly, one of the suggested words was “happy,” and B, who was sitting next to me, wrote it on one of her legs. I asked why and she said: “Because my mom is in a better place now and that makes me feel like dancing.”

The boys in the group were more interested than the girls in making the line drawings look like them. Superheroes are a big part of their worlds and larger-than-life always leaks into their self-image.  Two boys insisted that they “couldn’t put their words in” until “they looked right.” One accomplished his personalizing simply by drawing the line of a cape behind his figure. One got busy coloring all the necessary details. Another took the relationship between the empty figure and what he feels about grief more literally. He drew in what looked like a central nervous system, a kind of “everywhere feeling” that was demonstrated by ganglia and synapses instead of words.

Presenting each kid with a physical container to fill up with their feelings did even more to stimulate conversation about what they’re feeling today.  The kids are rarely this talkative about anything so heart-felt and personal.

One reason they come to the center is to spend time with other kids their age who have suffered the same kinds of loss. Just knowing you can talk about what you’ve gone through with somebody is its own relief, making this a refuge from all the other places where no one seems to know or be that interested. While their shared experience of death and its aftermath is what brings them together, most of the time few if any get around to talking about it. But today is different…

Maybe because it has them looking at their feelings, this line drawing launches a blizzard of conversation that’s so pent-up and overlapping it’s impossible to catch everything they’re saying about themselves or to one another. Maybe because their focus was the grieving body, their conversation also got physical pretty fast.

R told a story about his dream last night where his mom “appeared” in order to “warn” him about some threat I couldn’t catch because he’d already launched into a dance move from Fortnite that H, who was sitting across from him in her hijab, immediately tried to show everybody too—R shamelessly insisting all the while that his moves were a lot better than hers. The gusher of disclosures and the physicality that punctuated them brought us someplace that was even more elemental than the word choices around grief that we’d started with.

They all started talking about how “unsafe” and “afraid” they feel almost all the time, and not just because they’re trying to cope with a terrible loss in their lives. Every one of them felt that the outside world was elbowing its way into their lives and unsettling them to the core almost every single day.

They literally fell over one another talking about people they knew who had been shot, how the police never seemed to be around when you needed them, how much they worried about white police officers hurting black people, about conflict in the news, name-calling and blame, about how America’s leaders aren’t interested in protecting them either, and why “everybody’s fighting” makes them think that they need to lash out too in order to be heard.

Deaths in the family had put these kids one step closer to everything else in the world that could harm them. Key insulation between them and the street was lost when their parents succumbed to illness and violence. It caused them to fear their conflict-ridden country. A dance move from Fortnite is all that tells them that they’ve fended off the threat, or at least the video game version of it.

The only time that I felt unsafe in America was in the wake of 9/11. Emily’s mother and I protected her (she was a similar age to these kids at the time) from our anxiety, from the continuous media coverage, from our endless conversations about it, although I suppose it was hard to hide our alarm, and what must have seemed like everyone else’s, from a child who was that perceptive. On Wednesday, these kids were describing that same kind of fear, with too few (if anyone) providing enough of a buffer between them and the anxiety that keeps “getting in” from “out there.”

In looking into their faces this week, I recalled that many of the most unequal and segregated places in America also happen to be the most “liberal” and “progressive”—and that Philadelphia, where we meet, is even more so on both counts. (See Richard Florida’s The New Urban Crisis.) Philadelphia tells the rest of the country that it’s a sanctuary city for immigrants and refugees, but seems like quite the opposite to kids like mine who were born here. It wasn’t bad politicians from somewhere else who created the mess in this city or have failed to clean it up, but you can’t tell that from the outward-pointing anger of our politics. These kids internalize some of that too.

This week, a new study was released finding that the second biggest predictor of a child’s life outcomes (after family income) is whether he or she came from a single-parent household. The study’s findings were profiled in the New York Times, and by using the study’s toolbox you can key into neighborhood maps across America to predict the likely outcome for a child who is born there. The study’s authors collected data points from people born in the early 1980s and followed them into their 30s. In an interesting coincidence, the Times profile begins with a map that includes the East Falls neighborhood where I live and meet with my kids after school. Of course, some of them would be lucky to have even a single parent in their lives.

Vulnerable kids are also impacted by the fear and anxiety that raise their stress levels. A different kind of long-range study (this time in genetics) has been finding that a child’s environment not only causes certain genes to be expressed while others remain dormant, but can also change how the expressed genes work.

For example, these researchers have discovered that high levels of personal stress actually alter the genes that help regulate our responses to environmental pressures. Study participants who were abused or neglected as children were found to have different cells around this stress-regulating gene than children who were more protected and nurtured. In other words, stress changed these kids’ ability to respond to the world around them at a cellular level. (Some of those findings are summarized here.) Related research has been focusing on whether altered genes in stressed-out children can, in turn, be passed on to their own children. At the very least, the stress of “what’s happening around us” can cause lasting if not permanent bodily harm.

This week, as I sat with my kids and listened to their fears, I couldn’t help but think back to my feelings and Emily’s after 9/11 and about how the two of us would feel about America today if we could both be that much younger again. The fearful kids who danced with false bravado on Wednesday were all born a decade ago in the wake of 9/11 and into the poisonous politics of the Iraq War. They rode the downward spiral of economic recession with fewer resources, and now see a kind of naked hostility at nearly every level of public exchange. “A brutish and short” America is the only one they have ever known and their anxiety in the face of it is palpable.

My kids’ life prospects are already poor. Their genes may already be altered forever by stress. I wonder: are they merely at the extreme end of vulnerability in America, or are they also canaries in an increasingly toxic coalmine, telegraphing to us with their bodies that we should also be afraid?

+ + +

It was great to hear from several of you after recent posts about defending your reputation “as a good person” when it’s challenged. Thanks, as always, for reaching out and letting me know what you think.


This post was adapted from my October 7, 2018 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself Tagged With: economic divisions, epigenetic, genetics, life outcomes, loss of parent, parent and child, political divisions, social anxiety, social divisions, stress
