David Griesing | Work Life Reward Author | Philadelphia


Running Into the Future of Work

January 13, 2019 By David Griesing

We’ve just entered a new year and it’s likely that many of us are thinking about the opportunities and challenges we’ll be facing in the work weeks ahead. Accordingly, it seems a good time to consider what lies ahead with some forward-thinkers who’ve also been busy looking into the future of our work.
 
In an end-of-the-year article in Forbes called “Re-Humanizing Work: You, AI and the Wisdom of Elders,” Adi Gaskell links us up with three provocative speeches about where our work is headed and what we might do to prepare for it. As he’s eager to tell us, he’s well placed to know whom we should be listening to:
 
“I am a free range human who believes that the future already exists, if we know where to look. From the bustling Knowledge Quarter in London, it is my mission in life to hunt down those things and bring them to a wider audience. I am an innovation consultant and writer, and…my posts will hopefully bring you complex topics in an easy to understand form that will allow you to bring fresh insights to your work, and maybe even your life.”
 
I’ve enlisted this “free-range human” (without his knowledge, I should add) as my guest curator for this week’s post.
 
In his December article, Gaskell profiles speeches that were given fairly recently by John Hagel, co-chair of Deloitte’s innovation center, speaking at a Singularity University summit in Germany; Nobel Prize-winning economist Joseph Stiglitz, speaking at the Royal Society in London; and Chip Conley, an entrepreneur and self-proclaimed “disrupter,” speaking to employees at Google’s headquarters last October. In the discussion that follows, I’ll provide video links to their speeches so you can consider what they have to say for yourselves, along with my take-aways from some of their advice.
 
We are all running into the future of our work. As the picture above suggests, some are confidently in the lead while others of us (like that poor kid in the red shirt) may simply be struggling to keep up. It will be a time of tremendous change, risk and opportunity and it won’t be an easy run for any of us. 
 
My conviction is that forward movement at work is always steadier when you are clear about your values, ground your priorities in your actions, and remain aware of the choices (including the mistakes) that you’re making along the way. Hagel, Stiglitz and Conley are all talking about what they feel are the next necessary steps along this value-driven path.

1. The Future of Work – August 2017

When John Hagel spoke about the future of work at a German technology summit, he was right to say that most people are gripped by fear. We’re “in the bulls-eye of technology” and paralyzed by the likelihood that our jobs will either be eliminated or change so quickly that we will be unable to hold onto them. However, Hagel goes on to argue (persuasively I think) that the same machines that could replace or reduce our work roles could just as likely become “the catalysts to help us restore our humanity.”  
 
For Hagel, our fears about job elimination and the inability of most workers to avoid this looming joblessness are entirely justified.  That’s because today’s economy—and most of our work—is aimed at producing what he calls “scalable efficiency.”  This economic model relentlessly drives the consolidation of companies while replacing custom tasks with standardized ones wherever possible for the sake of the bottom line.
 
Because machines can do nearly everything more efficiently than humans can, our concerns about being replaced by robots and the algorithms that guide them are entirely warranted. And it is not just lower-skilled jobs like trucking that will be eliminated en masse. Take a profession like radiology. Machines can already assess the data on x-rays more reliably than radiologists do. More tasks that are performed by professionals today will also be performed by machines tomorrow.
 
Hagel notes that uniquely human aptitudes like curiosity, creativity, imagination, and emotional intelligence are discouraged in a world of scalable efficiency but (of course) it is in this direction that humans will be most indispensable in the future of work. How do we build the jobs of the future around these aptitudes, and do we even want to?
 
There is a long-standing presumption that most workers don’t want to be curious, creative or imaginative problem-solvers on the job. We’ve presumed that most workers want nothing more than a highly predictable workday with a reliable paycheck at the end of it. But Hagel asks, is this really all we want, or have our educations conditioned us to fit (like replaceable cogs) into an economy that’s based on the scalable efficiency of its workforce? He argues that if you go to any playground and look at how pre-schoolers play, you will see native curiosity, imagination and inventiveness before they have been bred out by secondary, college and graduate school educations.
 
So how do companies reconnect us to these deeply human aptitudes that will be most valued in the future of work? Hagel correctly notes that business will never make the massive investment in workforce retraining that will be necessary to recover and re-ignite these problem-solving skills in every worker. Moreover, the drive for scalable efficiency and cost-cutting in most companies will overwhelm whatever initiatives do manage to make it into the re-training room. 
 
Hagel’s alternative roadmap is for companies that are committed to their human workforce to invest in what he calls “the scalable edges” of their business models. These are the discrete parts of any business that have “the potential to become the new core of the institution”—that area where a company is most likely to evolve successfully in the future. Targeted investments in a problem-solving human workforce at these “scalable edges” today will produce a problem-solving workforce that can grow to encompass the entire company tomorrow.

By focusing on worker retraining at a company’s most promising “edges,” Hagel strategically identifies a way to counter the “scalable efficiency” models that will continue to eliminate jobs while refusing to make the investment that’s required to retrain everyone. While millions of traditional jobs will still be lost during this transition, Hagel’s approach ensures an eventual future that is powered by human jobs that machines cannot do today and may never be able to do. For him, it’s the fear of machines that drives us to a new business model, one that re-engages in the workplace the humanity we lost in school.
 
I urge you to consider the flow of Hagel’s arguments for yourself. For more of his ideas, a prior newsletter discusses a Harvard Business Review article (which he co-wrote with John Seely Brown) about the benefits of learning that can “scale up.” A closely related post that examines Brown’s commencement address about navigating “the white-water world of work today” can be found here.
 
*My most important take-aways from Hagel’s talk: Find the most promising, scalable edges of the jobs I’m doing. Hone the creative, problem-solving skills that will help me the most in realizing the goals I have set for myself in those jobs. Maintain my continuing value in the workplace by nurturing the skills that machines can never replace.

2. AI and Us – September 2018

Columbia University economist Joseph Stiglitz begins his talk at London’s Royal Society with three propositions. The first is that artificial intelligence and machine learning are likely to change the labor market in an unprecedented way because of the sheer extent of their disruption. His second proposition is that economic markets do not self-correct in a way that either preserves employment or creates new jobs down the road. His third proposition—and perhaps the most important one—is that there is an inherent “dignity to work” that necessitates government policies that enable everyone who wants to work to have the opportunity to do so.
 
I agree with each of these propositions, particularly his last one. So if you asked me, as Stiglitz was asked by a member of the audience at the end of his talk, whether governments should provide their citizens with “a universal basic income” to offset job elimination, as many progressives are proposing, his answer (and mine) would be “No.” Instead, we’d argue that governments should be fostering the economic circumstances where everyone who wants to work has the opportunity to do so. It is this opportunity to be productive—and not a new government handout—that rises to the level of a basic human right.
 
Stiglitz argues that new artificial intelligence technologies, along with decades of hands-off government policies about regulating business (beginning with Reagan in the US and Thatcher in the UK), have been creating smaller “national pies” that are shared with fewer of their citizens. In a series of charts, he documents the rise of income inequality by showing how wages and economic productivity rose together in most Western economies until the 1980s and have diverged ever since. Labor’s share of the pie has consistently decreased in this timeframe, and new technologies like AI are likely to reduce it to even more worrisome levels.
 
Stiglitz’ proposed solutions include policy making that encourages full employment in addition to fending off inflation, reducing the monopoly power that many businesses enjoy because monopoly restricts the flow of labor, and enacting rules that strengthen workers’ collective bargaining power. 
 
Stiglitz is not a spellbinding speaker, but he is eminently qualified to speak about how the structure of the economy and the policies that maintain it affect the labor markets. You can follow his trains of thought right into the lively Q&A that follows his remarks via the link above. For my part, I’ve been having a continuing conversation about the monopoly power of tech companies like Amazon and the impact of unrestricted power on jobs in newsletter posts like this one from last April as well as on Twitter, if you are interested in diving further into the issue.
 
*My most important take-aways from Stiglitz’ remarks were as follows: since I care deeply about the dignity that work confers, I need (1) to be involved in the political process; (2) to identify and argue in favor of policies that support workers and, in particular, every worker’s opportunity to have a job if she wants one; and (3) to support politicians who advance these policies and oppose those who erroneously claim that when business profits, it follows that we all do.

3. The Making of a Modern Elder – October 2018
 
The pictures above suggest the run we’re all on towards the future of work. What these pictures don’t convey as accurately are the ages of the runners. This race includes everyone who either wants or needs to keep working into the future.
 
Chip Conley’s recent speech at Google headquarters is about how a rapidly aging demographic is disrupting the future workforce and how both businesses and younger workers stand to benefit from it. For the first time in American history, there are more people over age 65 than under age 15. With a markedly different perspective, Conley discusses several of the opportunities for companies when their employees work longer as well as how to improve the intergenerational dynamics when as many as five different generations are working together in the same workplace.
 
Many of Conley’s insights come from his mentoring of Brian Chesky, the co-founder and CEO of Airbnb, and how he brought what he came to call “elder wisdom” not only to Chesky but also to Airbnb’s youthful workforce. Conley begins his talk by referencing our long-standing belief that work teams with gender and race diversity tend to be more successful than less diverse teams, which has led companies to support them. However, Conley notes that only 8% of these same companies actively support age diversity.
 
To enlist that support, he argues that age diversity adds tremendous value at a time of innovation and rapid change because older workers have both perspective and organizational abilities that younger workers lack. Moreover, these older workers comprise an increasingly numerous group, with “older” beginning anywhere from age 35 at some Silicon Valley companies to age 75 and beyond in less entrepreneurial industries. What “value” do these older workers provide, and how do you get employers to recognize it?
 
Part of the answer comes from a changing career path that no longer begins with learning, peaks with earning, and concludes with retirement. For nearly all workers, the ability to evolve, learn, collaborate and counsel others plays a role that is continuously renegotiated throughout a career. For example, as workers age, they may bring new kinds of value by sharing their institutional knowledge with the group, by understanding less of the technical information but more about how to help the group become more productive, and by asking “why” or “what if” questions instead of “how” or simply “what do we do now” in group discussions. Among other things, that is because older workers spend the first half of their careers accumulating knowledge, skills and experience and the second half editing what they have accumulated (sorting what is more and less important) given the perspective they have gained.
 
When you listen to Conley’s talk, make sure that you stay tuned until the Q&A, which includes some of his strongest insights.
 
*My most important take-aways from his remarks all involve how older workers can continuously establish their value in the workplace. To do so, older workers must (1) right-size their egos about what they don’t know while maintaining confidence in the wisdom they have to offer; (2) commit to continuous learning instead of being content with what they already know; (3) become more interested and curious instead of assuming that either their age or experience alone will make them interesting; and (4) demonstrate their curiosity publicly, listen carefully to where those around them are coming from, and become generous at sharing their wisdom with co-workers privately. When they do, companies along with their younger workers will come to value their trusted elders.

* * *

 This has been a wide-ranging discussion. I hope it has given you some framing devices to think about your jobs as an increasingly disruptive future rushes in your direction. We are all running with the wind in our faces while trying to get the lay of the land below our feet in this brave new world of work.

Note: this post is adapted from my January 13, 2019 newsletter.

Filed Under: *All Posts, Continuous Learning, Entrepreneurship Tagged With: aging workforce, AI, artificial intelligence, Chip Conley, dignity of work, elder wisdom, future of work, John Hagel, Joseph Stiglitz, labor markets, machine learning, monopoly power, value of older workers, work, workforce disruption, workforce retraining

These Tech Platforms Threaten Our Freedom

December 9, 2018 By David Griesing

We’re being led by the nose about what to think, buy, do next, or remember about what we’ve already seen or done.  Oh, and how we’re supposed to be happy, what we like and don’t like, what’s wrong with our generation, why we work. We’re being led to conclusions about a thousand different things and don’t even know it.

The image that captures the erosion of our free thinking by influence peddlers is the frog in the saucepan. The heat is on, the water’s getting warmer, and by the time it’s boiling it’s too late for her to climb back out. Boiled frog, preceded by pleasantly warm and oblivious frog, captures the critical path pretty well. But instead of slow cooking, it’s shorter and shorter attention spans, the slow retreat of perspective and critical thought, and the final loss of freedom.

We’ve been letting the control booths behind the technology reduce the free exercise of our lives and work and we’re barely aware of it. The problem, of course, is that the grounding for good work and a good life is having the autonomy to decide what is good for us.

This kind of tech-enabled domination is hardly a new concern, but we’re wrong in thinking that it remains in the realm of science fiction.

An authority’s struggle to control our feelings, thoughts and decisions was the theme of George Orwell’s 1984, which was written some 35 years before the fateful year that he envisioned. “Power,” said Orwell, “is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.” Power persuades you to buy something when you don’t want or need it. It convinces you about this candidate’s, that party’s or some country’s evil motivations. It tricks you into accepting someone else’s motivations as your own. In 1984, free wills were weakened and constrained until they were no longer free. “If you want a picture of the future,” Orwell wrote, “imagine a boot stamping on a human face—for ever.”

Maybe this reflection of the present seems too extreme to you.

After all, Orwell’s jackbooted fascists and communists were defeated by our Enlightenment values. Didn’t the first President Bush, whom we buried this week, preside over some of it? The authoritarians were down and seemed out in the last decade of the last century—Freedom Finally Won!—which just happened to be the very same span of years when new technologies and communication platforms began to enable the next generation of dominators.

(There is no true victory over one man’s will to deprive another of his freedom, only a truce until the next assault begins.)

Twenty years later, in his book Who Owns the Future? (2013), Jaron Lanier argued that a new battle for freedom must be fought against powerful corporations fueled by advertisers and other “influencers” who are obsessed with directing our thoughts today.

In exchange for “free” information from Google, “free” networking from Facebook, and “free” deliveries from Amazon, we open our minds to what Lanier calls “siren servers,” the cloud computing networks that drive much of the internet’s traffic. Machine-driven algorithms collect data about who we are to convince us to buy products, judge candidates for public office, or determine how the majority in a country like Myanmar should deal with a minority like the Rohingya.

Companies, governments, and groups with good and bad motivations use our data to influence our future buying and other decisions on technology platforms that didn’t even exist when the first George Bush was president but now seem indispensable to nearly all of our commerce and communication. Says Lanier:

When you are wearing sensors on your body all the time, such as the GPS and camera on your smartphone and constantly piping data to a megacomputer owned by a corporation that is paid by “advertisers” to subtly manipulate you…you are gradually becoming less free.

And all the while we were blissfully unaware that this was happening because the bath was so convenient and the water inside it seemed so warm. Franklin Foer, who addresses tech issues in The Atlantic and wrote 2017’s World Without Mind: The Existential Threat of Big Tech, talks about this calculated seduction in an interview he gave this week:

Facebook and Google [and Amazon] are constantly organizing things in ways in which we’re not really cognizant, and we’re not even taught to be cognizant, and most people aren’t… Our data is this cartography of the inside of our psyche. They know our weaknesses, and they know the things that give us pleasure and the things that cause us anxiety and anger. They use that information in order to keep us addicted. That makes [these] companies the enemies of independent thought.

The poor frog never understood that accepting all these “free” invitations to the saucepan meant that her freedom to climb back out was gradually being taken away from her.

Of course, we know that nothing is truly free of charge, with no strings attached. But appreciating the danger in these data-driven exchanges—and being alert to the persuasive tools that are being arrayed against us—are not the only wake-up calls that seem necessary today. We also can (and should) confront two other tendencies that undermine our autonomy while we’re bombarded with too much information from too many different directions. They are our confirmation bias and what’s been called our illusion of explanatory depth.

Confirmation bias leads us to stop gathering information when the evidence we’ve gathered so far confirms the views (or biases) that we would like to be true. In other words, we ignore or reject new information, maintaining an echo chamber of sorts around what we’d prefer to believe. This kind of mindset is the opposite of self-confidence, because all we’re truly interested in doing outside ourselves is searching for evidence to shore up our egos.

Of course, the thought controllers know about our propensity for confirmation bias and seek to exploit it, particularly when we’re overwhelmed by too many opposing facts, have too little time to process the information, and long for simple black and white truths. Manipulators and other influencers have also learned from social science that our reduced attention spans are easily tricked by the illusion of explanatory depth, or our belief that we understand things far better than we actually do.

The illusion that we know more than we actually do extends to anything that we can misunderstand. It comes about because we consume knowledge widely but not deeply, and since that is rarely enough for understanding, our egos claim the rest. We all know that ignorant people are the most over-confident in their knowledge, but we delude ourselves about the majesty of our own ignorance just as easily. I regularly ask people questions about all sorts of things that they might know about, and as this year draws to a close I can count on one hand the number who have answered “I don’t know” over the past twelve months. Most have no idea how little understanding they bring to whatever they’re talking about. It’s simply more comforting to pretend that we have all of this confusing information fully processed and under control.

Luckily, for confirmation bias or the illusion of explanatory depth, the cure is as simple as finding a skeptic and putting him on the other side of the conversation so he will hear us out and respond to or challenge whatever it is that we’re saying. When our egos are strong enough for that kind of exchange, we have an opportunity to explain our understanding of the subject at hand. If, as often happens, the effort of explaining reveals how little we actually know, we are almost forced to become more modest about our knowledge and less confirming of the biases that have taken hold of us.  A true conversation like this can migrate from a polarizing battle of certainties into an opportunity to discover what we might learn from one another.

The more that we admit to ourselves and to others what we don’t know, the more likely we are to want to fill in the blanks. Instead of false certainties and bravado, curiosity takes over—and it feels liberating precisely because becoming well-rounded in our understanding is a well-spring of autonomy.

When we open ourselves like this instead of remaining closed, we’re less receptive to, and far better able to resist, the “siren servers” that would manipulate our thoughts and emotions by playing to our biases and illusions. When we engage in conversation, we also realize that devices like our cell phones and platforms like our social networks are, in Foer’s words, actually “enemies of contemplation” which are “preventing us from thinking.”

Lanier describes the shift from this shallow tech-driven stimulus/response to a deeper assertion of personal freedom in a profile that was written about him in the New Yorker a few years back.  Before he started speaking at a South-by-Southwest Interactive conference, Lanier asked his audience not to blog, text or tweet while he spoke. He later wrote that his message to the crowd had been:

If you listen first, and write later, then whatever you write will have had time to filter through your brain, and you’ll be in what you say. This is what makes you exist. If you are only a reflector of information, are you really there?

Lanier makes two essential points about autonomy in this remark. Instead of processing on the fly, where the dangers of bias and illusions of understanding are rampant, allow what is happening “to filter through your brain,” because when it does, there is a far better chance that whoever you really are, whatever you truly understand, will be “in” what you ultimately have to say.

His other point is about what you risk becoming if you fail to claim a space for your freedom to assert itself in your lives and work. When you’re reduced to “a reflector of information,” are you there at all anymore or merely reflecting the reality that somebody else wants you to have?

We all have a better chance of being contented and sustained in our lives and work when we’re expressing our freedom, but it’s gotten a lot more difficult to exercise it given the dominant platforms that we’re relying upon for our information and communications today.

This post was adapted from my December 9, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning, Work & Life Rewards Tagged With: Amazon, autonomy, communication, confirmation bias, facebook, Franklin Foer, free thinking, freedom, Google, illusion of explanatory depth, information, information overload, Jaron Lanier, tech, tech platforms, technology

Choosing a Future For Self-Driving Cars

November 5, 2018 By David Griesing

It looks pretty fine, doesn’t it?

You’ll no longer need a car of your own because this cozy little pod will come whenever you need it. All you’ll have to do is to buy a 30- or 60-ride plan and press “Come on over and get me” on your phone.

You can’t believe how happy you’ll be to have those buying or leasing, gas, insurance and repair bills behind you, or to no longer have a monstrosity taking up space and waiting for those limited times when you’ll actually be driving it. Now you’ll be able to get where you need to go without any of the “sunk costs” because it’ll be “pay as you go.”

And go you will. This little pod will transport you to work or the store, to pick up your daughter or your dog while you kick back in comfort. It will be an always on-call servant that lets you stay home all weekend, while it delivers your groceries and take-out orders, or brings you a library book or your lawn mower from the repair shop. You’ll be the equivalent of an Amazon Prime customer who can have nearly every material need met wherever you are—but instead of “same day service,” you might have products and services at your fingertips in minutes because one of these little pods will always be hovering around to bring them to you. Talk about immediate gratification!

They will also drive you to work and be there whenever you need a ride home. In a fantasy commute, you’ll have time to unwind in comfort with your favorite music or by taking a nap. Having one of these pods whenever you want one will enable you to work from different locations, to have co-workers join you when you’re working from home, and to work while traveling instead of paying attention to the road. You can’t believe how much your workday will change.

Doesn’t all this money saving, comfort, convenience and freedom sound too good to be true? Well, let’s step back for a minute.

We thought Facebook’s free social network and Amazon’s cheap and convenient take on shopping were unbelievably wonderful too—and many of us still do. So wonderful that we built them into the fabric of our lives in no time. In fact, we quickly claim new comforts, conveniences and cost savings as if we’d been entitled to them all along. It’s only as the dazzle of these new technology platforms begins to fade into “taking them for granted” that we also begin to wonder (in whiffs of nostalgia and regret? in concerns for their unintended consequences?) about what we might have given up by accepting them in the first place.

Could it be:

-the loss of chunks of our privacy to advertisers and data-brokers who are getting better all the time at manipulating our behavior as consumers and citizens;

-the gutting of our Main Streets of brick & mortar retail, like book and hardware stores, and the attendant loss of centers-of-gravity for social interaction and commerce within communities; or

-the elimination of entry-level and lower-skilled jobs and of entire job-markets to automation and consolidation, the jobs you had as a teenager or might do again as you’re winding down, with no comparable work opportunities to replace them?

Were the efficiency, comfort and convenience of these platforms as “cost-free” as they were cracked up to be? Is Facebook’s and Amazon’s damage already done and largely beyond repair? Have tech companies like them been defining our future or have we?

Many of us already depend on ride-sharing companies like Uber and Lyft. They are the harbingers of a self-driving vehicle industry that promises to disrupt our lives and work in at least the following ways. They will largely eliminate the need to own a car. They will transform our transportation systems, impacting public transit, highways and bridges. They will streamline how goods and services are moved in terms of logistics and delivery. And in the process, they will change how the entire “built environment” of urban centers, suburbs, and outer ring communities will look and function, including where we’ll live and how we’ll work. Because we are in many ways “a car-driven culture,” self-driving vehicles will impact almost everything that we currently experience on a daily basis.

That’s why it is worth all of our thinking about this future before it arrives.

Our Future Highways

One way to help determine what the future should look like and how it should operate is to ask people—lots of them—what they’d like to see and what they’re concerned about. In fact, it’s an essential way to get public buy-in to new technology before some tech company’s idea of that future is looking us in the eye, seducing us with its charms, and hoping we won’t notice its uglier parts.

When it comes to self-driving cars, one group of researchers is seeking informed buy-in by using input from the public to influence the drafting of the decision-making algorithms behind these vehicles. In the so-called Moral Machine Experiment, these researchers asked people around the world for their preferences regarding the moral choices that autonomous cars will be called upon to make so that this new technology can match human values as well as its developers’ profit motives. In an article that just appeared in the journal Nature, the following remarks describe their ambitious objective.

With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge we deployed the Moral Machine, an on-line experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions [involving individual moral preferences] in ten languages from millions of people in 233 different countries and territories. Here we describe the results of this experiment…

Never in the history of humanity have we allowed a machine to autonomously decide who shall live and who shall die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theater of military operations; it will happen in the most mundane aspect of our lives, everyday transportation.  Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers who will regulate them.

For a sense of the moral guidance the Experiment was seeking, think of an autonomous car that is about to crash but cannot save everyone in its path. Which pre-programmed trajectory should it choose? One which injures (or kills) two elderly people while sparing a child? One which spares a pedestrian who is waiting to cross safely while injuring (or killing) a jaywalker? You see the kinds of moral quandaries we will be asking these cars to make. If peoples’ moral preferences can be taken into account beforehand, the public might be able to recognize “the human face” in a new technology from the beginning instead of having to attempt damage control once that technology is in use.

Strong Preferences, Weaker Preferences

To collect its data, the Moral Machine Experiment asked millions of global volunteers to consider accident scenarios that involved 9 different moral preferences: sparing humans (versus pets); staying on course (versus swerving); sparing passengers (versus pedestrians); sparing more lives (versus fewer lives); sparing men (versus women); sparing the young (versus the old); sparing pedestrians who cross legally (versus jaywalkers); sparing the fit (versus the less fit); and sparing those with higher social status (versus lower social status).

The challenges behind the Experiment were daunting and much of the article is about how the researchers conducted their statistical analysis. Notwithstanding these complexities, three “strong” moral preferences emerged globally, while certain “weaker” but statistically relevant preferences suggest the need for modifications in algorithmic programming among the three different “country clusters” that the Experiment identified.

The vast majority of participants in the Experiment expressed a “strong” moral preference for saving a life instead of refusing to swerve, saving as many lives as possible if an accident is imminent, and saving young lives wherever possible.

Among “weaker” preferences, there were variations among countries that clustered in the Northern (Europe and North America), Eastern (most of Asia) and Southern (including Latin America) Hemispheres. For example, the preference for sparing young (as opposed to old) lives is much less pronounced in countries in the Eastern cluster and much higher among the Southern cluster. Countries that are poorer and have weaker enforcement institutions are more tolerant of people who cross the street illegally than richer, more law-abiding countries. Differences between hemispheres might result in adjustments to the decision-making algorithms of self-driving cars that are operated there.
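
To make the Experiment’s aggregation concrete, consider how its raw material (millions of binary choices about who to spare) could be tallied into a preference strength per moral attribute within each country cluster. The article’s actual statistical analysis is far more involved, so treat this as a minimal sketch: the records, cluster names, attribute labels and the preference_strength helper below are my own inventions for illustration, not the researchers’ schema.

```python
from collections import defaultdict

# One record per dilemma decision: the respondent's country cluster,
# the moral attribute the scenario tested, and whether they chose the
# outcome sparing the first-named group. All values are invented.
decisions = [
    ("Northern", "young_vs_old", True),
    ("Northern", "young_vs_old", True),
    ("Eastern",  "young_vs_old", False),
    ("Southern", "young_vs_old", True),
    ("Southern", "legal_vs_jaywalker", True),
    ("Eastern",  "legal_vs_jaywalker", False),
]

def preference_strength(decisions):
    """Share of choices sparing the first-named group, per cluster.

    0.5 means indifference; values near 1.0 mean a strong preference
    (e.g., for sparing the young over the old).
    """
    counts = defaultdict(lambda: [0, 0])  # (cluster, attr) -> [spared, total]
    for cluster, attribute, chose_spared in decisions:
        counts[(cluster, attribute)][0] += int(chose_spared)
        counts[(cluster, attribute)][1] += 1
    return {key: spared / total for key, (spared, total) in counts.items()}

for (cluster, attribute), share in sorted(preference_strength(decisions).items()):
    print(f"{cluster:9s} {attribute:20s} {share:.2f}")
```

Even a toy tally like this makes the policy point visible: if the “young versus old” share differs materially between clusters, carmakers and regulators have a concrete, data-backed reason to localize a vehicle’s decision-making rules.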

When companies have data about what people view as “good” or “bad”, “better” or “worse” while a new technology is being developed, these preferences can improve the likelihood that moral harms will be identified and minimized beforehand.

Gridlock

Another way to help determine what the future should look like and how new technologies should operate is to listen to what today’s Cassandras are saying. Following their commentary and grappling with their concerns removes some of the dazzle in our hopes and grounds them more firmly in reality early on.

It lets us consider how, say, an autonomous car will fit into the ways that we live, work and interact with one another today—what we will lose as well as what we are likely to gain. For example, what industries will they change? How will our cities be different than they are now? Will a proliferation of these vehicles improve the quality of our interactions with one another or simply reinforce how isolated many of us are already in a car-dominated culture?

The Atlantic magazine hosts a regular podcast called “Crazy Genius” that asks “big questions” and draws “provocative conclusions about technology and culture.” (Many thanks to reader Matt K for telling me about it!) These podcasts are free and can be easily accessed through services like iTunes and Spotify.

A Crazy Genius episode from September called “How Self-Driving Cars Could Ruin the American City” included interviews with two experts who are looking into the future of autonomous vehicles and are alarmed for reasons beyond these vehicles’ decision-making abilities. One is Robin Chase, the co-founder of Zipcar. The “hellscape” she forecasts involves everyone using self-driving cars as they become cheaper than current alternatives to do our errands, provide 10-minute deliveries and produce even more sedentary lifestyles than we have already, while clogging our roadways with traffic.

Without smart urban planning, the result will be infernal congestion, choking every city and requiring local governments to lay ever-more pavement down to service American automania.

Eric Avila is an historian at UCLA who sees self-driving cars in some of the same ways that he views the introduction of the interstate highway system in the 1950s. While these new highways provided autonomous access to parts of America that had not been accessible before, there was also a dark side. 48,000 miles of new highway stimulated interstate trade and expanded development but they also gutted urban neighborhoods, allowing the richest to take their tax revenues with them as they fled to the suburbs. “Mass transit systems [and] streetcar systems were systematically dismantled. There was national protest in diverse urban neighborhoods throughout the entire nation,” Avila recalls, and a similar urban upheaval may follow the explosion of autonomous vehicles.

Like highways, self-driving cars are not only cars; they are also infrastructure. According to Avila, if we want to avoid past mistakes, all of the stakeholders in this new technology will need to think about how they can make downtown areas more livable for humans instead of simply more efficient for these new machines. To reduce congestion, this may involve taxing autonomous vehicle use during certain times of day, limiting the number of vehicles in heavily traveled areas, regulating companies that operate fleets of self-driving cars, and capping private car ownership. Otherwise, the proliferation of cars and traffic would make most of our cities unlivable.

Once concerns like Chase’s and Avila’s are publicized, data about the public’s preferences (what’s better, what’s worse?) in these regards can be gathered just as they were in the Moral Machine Experiment. Earlier in my career, I ran a civic organization that attempted to improve the quality of Philadelphia city government by polling citizens anonymously about their priorities and concerns. While the organization did not survive the election of a reform-minded administration, information about the public’s preferences is always available when we champion the value of collecting it. All that’s necessary is sharing the potential problems and concerns that have been raised and asking people in a reliable and transparent manner how they’d prefer to address them.

In order to avoid the harms from technology platforms that we are facing today, the tech companies that are bringing us their marvels need to know far more about their intended users’ moral preferences than they seem interested in learning about today. With the right tools to be heard at our fingertips, we can all be involved in defining our futures.

This post is adapted from my November 4, 2018 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: autonomous vehicles, Crazy Genius podcast, ethics, future, future shock, machine ethics, Moral Machine Experiment, moral preferences, priorities, smart cars, tech, technology, values

Looking Out For the Human Side of Technology

October 28, 2018 By David Griesing

Maintaining human priorities in the face of new technologies always feels like “a rearguard action.” You struggle to prevent something bad from happening even when it seems like it may be too late.

The promise of the next tool or system intoxicates us. Smart phones, social networks, gene splicing.  It’s the super-computer at our fingertips, the comfort of a boundless circle of friends, the ability to process massive amounts of data quickly or to short-cut labor intensive tasks, the opportunity to correct genetic mutations and cure disease. We’ve already accepted these promises before we pause to consider their costs—so it always feels like we’re catching up and may not have done so in time.

When you’re dazzled by possibility and the sun is in your eyes, who’s thinking “maybe I should build a fence?”

The future that’s been promised by tech giants like Facebook is not “the win-win” that we thought it was. Their primary objectives are to serve their financial interests—those of their founder-owners and other shareholders—by offering efficiency benefits like convenience and low cost to the rest of us. But as we’ve belatedly learned, they’ve taken no responsibility for the harms they’ve also caused along the way, including exploitation of our personal information, the proliferation of fake news and jeopardy to democratic processes, as I argued here last week.

Technologies that are not associated with particular companies also run with their own promise until someone gets around to checking them–a technology like artificial intelligence or AI for example. From an ethical perspective, we are usually playing catch up ball with them too. If there’s a buck to be made or a world to transform, the discipline to ask “but should we?” always seems like getting in the way of progress.

Because our lives and work are increasingly impacted, the stories this week throw additional light on the technology juggernaut that threatens to overwhelm us and our “rearguard” attempts to tame it with our human concerns.

To gain a fuller appreciation of the problem regarding Facebook, a two-part Frontline documentary will be broadcast this week that is devoted to what one reviewer calls “the amorality” of the company’s relentless focus on adding users and compounding ad revenues while claiming to create the on-line “community” that all of us should want in the future. (The show airs tomorrow, October 29 at 9 p.m. and on Tuesday, October 30 at 10 p.m. EST on PBS.)

Frontline’s reporting covers Russian interference in past and current election cycles, Facebook’s role in whipping Myanmar’s Buddhists into a frenzy over its Rohingya minority, and how strongmen like Rodrigo Duterte in the Philippines have been manipulating the site to achieve their political objectives. Facebook CEO Mark Zuckerberg’s limitations as a leader are explored from a number of directions, but none as compelling as his off-screen impact on the five Facebook executives who were “given” to James Jacoby (the documentary’s director, writer and producer) to answer his questions. For the reviewer:

That they come off like deer in Mr. Jacoby’s headlights is revealing in itself. Their answers are mealy-mouthed at best, and the defensive posture they assume, and their evident fear, indicates a company unable to cope with, or confront, the corruption that has accompanied its absolute power in the social media marketplace.

You can judge for yourself. You can also ponder whether this is like holding a gun manufacturer liable when one of its guns is used to kill somebody.  I’ll be watching “The Facebook Dilemma” for what it has to say about a technology whose benefits have obscured its harms in the public mind for longer than it probably should have. But then I remember that Facebook barely existed ten years ago. The most important lesson from these Frontline episodes may be how quickly we need to get the stars out of our eyes after meeting these powerful new technologies if we are to have any hope of avoiding their most significant fallout.

Proceed With Caution

I was also struck this week by Apple CEO Tim Cook’s explosive speech at a privacy conference organized by the European Union.

Not only was Cook bolstering his own company’s reputation for protecting Apple users’ personal information, he was also taking aim at competitors like Google and Facebook for implementing a far more harmful business plan, namely, selling user information to advertisers, reaping billions in ad dollar revenues in exchange, and claiming the bargain is providing their search engine or social network to users for “free.” This is some of what Cook had to say to European regulators this week:

Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.

These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.

Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.

“Weaponized” technology delivered with “military efficiency.” “A data-industrial complex.” One of the benefits of competition is that rivals call you out, while directing unwanted attention away from themselves. One of my problems with tech giant Amazon, for example, is that it lacks a neck-and-neck rival to police its business practices, so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.

If data privacy were as paramount to Apple as it sounded this week, it would be impossible to reconcile Apple’s receiving more than $5 billion a year from Google to make Google the default search engine on all Apple devices. However complicit Apple may be in today’s tech bargains, it pushed its rivals pretty hard this week to modify their business models and become less cynical about their use of our personal data as the focus of regulatory oversight moves from Europe to the U.S.

Keeping Humans in the Tech Equation

Technologies that aren’t proprietary to a particular company but are instead used across industries require getting over additional hurdles to ensure that they are meeting human needs and avoiding technology-specific harms for users and the rest of us. This week, I was reading up on a positive development regarding artificial intelligence (AI) that only came about because serious concerns were raised about the transparency of AI’s inner workings.

AI’s ability to solve problems (from processing big data sets to automating steps in a manufacturing process or tailoring a social program for a particular market) is only as good as the algorithms it uses. Given concern about personal identity markers such as race, gender and sexual preference, you may already know that an early criticism of artificial intelligence was that the author of an algorithm could unwittingly build her own biases into it, leading to discriminatory and other anti-social results. As a result, various countermeasures are being undertaken to minimize grounding these kinds of biases in AI code. With that in mind, I read a story this week about another systemic issue with AI: its “explainability.”

It’s the so-called “black box” problem. If users of systems that depend on AI don’t know how they work, they won’t trust them. Unfortunately, one of the prime advantages of AI is that it solves problems that are not easily understood by users, which presents the quandary that AI-based systems might need to be “dumbed-down” so that the humans using them can understand and then trust them. Of course, no one is happy with that result.

A recent article in Forbes describes the trust problem that users of machine-learning systems experience (“interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control”) along with some of the experts who have been feeling that anxiety (cancer specialists who agreed with a “Watson for Oncology” system when it confirmed their judgments but thought it was wrong when it failed to do so because they couldn’t understand how it worked).

In a positive development, a U.S. Department of Defense agency called DARPA (or Defense Advanced Research Projects Agency) is grappling with the explainability problem. Says David Gunning, a DARPA program manager:

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.

In other words, these systems will get better at explaining themselves to their users, thereby overcoming at least some of the trust issue.

DARPA is investing $2 billion in what it calls “third-wave AI systems…where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” according to Gunning. At least with the future of warfare at stake, a problem like “trust” in the human interface appears to have stimulated a solution. At some point, all machine-learning systems will likely be explaining themselves to the humans who are trying to keep up with them.
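
To give a flavor of what a system “explaining its rationale” can look like in code, here is one widely used illustration of the general idea (not DARPA’s program, and not how Watson works): fit a small, human-readable “surrogate” decision tree to mimic an opaque model’s outputs, then read off the surrogate’s rules. A minimal sketch assuming Python with scikit-learn and its bundled breast-cancer dataset; every modeling choice here is mine.

```python
# Post-hoc explainability via a surrogate model: approximate a black-box
# classifier with a shallow decision tree whose rules a human can read.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The opaque model whose inner workings users cannot inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train a shallow tree to reproduce the black box's *predictions*.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable model agrees with the opaque one.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.1%}")

# Human-readable rules approximating the black box's rationale.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The printed rules are only an approximation of the black box (the fidelity number says how close), which is exactly the gap that Gunning’s “third-wave” systems, with explanation built in rather than bolted on, are meant to close.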

Moving beyond AI, I’d argue that there is often as much “at stake” as successfully waging war when a specific technology is turned into a consumer product that we use in our workplaces and homes.

While there is heightened awareness today about the problems that Facebook poses, few were raising these concerns even a year ago despite their toxic effects. With other consumer-oriented technologies, there are a range of potential harms where little public dissent is being voiced despite serious warnings from within and around the tech industry. For example:

– how much is our time spent on social networks—in particular, how these networks reinforce or discourage certain of our behaviors—literally changing who we are?  
 
– since our kids may be spending more time with their smart phones than with their peers or family members, how is their personal development impacted, and what can we do to put this rabbit even partially back in the hat now that smart phone use seems to be a part of every child’s rite of passage into adulthood? 
 
– will privacy and surveillance concerns become more prevalent when we’re even more surrounded than we are now by “the internet of things” and as our cars continue to morph into monitoring devices—or will there be more of an outcry for reasonable safeguards beforehand? 
 
– what are employers learning about us from our use of technology (theirs as well as ours) in the workplace and how are they using this information?

The technologies that we use demand that we understand their harms as well as their benefits. I’d argue that we need to become more proactive about voicing our concerns and about using the tools at our disposal (including the political process) to insist that company profit and consumer convenience are not the only measures of a technology’s impact.

Since the invention of the printing press half a millennium ago, it has always been hard but necessary to catch up with technology and to tame its excesses as quickly as we can.

This post was adapted from my October 28, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning Tagged With: Amazon, Apple, ethics, explainability, facebook, Google, practical ethics, privacy, social network harms, tech, technology, technology safeguards, the data industrial complex, workplace ethics

How Stepping Back and Really Noticing Can Change Everything

October 14, 2018 By David Griesing

Pieter Bruegel’s The Battle Between Carnival and Lent

I’m frequently reminded about how oblivious I am, but I had a particularly strong reminder recently. I was in a room with around 30 other people watching a documentary that we’d be discussing when it was over. Because we’d all have a chance to share our strongest impressions and it was a group I cared about, I paid particularly close attention. I even jotted down notes from time to time as something hit me. After the highly emotional end, I led off with my four strongest reactions and then listened for the next half hour while the others described what excited or troubled them. Most startling was how many of their observations I’d missed altogether.

Some of the differences were understandable; they’re why single “eyewitness accounts” are often unreliable and why we want 8 or 12 people on a jury sharing their observations during deliberations. No one catches everything, even when you’re watching closely and trying to be insightful later on. Still, I thought I was better at this.

Missing key details and reaching the wrong (or woefully incomplete) conclusions affect much of our work and many of our relationships outside of it. Emotion blinds us. Fear inhibits us from looking long and hard enough. Bias makes us see what we want to see instead of what’s truly there. Getting better at noticing involves acknowledging each of these tendencies and making the effort to override them. In other words, it involves putting as little interference as possible between us and what’s staring us in the face.

As luck would have it, a couple of interactive challenges involving our perceptive abilities crossed my transom this week. Given how much I missed in the documentary, I decided to play with both of them to see if looking without prior agendas or other distractions actually improved my ability to notice what’s in front of me. It was also a nice way to take a break from our 24-7 free-for-all in politics. As I sat down to write to you, I thought you might enjoy a brief escape into “how much you’re noticing” too.

The Pieter Bruegel painting above–called “The Battle Between Carnival and Lent”–is currently part of the largest-ever exhibition of the artist’s work at Vienna’s Kunsthistorisches Museum. Bruegel is a giant among Northern Renaissance painters, but most of his canvases are in Europe, so few of us have actually seen one; mostly we’ve encountered them in books, where it’s all but impossible to see what’s actually going on in them. As it turns out, we’ve been missing quite a lot.

Conveniently, the current survey of the artist’s work includes a website that’s devoted to “taking a closer look,” including how Bruegel viewed one of the great moral divides of his time: between the anything-goes spirit of Carnival (the traditional festival for ending the winter and welcoming the spring) and the tie-everything-down season of Lent (the interval of Christian fasting and penance before Good Friday and Easter). “The Battle Between Carnival and Lent” is a feast for noticing, and we’ll savor some of the highlights on its menu below.

First though, before this week I’d never heard about people who are known as “super recognizers.” They’re a very small group of men and women who can see a face (or the photo of one) and, even years later, pick that face out of a crowd with startling speed and accuracy. It’s not extraordinary memory but an entirely different way of reading and later recognizing a stranger’s face.

I heard one of these super recognizers being interviewed this week about his time tracking down suspects and missing persons for Scotland Yard. His pride at bringing a remarkable skill to a valuable use was palpable–the pure joy of finding needles in a succession of haystacks. His interviewer also talked about a link to an on-line exercise for listeners to discover whether they too might be super recognizers. In other words, you can find out how good you are “with faces” and how well you stack up with your peers at recognizing them later on by testing your noticing skills here.  Please let me know whether I’ve helped you to find a new and, from all indications, highly rewarding career. (The test’s administrators will be following up with you if you make the grade.)

Now back to Bruegel.

You can locate this central scene in "The Battle Between Carnival and Lent" in the lower middle of the painting. Zooming in on it also reveals Bruegel's greatest innovation as a painter: he gives us a bird's-eye view of the full pageant of life that embraces his theme. It's not the entire picture of "what it was like" in a Flemish town 500 years ago, but viewers had never been able to get this close to "that much of it" before.

It’s also a canvas populated by peasants and merchants as opposed to saints and nobles. They are alone or in small groups, engaged in their own distinct activities while seemingly ignoring everyone else. In the profusion of life, it’s as if we dropped into the center of any city during lunch hour to eavesdrop.

The painting’s details show a figure representing Carnival on the left. He’s fat, riding a beer barrel and wearing a meat pie as a headdress. Clearly a butcher—from the profession that enabled much of the festival’s feasting—he holds a long spit with a roasted pig as his weapon for the battle to come. Lent, on the other hand, is a grim and gaunt male figure dressed like a nun, sitting on a cart drawn by a monk and real nun. The wagon holds traditional Lenten foods like pretzels, waffles and mussels, and Lent’s weapon of choice is an oven paddle holding a couple of fish, an apparent allusion to the parable of Jesus multiplying the loaves and the fishes for a hungry crowd. On one level then, the fight is over what we should eat at this time of year.

As the eye wanders beyond this comic joust, Carnival's vicinity includes a tavern filled with revelers, onlookers watching a popular farce called "The Dirty Bride" (surely worth a closer look!) and a procession of lepers led by a bagpiper. Lent's immediate orbit, meanwhile, shows townsfolk drawing water from the well, giving alms to the poor and going to church (their airs of generosity equally worthy of closer attention).

Bruegel painted while the Reformation's battle for souls was ongoing, in a society not unlike our divided one today. But instead of taking sides, the painting mocks hypocrisy, greed and gluttony wherever he found them, making this and others of his paintings among the first images of social protest since Romans scrawled graffiti on public walls 1200 years before. While earlier paintings by other artists carefully disguised any humor, Bruegel wants you to laugh with him at this spectacle of human folly.

It’s been argued that Bruegel also brings a more serious purpose to his light heartedness, criticizing the common folk by personifying them as a married couple guided by a fool with a burning torch—an image that can be found in almost in the exact center of the painting. The way they are being led suggests that they follow their distractions and baser instincts instead of reason and good judgment. Reinforcing the message is a rutting pig immediately below them (you can find more of him later), symbolizing the destruction that oblivious distraction can leave in its wake.

Everywhere else, Bruegel invites his viewers to draw their own conclusions. You can follow this link and notice for yourself the remarkable details of this painting, along with others by the artist. Navigate the way you would on a Google Map, clicking the magnifying glass (+) or (-) to zoom in and out and dragging your cursor to move around the canvas. Be sure to let me know whether you happen upon any of the following during your exploration (the circle dance, the strangely clad gamblers with their edible game board, the man emptying a bucket on the head of a drunk) and whether you think Carnival or Lent seems to have won the battle.

Before wishing you a good week, I have a final recommendation that brings what we notice (say, in a work of art) back to what we notice, or fail to notice, about one another every day.

The movie Museum Hours is about the relationship that develops between an older man and woman shortly after they meet. Johann used to be a road manager for a hard-rock band but is now a security guard at the same museum in Vienna that houses the Bruegel paintings. Anne has traveled from Canada to visit a cousin who's been hospitalized and meets Johann as she makes her way around a strange city. During her visit, he becomes her interpreter, her advocate for her cousin's medical care and, eventually, her tour guide. But just as he finds "the spectacle of spectatorship" at the museum "endlessly interesting" as he takes it in every day, they both find the observations they make about one another in the city's coffee shops and bistros surprising and comforting.

Museum Hours is a movie about the rich details that are often overlooked in our exchanges with one another, details that a super-observer like Bruegel brings to his examination of everyday life. One of the film's many reveals takes place in a scene between a museum tour guide (who is full of her own insights) and a group of visitors with their unvarnished interpretations, standing in front of "The Battle Between Carnival and Lent" and other Bruegel paintings. You can view that film clip here, and ask yourself whether the guide is helping the visitors to see what's in front of them or diverting their attention away from it.

As it shuttles between the two adults' deepening conversation and very different kinds of exchanges across Vienna, Museum Hours asks several questions, including what any of us hopes to gain from looking at famous paintings on a museum's walls. As one of the movie's reviewers wondered:

“Is it to look at fancy paintings and feel cultured, or is it to experience something more direct: to dare to unsheathe oneself of one’s expectations and inhibitions, and truly embrace what a work of art can offer? And then, how could one carry that open mindset to embrace all of life itself? With patient attention and quiet devotion, these are challenges that this film dares to tackle.”

That much open-mindedness is a heady prescription, and probably impossible to sustain. But sometimes it's good to be reminded of how much we're missing, to remove at least some of our blinders, and to discover how much we can still notice when we try.

Note: this post was adapted from my October 14, 2018 Newsletter.
