David Griesing | Work Life Reward Author | Philadelphia


Whose Values Will Save Us From Our Technology?

March 3, 2019 By David Griesing

When travelers visited indigenous people in remote places, they often ran into trouble when they took out their cameras to take pictures. The locals frequently turned away, in what was later explained as a fear of having their souls stolen by the camera’s image.

After all, who wants to let a stranger take your soul away? 

Indigenous people have an interesting relationship between their experience and their representations of it. The pictures here are of embroideries made by the Kuna people who live on remote islands off the coast of Panama. I don’t know what these abstractions mean to the Kuna, but I thought they might stand in here for representations of something that’s essential to them, like their souls.

Technology today is probing—with the aim of influencing, for both good and bad purposes—the most human and vital parts of us. For years, the alarm bells have been going off about the impact of video games (and more recently their on-line multi-player successors) on our kids, because kids are really “each of us” at our most impressionable. Unlike many indigenous people, however, most kids are blissfully unaware that somebody might be stealing something from them or that their behavior is being modified while they are busy having fun. And it’s not just kids today who become engaged and vulnerable to those behind their screens whenever they’re playing, shopping or exploring on-line.

Because of their ability to engage and distract, the tightly controlled environments of on-line games provide behavioral scientists, and the others who are watching us play, a near-perfect Petri dish in which to study the choices we make in a game’s virtual world and the ways those choices can be influenced. As a result of the data being gathered, the “man behind the curtain” has become increasingly adroit at deepening our engagement, anticipating our behavior once we’re “on the hook,” manipulating what we do next, and ensuring that we stay involved in the game as long as he wants us to.

Games benefit their hosts by selling stuff to players, mostly through ads and marketing play-related gear. Game hosts are also selling the behavioral information they gather from players. Their playbook is the same as Facebook’s: monetizing behavioral data about users by selling it to whoever wants to influence our decision-making in all aspects of our lives and work.

When it comes to gathering particularly rich pools of behavioral data, two games are noteworthy today. Fortnite has become one of the most successful games in history in what seems like a matter of months, while Tetris, one of the earliest blockbuster video games, has recently been updated for an even wider demographic. This week’s stories about them illustrate the potency of their interactions with players; the precision of the existing behavioral data that these games have utilized; the even more refined behavioral data they are currently gathering on players; and the risks of addiction and other harms to the humans who are playing them.

These stories amount to a cautionary tale, affecting not just the games we play but all of our screen-facing experiences. Are we, as citizens of the enlightened, freedom-loving West, equipped to “save our souls” from these aspiring puppet-masters or is the paternalistic, more harmony-seeking East (particularly China with its great firewalls and social monitors) finding better solutions? It’s actually a fairly simple question— 

Whose values will save us from our technology?

1. The Rise of Fortnite and Tetris

Five years or an on-line lifetime ago, I wrote about “social benefit games” like WeTopia, wondering whether they could enable us to exercise our abilities to act in “pro-social” or good ways on-line before we went out to change the world. I also worried at the time about the eavesdroppers behind our screens who were studying us while we did so.
 
I heralded their enabling quality, their potential for helping us to behave better, in a post called Game Changer:

The repetitive activities in this virtual world didn’t feel like rote learning because the over-and-over-again was embedded in the diversions of what was, at least at the front end, only a game. Playing it came with surprises  (blinking “opportunities” and “limited time offers”), cheerful reminders (to water my “giving tree” or harvest my carrots), and rewards from all of the “work” I was doing (the “energy,” “experience,” and “good will” credits I kept racking up by remembering to restock Almanzo’s store or to grow my soccer-playing community of friends).

What I’m wondering is whether this kind of immersive on-line experience can change real world behavior.
 
We assume that the proverbial rat in this maze will learn how to press the buzzer with his little paw when the pellets keep coming.
 
Will he (or she) become even more motivated if he can see that a fellow rat, outside his maze, also gets pellets every time he presses his buzzer?
 
And what happens when he leaves the maze?
 
Is this really a way to prepare the shock troops needed to change the world?

In a second post, I looked at what “the man behind the curtain” was doing while the game’s players were learning how to become better people:

He’s a social scientist who has never had more real time information about how and why people behave in the ways that they do (not ever!) than he can gather today by watching hundreds, sometimes even millions of us play these kinds of social games.
 
Why you did one thing and not another. What activities attracted you and which ones didn’t. What set of circumstances got you to use your credit card, or to ask your friends to give you a hand, or to play for 10 hours instead of just 10 minutes.
 
There’s a lot for that man to learn because, quite frankly, we never act more naturally or in more revealing ways than when we’re at play.

In my last post back then, I concluded that there were both upsides and downsides to these kinds of on-line game experiences and agreed to keep an open mind. I still think so, but am far more pessimistic about the downsides today than I was all those years ago. 
 
Fortnite may be the most widely played on-line game ever.  As a hunt/kill your enemies/celebrate your victories kind of adventure, it is similar to hundreds of other on-line offerings. Like them, it has also drawn far more boys than girls as players. What distinguishes the Fortnite experience is the behavioral data that has informed it.
 
In a recent Wall Street Journal article, “How Fortnite Triggered an Unwinnable War Between Parents and Their Boys,” the game’s success is attributed to the incorporation of existing behavioral data into its base programming and to machine learning that continues while the game is afoot. Dr. Richard Freed, a psychologist and the author of “Wired Child: Reclaiming Childhood in a Digital Age,” says that Fortnite has combined so many “persuasive design elements” (around 200) that it is the talk among his peers. “Something is really different about it,” he said.
 
Ofir Turel, who teaches at Cal State Fullerton and researches the effects of social media and gaming, talked about one of those persuasive design elements: how Fortnite players experience the kind of random and unpredictable rewards that keep their casino counterparts glued to their slot machines for hours. (Fifty years ago, behaviorist B.F. Skinner found that variable, intermittent rewards were more effective than a predictable pattern of rewards in shaping the habit-forming behavior of pigeons.) “When you follow a reward system that’s not fixed, it messes up our brains eventually,” said Turel. With games like Fortnite, he continued, “We’re all pigeons in a big human experiment.”
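To see why unpredictable rewards are so much harder to walk away from, consider a toy simulation (my own sketch, not anything from the Journal article or Turel’s research). The premise: a player gives up only after a dry spell longer than any they experienced while the rewards were still coming. On a fixed every-fourth-press schedule, the player quits almost immediately once the payouts stop; on an unpredictable schedule with the same average payout, they keep pressing far longer.

    import random

    # Toy model of Skinner-style reward schedules: the player quits once the
    # current dry spell exceeds the longest one seen while rewards flowed.
    def presses_before_quitting(schedule, training_presses=500):
        longest_gap, gap = 0, 0
        for press in range(training_presses):
            if schedule(press):
                longest_gap = max(longest_gap, gap)
                gap = 0
            else:
                gap += 1
        # Rewards now stop entirely; count the presses it takes for the
        # player to notice that something has changed.
        return longest_gap + 1

    fixed = lambda press: press % 4 == 3             # payout every 4th press
    variable = lambda press: random.random() < 0.25  # same rate, unpredictable

    random.seed(1)
    print("fixed schedule, presses before giving up:   ", presses_before_quitting(fixed))
    print("variable schedule, presses before giving up:", presses_before_quitting(variable))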
 
The impact of these embedded and evolving forms of persuasion was certainly compelling for the boys who are profiled in the article. Toby is a representative kid who wants to play all of the time and whose behavior seems to have changed as a result:

Toby is ‘typically a nice kid,’ his mother said. He is sweet, articulate, creative, precocious and headstrong—the kind of child who can be a handful but whose passion and curiosity could well drive him to greatness. [In other words, the perfect Wall Street Journal reader’s pre-teen.]
 
Turn off Fortnite [however], and he can scream, yell and call his parents names. Toby gets so angry that his parents impose ‘cooling off’ periods of as long as two weeks. His mother said he becomes less aggressive during those times. The calming effect wears off after Fortnite returns.
 
Toby’s mother has tried to reason with him. She has also threatened boarding school. ‘We’re not emotionally equipped to live like this,’ she tells him. ‘This is too intense for the other people living here.’

Of course, it could be years before psychologists and other researchers study larger samples of boys playing Fortnite and report their findings.
 
In the meantime, there was also a story about Tetris in the newspapers this week. Some of you may remember the game from the era of Pokémon. (Essentially, you maneuver falling blocks or clusters of blocks into the gaps in a container at the bottom of the screen.) How could a simple, time-consuming, 1980s-era diversion like this be harmful, you ask? This time the game seems to embed an endless desire for distraction instead of aggression.
 
A 1994 article in Wired magazine identified something called “the Tetris Effect.” Long after they had played the game, many players reported seeing floating blocks in their minds or imagining real-life objects fitting together. At the time, Wired suggested that the game’s effect on the brain made it a kind of “electronic drug” or “pharmatronic.” The players sampled in this week’s article added further descriptors to the Tetris Effect.
 
One player named Becky Noback says:

You get the dopamine rush from stacking things up in a perfect order and having them disappear—all that classic Tetris satisfaction. It’s like checking off a to-do list. It gives you this sense of accomplishment.

Another player, Jeremy Ricci, says:

When everything’s clicking, and you’re dropping your pieces, you get into this trancelike rhythm. It’s hard to explain, but when you’re in virtual-reality mode, there’s things going beneath you, and to the side of you, and the music changes, and you’re getting all those elements at the same time, you get a surreal experience.

The article recounts that a Twitter search of both “Tetris Effect” and “cry” or  “tears” will uncover tweets where players are wondering: “Why am I tearing up during a Tetris game?” testifying to the game’s deep emotional resonance. Reviewers of the newest version of the game (out in December) have also called it “spiritual,” “transcendental,” and “an incredible cosmic journey.”
 
What is prompting these reactions? While the basic game is the same as ever, the newest version surrounds the block-dropping action with fantastical environments, ambient new age music, and, occasionally, a soothing lyric like “what could you be afraid of?” Add virtual reality goggles, and a player can float in space surrounded by stars while luminescent jellyfish flutter by and mermaids morph into dolphins. When a player drops in his or her blocks, the audio pulses gently and the hand-held controller vibrates. While the new version of the game may seem aimed at seekers of “trippy experiences,” it is in fact being marketed as an immersive stress-relieving diversion. It is here that Tetris aims for a market beyond the usual game-playing crowd, which skews younger and more male. (Think of all those new pigeons!) Almost everyone wants a stress reliever, after all.
 
You can see a preview of the new Tetris for yourself (with or without your virtual reality headsets) via this link.
 
These early assessments of Fortnite and Tetris provide only anecdotal evidence, but we seem to be entering a strange new world where a game’s interface, and those behind it who are gathering information and manipulating our behavior, claim ever more of our attention while eroding our healthy detachment and our ability to think for ourselves.

Another one from the Kuna people, San Blas Islands, Panama

2. Go East or Go West?

In “Fortnite Addiction: China Has the Answer,” David Mattin discusses China’s assessment of the problem, the solution that its social monitors are implementing today, and why their approach just might make sense for us too. Mattin is the Global Head of Trends and Insights (a great job title!) at TrendWatching and sits on the World Economic Forum’s Global Futures Council. He put up his provocative post on Medium recently.

It’s been widely reported that China is rolling out what Mattin calls “an unprecedented, tech-fueled experiment in surveillance and social control.” Under this system, each person is assigned a “social credit rating” that scores his or her worth as a citizen. It is a system that seems shocking to us in the West, but it follows centuries of maintaining a social order based on respect for elders (from the “celestial kingdom of rulers” on down) and a quest for harmony in the community as a whole. The Chinese government intends to have a compulsory rating system in place nationwide by 2020. Individuals with low social credit scores have already been denied commercial loans, building permits and school admissions. Mattin reports that low scorers have also been blocked from taking 11.4 million flights and 4.2 million train rides. This system is serious about limiting personal freedoms for the sake of collective well-being.

Like many of us, China’s monitors have become alarmed by reports of Fortnite addiction, and Tencent, the world’s sixth-largest internet company, recently started using camera-enabled facial recognition technology to restrict the amount of time that young people play Honor of Kings, a multi-player video game similar to Fortnite. As in the West, the concern is about the impact of these gaming technologies on young people’s developing brains and life prospects.

Under government pressure, Tencent first trialled the new system back in October. Now they’ve announced they’ll implement facial recognition-based age restrictions on all their games in 2019. Under 12s will be locked out after one hour per day; 13–18-year olds are allowed two hours. And here’s the crucial detail: the system is fuelled by the national citizen database held by China’s Ministry of Public Security. Yes, if you’re playing Honor of Kings in China now, you’re being watched via your webcam or phone camera and checked against a vast government database of headshots.
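Strip away the facial recognition step that verifies who is holding the phone, and the announced restrictions reduce to a simple age-tiered lockout. Here is a hypothetical sketch of that rule logic; the names and structure are mine, since Tencent has not published its actual implementation.

    from typing import Optional

    # Hypothetical sketch of the age-tiered lockout described above.
    DAILY_LIMITS_MINUTES = {"under_13": 60, "teen_13_to_18": 120}

    def daily_limit(age: int) -> Optional[int]:
        """Daily play limit in minutes, or None for adults (no limit)."""
        if age < 13:
            return DAILY_LIMITS_MINUTES["under_13"]
        if age <= 18:
            return DAILY_LIMITS_MINUTES["teen_13_to_18"]
        return None

    def may_keep_playing(age: int, minutes_played_today: int) -> bool:
        """True if this verified player is still under today's limit."""
        limit = daily_limit(age)
        return limit is None or minutes_played_today < limit

    # An 11-year-old is locked out at the one-hour mark:
    assert may_keep_playing(11, 45) and not may_keep_playing(11, 60)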

Just because we can’t imagine it happening here doesn’t mean it’s the wrong approach, and in the most interesting part of his argument, Mattin partially explains why.
 
He begins with the observation that priorities in Chinese society and the trade-offs they’re making are different—something that’s hard (if not impossible) for most of us Westerners, overtaken by our feelings of superiority, to understand.  
 
– “Enlightenment liberal values are not  the only way to produce a successful society” and 
 
– “value judgments are trade-offs; you can have a bit more of one good if you tolerate having a bit less of another.”
 
Indeed, it is how all ethical systems work; some values are always more important than others in decision-making. Mattin drives his point home this way:

What western liberal democrats have never had to countenance seriously is the idea that theirs are not the only values mandated by reason and morality; that they’re not the universal end point of history and the destination for all successful societies. In fact, they’re just some values among many others that are equally valid. And choosing between ultimate values is always a case of trading some off against others.

Mattin rubs it in even further with an uncomfortable truth and a question for additional pondering.  
 
For us, as the supposed champions of freedom and the rights of every individual, the uncomfortable truth is that every single day “we’re actively deciding (albeit unconsciously) that we hold other values — such as convenience and distraction — above that of individual liberty.” So it is not really government interference with our freedoms that we reject; it is anyone else’s right to interfere with our convenience or our right to be distracted as much as we want anytime that we want. Of course, I’ve been making a similar argument when urging that new restraints be placed on tech giants like Facebook, Amazon and Google. However uncomfortable it is to hear, our insatiable desires for convenience and distraction are simply not more important than preventing the harms these companies are causing to our political process, privacy rights, and competitive markets. Even so, in the U.S. at least, we seem to be making very little progress in any of these areas because we supposedly stand for freedom.
 
Mattin’s what-if question to mull over is as follows. What if the evidence mounts that excessive screen time playing games and otherwise really is damaging young people’s (or maybe everyone’s) minds and that China’s government-imposed time restrictions really do limit the damage? How will the West respond to “the spectacle of a civilisation founded on a very different package of values — but one that can legitimately claim to promote human flourishing more vigorously than their own”?

3. Whose Values Should Help Us To Decide?

I have a few additional questions of my own.
 
If empowering a Big Brother (or Sister) like China’s in the West is unpalatable, how can a distracted public that is preoccupied by its conveniences be roused enough to counter tech-related harms with democratically determined cures?
 
Do we need to be confronted by an epidemic of anti-social gamers before we act?  Since an epidemic of opioid addiction and “deaths of despair” hasn’t roused the citizenry (or its elected representatives) to initiate a meaningful response, why would it be any different here?
 
Even if we had The Will to pursue solutions, could safety nets ever be put into place quickly enough to protect the generations that are playing Fortnite, Tetris and other games today? After all, democracy is cumbersome, time-consuming.
 
And continuing this thought, can democratic governments ever hope to “catch up to” and protect their citizens from rapidly evolving and improving technologies with troubling human impacts? Won’t the men and women behind our screens always be ahead of a non-authoritarian government’s ability to constrain them?
 
I hope you’ll let me know what you think the answers might be.

This post is adapted from my March 3, 2019 newsletter.


These Tech Platforms Threaten Our Freedom

December 9, 2018 By David Griesing

We’re being led by the nose about what to think, buy, do next, or remember about what we’ve already seen or done.  Oh, and how we’re supposed to be happy, what we like and don’t like, what’s wrong with our generation, why we work. We’re being led to conclusions about a thousand different things and don’t even know it.

The image that captures the erosion of our free thinking by influence peddlers is the frog in the saucepan. The heat is on, the water’s getting warmer, and by the time it’s boiling it’s too late for her to climb back out. Boiled frog, preceded by pleasantly warm and oblivious frog, captures the critical path pretty well. But instead of slow cooking, it’s shorter and shorter attention spans, the slow retreat of perspective and critical thought, and the final loss of freedom.

We’ve been letting the control booths behind the technology reduce the free exercise of our lives and work and we’re barely aware of it. The problem, of course, is that the grounding for good work and a good life is having the autonomy to decide what is good for us.

This kind of tech-enabled domination is hardly a new concern, but we’re wrong in thinking that it remains in the realm of science fiction.

An authority’s struggle to control our feelings, thoughts and decisions was the theme of George Orwell’s 1984, which was published in 1949, 35 years before the fateful year that he envisioned. “Power,” said Orwell, “is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.” Power persuades you to buy something when you don’t want or need it. It convinces you about this candidate’s, that party’s or some country’s evil motivations. It tricks you into accepting someone else’s motivations as your own. In 1984, free wills were weakened and constrained until they were no longer free. “If you want a picture of the future,” Orwell wrote, “imagine a boot stamping on a human face—for ever.”

Maybe this reflection of the present seems too extreme to you.

After all, Orwell’s jackbooted fascists and communists were defeated by our Enlightenment values. Didn’t the first President Bush, whom we buried this week, preside over some of it? The authoritarians were down and seemed out in the last decade of the last century—Freedom Finally Won!—which just happened to be the very same span of years when new technologies and communication platforms began to enable the next generation of dominators.

(There is no true victory over one man’s will to deprive another of his freedom, only a truce until the next assault begins.)

Twenty years later, in his book Who Owns the Future? (2013), Jaron Lanier argued that a new battle for freedom must be fought against powerful corporations fueled by advertisers and other “influencers” who are obsessed with directing our thoughts today.

In exchange for “free” information from Google, “free” networking from Facebook, and “free” deliveries from Amazon, we open our minds to what Lanier calls “siren servers,” the cloud computing networks that drive much of the internet’s traffic. Machine-driven algorithms collect data about who we are to convince us to buy products, judge candidates for public office, or determine how the majority in a country like Myanmar should deal with a minority like the Rohingya.

Companies, governments, groups with good and bad motivations use our data to influence our future buying and other decisions on technology platforms that didn’t even exist when the first George Bush was president but now seem indispensable to nearly all of our commerce and communication. Says Lanier:

When you are wearing sensors on your body all the time, such as the GPS and camera on your smartphone and constantly piping data to a megacomputer owned by a corporation that is paid by ‘advertisers’ to subtly manipulate you…you are gradually becoming less free.

And all the while we were blissfully unaware that this was happening because the bath was so convenient and the water inside it seemed so warm. Franklin Foer, who addresses tech issues in The Atlantic and wrote 2017’s World Without Mind: The Existential Threat of Big Tech, talks about this calculated seduction in an interview he gave this week:

Facebook and Google [and Amazon] are constantly organizing things in ways in which we’re not really cognizant, and we’re not even taught to be cognizant, and most people aren’t… Our data is this cartography of the inside of our psyche. They know our weaknesses, and they know the things that give us pleasure and the things that cause us anxiety and anger. They use that information in order to keep us addicted. That makes [these] companies the enemies of independent thought.

The poor frog never understood that accepting all these “free” invitations to the saucepan meant that her freedom to climb back out was gradually being taken away from her.

Of course, we know that nothing is truly free of charge, with no strings attached. But appreciating the danger in these data driven exchanges—and being alert to the persuasive tools that are being arrayed against us—are not the only wake-up calls that seem necessary today. We also can (and should) confront two other tendencies that undermine our autonomy while we’re bombarded with too much information from too many different directions. They are our confirmation bias and what’s been called our illusion of explanatory depth.

Confirmation bias leads us to stop gathering information when the evidence we’ve gathered so far confirms the views (or biases) that we would like to be true. In other words, we ignore or reject new information, maintaining an echo chamber of sorts around what we’d prefer to believe. This kind of mindset is the opposite of self-confidence, because all we’re truly interested in doing outside ourselves is searching for evidence to shore up our egos.

Of course, the thought controllers know about our propensity for confirmation bias and seek to exploit it, particularly when we’re overwhelmed by too many opposing facts, have too little time to process the information, and long for simple black and white truths. Manipulators and other influencers have also learned from social science that our reduced attention spans are easily tricked by the illusion of explanatory depth, or our belief that we understand things far better than we actually do.

The illusion that we understand things better than we actually do extends to anything that we can misunderstand. It comes about because we consume knowledge widely but not deeply, and since that is rarely enough for understanding, our egos claim that we know more than we do. We all know that ignorant people are the most over-confident in their knowledge, but how easily we delude ourselves about the majesty of our own ignorance. For example, I regularly ask people questions about all sorts of things that they might know about. It’s almost the end of the year as I write this, and I can count on one hand the number of them who have responded to my questions by saying “I don’t know” over the past twelve months. Most have no idea how little understanding they bring to whatever they’re talking about. It’s simply more comforting to pretend that we have all of this confusing information fully processed and under control.

Luckily, for confirmation bias or the illusion of explanatory depth, the cure is as simple as finding a skeptic and putting him on the other side of the conversation so he will hear us out and respond to or challenge whatever it is that we’re saying. When our egos are strong enough for that kind of exchange, we have an opportunity to explain our understanding of the subject at hand. If, as often happens, the effort of explaining reveals how little we actually know, we are almost forced to become more modest about our knowledge and less confirming of the biases that have taken hold of us.  A true conversation like this can migrate from a polarizing battle of certainties into an opportunity to discover what we might learn from one another.

The more that we admit to ourselves and to others what we don’t know, the more likely we are to want to fill in the blanks. Instead of false certainties and bravado, curiosity takes over—and it feels liberating precisely because becoming well-rounded in our understanding is a well-spring of autonomy.

When we open ourselves like this instead of remaining closed, we’re less receptive to, and far better able to resist, the “siren servers” that would manipulate our thoughts and emotions by playing to our biases and illusions. When we engage in conversation, we also realize that devices like our cell phones and platforms like our social networks are, in Foer’s words, actually “enemies of contemplation” that are “preventing us from thinking.”

Lanier describes the shift from this shallow tech-driven stimulus/response to a deeper assertion of personal freedom in a profile that was written about him in the New Yorker a few years back.  Before he started speaking at a South-by-Southwest Interactive conference, Lanier asked his audience not to blog, text or tweet while he spoke. He later wrote that his message to the crowd had been:

If you listen first, and write later, then whatever you write will have had time to filter through your brain, and you’ll be in what you say. This is what makes you exist. If you are only a reflector of information, are you really there?

Lanier makes two essential points about autonomy in this remark. Instead of processing on the fly, where the dangers of bias and illusions of understanding are rampant, allow what is happening “to filter through your brain,” because when it does, there is a far better chance that whoever you really are, whatever you truly understand, will be “in” what you ultimately have to say.

His other point is about what you risk becoming if you fail to claim a space for your freedom to assert itself in your lives and work. When you’re reduced to “a reflector of information,” are you there at all anymore or merely reflecting the reality that somebody else wants you to have?

We all have a better chance of being contented and sustained in our lives and work when we’re expressing our freedom, but it’s gotten a lot more difficult to exercise it given the dominant platforms that we’re relying upon for our information and communications today.

This post was adapted from my December 9, 2018 newsletter.


Choosing a Future For Self-Driving Cars

November 5, 2018 By David Griesing

It looks pretty fine, doesn’t it?

You’ll no longer need a car of your own because this cozy little pod will come whenever you need it. All you’ll have to do is to buy a 30- or 60-ride plan and press “Come on over and get me” on your phone.

You can’t believe how happy you’ll be to have those buying or leasing, gas, insurance and repair bills behind you, or to no longer have a monstrosity taking up space and waiting for those limited times when you’ll actually be driving it. Now you’ll be able to get where you need to go without any of the “sunk costs” because it’ll be “pay as you go.”

And go you will. This little pod will transport you to work or the store, to pick up your daughter or your dog while you kick back in comfort. It will be an always on-call servant that lets you stay home all weekend, while it delivers your groceries and take-out orders, or brings you a library book or your lawn mower from the repair shop. You’ll be the equivalent of an Amazon Prime customer who can have nearly every material need met wherever you are—but instead of “same day service,” you might have products and services at your fingertips in minutes because one of these little pods will always be hovering around to bring them to you. Talk about immediate gratification!

They will also drive you to work and be there whenever you need a ride home. In a fantasy commute, you’ll have time to unwind in comfort with your favorite music or by taking a nap. Having one of these pods whenever you want one will enable you to work from different locations, to have co-workers join you when you’re working from home, and to work while traveling instead of paying attention to the road. You can’t believe how much your workday will change.

Doesn’t all this money saving, comfort, convenience and freedom sound too good to be true? Well, let’s step back for a minute.

We thought Facebook’s free social network and Amazon’s cheap and convenient take on shopping were unbelievably wonderful too—and many of us still do. So wonderful that we built them into the fabric of our lives in no time. In fact, we quickly claim new comforts, conveniences and cost savings as if we’ve been entitled to them all along. It’s only as the dazzle of these new technology platforms begins to fade into “taking them for granted” that we also begin to wonder (in whiffs of nostalgia and regret? in concerns for their unintended consequences?) about what we might have given up by accepting them in the first place.

Could it be:

-the loss of chunks of our privacy to advertisers and data-brokers who are getting better all the time at manipulating our behavior as consumers and citizens;

-the gutting of our Main Streets of brick & mortar retail, like book and hardware stores, and the attendant loss of centers-of-gravity for social interaction and commerce within communities; or

-the elimination of entry-level and lower-skilled jobs and of entire job-markets to automation and consolidation, the jobs you had as a teenager or might do again as you’re winding down, with no comparable work opportunities to replace them?

Were the efficiency, comfort and convenience of these platforms as “cost-free” as they were cracked up to be? Is Facebook’s and Amazon’s damage already done and largely beyond repair? Have tech companies like them been defining our future or have we?

Many of us already depend on ride-sharing companies like Uber and Lyft. They are the harbingers of a self-driving vehicle industry that promises to disrupt our lives and work in at least the following ways. Self-driving vehicles will largely eliminate the need to own a car. They will transform our transportation systems, impacting public transit, highways and bridges. They will streamline how goods and services are moved in terms of logistics and delivery. And in the process, they will change how the entire “built environment” of urban centers, suburbs, and outer-ring communities will look and function, including where we’ll live and how we’ll work. Because we are in many ways “a car-driven culture,” self-driving vehicles will impact almost everything that we currently experience on a daily basis.

That’s why it’s worth thinking about this future before it arrives.

Our Future Highways

One way to help determine what the future should look like and how it should operate is to ask people—lots of them—what they’d like to see and what they’re concerned about. In fact, it’s an essential way to get public buy-in to new technology before some tech company’s idea of that future is looking us in the eye, seducing us with its charms, and hoping we won’t notice its uglier parts.

When it comes to self-driving cars, one group of researchers is seeking informed buy-in by using input from the public to influence the drafting of the decision-making algorithms behind these vehicles. In the so-called Moral Machine Experiment, these researchers asked people around the world for their preferences regarding the moral choices that autonomous cars will be called upon to make, so that this new technology can match human values as well as its developers’ profit motives. In an article that just appeared in the journal Nature, the following remarks describe their ambitious objective.

With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge we deployed the Moral Machine, an on-line experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions [involving individual moral preferences] in ten languages from millions of people in 233 different countries and territories. Here we describe the results of this experiment…

Never in the history of humanity have we allowed a machine to autonomously decide who shall live and who shall die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theater of military operations; it will happen in the most mundane aspect of our lives, everyday transportation.  Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers who will regulate them.

For a sense of the moral guidance the Experiment was seeking, think of an autonomous car that is about to crash but cannot save everyone in its path. Which pre-programmed trajectory should it choose? One which injures (or kills) two elderly people while sparing a child? One which spares a pedestrian who is waiting to cross safely while injuring (or killing) a jaywalker? You see the kinds of moral quandaries we will be asking these cars to make. If peoples’ moral preferences can be taken into account beforehand, the public might be able to recognize “the human face” in a new technology from the beginning instead of having to attempt damage control once that technology is in use.

Strong Preferences, Weaker Preferences

To collect its data, the Moral Machine Experiment asked millions of global volunteers to consider accident scenarios that involved 9 different moral preferences: sparing humans (versus pets); staying on course (versus swerving); sparing passengers (versus pedestrians); sparing more lives (versus fewer lives); sparing men (versus women); sparing the young (versus the old); sparing pedestrians who cross legally (versus jaywalkers); sparing the fit (versus the less fit); and sparing those with higher social status (versus lower social status).

The challenges behind the Experiment were daunting and much of the article is about how the researchers conducted their statistical analysis. Notwithstanding these complexities, three “strong” moral preferences emerged globally, while certain “weaker” but statistically relevant preferences suggest the need for modifications in algorithmic programming among the three different “country clusters” that the Experiment identified.

The vast majority of participants in the Experiment expressed a “strong” moral preference for saving a life instead of refusing to swerve, saving as many lives as possible if an accident is imminent, and saving young lives wherever possible.

Among “weaker” preferences, there were variations among the Experiment’s three country clusters: Western (Europe and North America), Eastern (most of Asia) and Southern (including Latin America). For example, the preference for sparing young (as opposed to old) lives is much less pronounced in the Eastern cluster and much stronger in the Southern cluster. Countries that are poorer and have weaker enforcement institutions are more tolerant than richer and more law-abiding countries of people who cross the street illegally. Differences between clusters might result in adjustments to the decision-making algorithms of the self-driving cars that operate there.
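The Nature paper’s statistics are far more sophisticated than this, but the basic move (scoring how often respondents in each cluster spare one kind of character over another) can be illustrated with a toy calculation. The cluster names are from the Experiment; the handful of responses below are invented purely for illustration.

    from collections import defaultdict

    # Invented sample data: (country cluster, whether the respondent chose
    # the outcome that spared the young character over the old one).
    responses = [
        ("Western", True), ("Western", True), ("Western", False),
        ("Eastern", False), ("Eastern", True), ("Eastern", False),
        ("Southern", True), ("Southern", True), ("Southern", True),
    ]

    totals = defaultdict(lambda: [0, 0])  # cluster -> [spared_young, total]
    for cluster, spared_young in responses:
        totals[cluster][0] += int(spared_young)
        totals[cluster][1] += 1

    for cluster, (young, total) in totals.items():
        print(f"{cluster}: spared the young in {young}/{total} dilemmas ({young/total:.0%})")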

When companies have data about what people view as “good” or “bad”, “better” or “worse” while a new technology is being developed, these preferences can improve the likelihood that moral harms will be identified and minimized beforehand.

Gridlock

Another way to help determine what the future should look like and how new technologies should operate is to listen to what today’s Cassandras are saying. Following their commentary and grappling with their concerns removes some of the dazzle in our hopes and grounds them more firmly in reality early on.

It lets us consider how, say, an autonomous car will fit into the ways that we live, work and interact with one another today—what we will lose as well as what we are likely to gain. For example, what industries will they change? How will our cities be different than they are now? Will a proliferation of these vehicles improve the quality of our interactions with one another or simply reinforce how isolated many of us are already in a car-dominated culture?

The Atlantic magazine hosts a regular podcast called “Crazy Genius” that asks “big questions” and draws “provocative conclusions about technology and culture.” (Many thanks to reader Matt K for telling me about it!) These podcasts are free and can be easily accessed through services like iTunes and Spotify.

A Crazy Genius episode from September called “How Self-Driving Cars Could Ruin the American City” included interviews with two experts who are looking into the future of autonomous vehicles and are alarmed for reasons beyond these vehicles’ decision-making abilities. One is Robin Chase, the co-founder of Zipcar. The “hellscape” she forecasts involves everyone using self-driving cars, once they become cheaper than current alternatives, to run our errands and summon 10-minute deliveries, producing even more sedentary lifestyles than we have already while clogging our roadways with traffic.

Without smart urban planning, the result will be infernal congestion, choking every city and requiring local governments to lay ever-more pavement down to service American automania.

Eric Avila is an historian at UCLA who sees self-driving cars in some of the same ways that he views the introduction of the interstate highway system in the 1950s. While these new highways provided access to parts of America that had not been accessible before, there was also a dark side. The 48,000 miles of new highway stimulated interstate trade and expanded development, but they also gutted urban neighborhoods, allowing the richest to take their tax revenues with them as they fled to the suburbs. “Mass transit systems [and] streetcar systems were systematically dismantled. There was national protest in diverse urban neighborhoods throughout the entire nation,” Avila recalls, and a similar urban upheaval may follow the explosion of autonomous vehicles.

Like highways, self-driving cars are not only cars; they are also infrastructure. According to Avila, if we want to avoid past mistakes, all of the stakeholders in this new technology will need to think about how they can make downtown areas more livable for humans instead of simply more efficient for these new machines. To reduce congestion, this may involve taxing autonomous vehicle use during certain times of day, limiting the number of vehicles in heavily traveled areas, regulating companies who operate fleets of self-driving cars, and capping private car ownership. Otherwise, the proliferation of cars and traffic would make most of our cities unlivable.

Once concerns like Chase’s and Avila’s are publicized, data about the public’s preferences (what’s better, what’s worse?) in these regards can be gathered just as they were in the Moral Machine Experiment. Earlier in my career, I ran a civic organization that attempted to improve the quality of Philadelphia city government by polling citizens anonymously about their priorities and concerns. While the organization did not survive the election of a reform-minded administration, information about the public’s preferences is always available when we champion the value of collecting it. All that’s necessary is sharing the potential problems and concerns that have been raised and asking people in a reliable and transparent manner how they’d prefer to address them.

In order to avoid the harms from technology platforms that we are facing today, the tech companies that are bringing us their marvels need to know far more about their intended users’ moral preferences than they seem interested in learning about today. With the right tools to be heard at our fingertips, we can all be involved in defining our futures.

This post is adapted from my November 4, 2018 newsletter.


Looking Out For the Human Side of Technology

October 28, 2018 By David Griesing

Maintaining human priorities in the face of new technologies always feels like “a rearguard action.” You struggle to prevent something bad from happening even when it seems like it may be too late.

The promise of the next tool or system intoxicates us. Smart phones, social networks, gene splicing.  It’s the super-computer at our fingertips, the comfort of a boundless circle of friends, the ability to process massive amounts of data quickly or to short-cut labor intensive tasks, the opportunity to correct genetic mutations and cure disease. We’ve already accepted these promises before we pause to consider their costs—so it always feels like we’re catching up and may not have done so in time.

When you’re dazzled by possibility and the sun is in your eyes, who’s thinking “maybe I should build a fence?”

The future that’s been promised by tech giants like Facebook is not “the win-win” that we thought it was. Their primary objectives are to serve their financial interests—those of their founder-owners and other shareholders—by offering efficiency benefits like convenience and low cost to the rest of us. But as we’ve belatedly learned, they’ve taken no responsibility for the harms they’ve also caused along the way, including exploitation of our personal information, the proliferation of fake news and jeopardy to democratic processes, as I argued here last week.

Technologies that are not associated with particular companies also run with their own promise until someone gets around to checking them–a technology like artificial intelligence or AI, for example. From an ethical perspective, we are usually playing catch-up ball with them too. If there’s a buck to be made or a world to transform, the discipline to ask “but should we?” always seems to get in the way of progress.

Because our lives and work are increasingly impacted, the stories this week throw additional light on the technology juggernaut that threatens to overwhelm us and on our “rearguard” attempts to tame it with our human concerns.

To gain a fuller appreciation of the problem regarding Facebook, a two-part Frontline documentary will be broadcast this week that is devoted to what one reviewer calls “the amorality” of the company’s relentless focus on adding users and compounding ad revenues while claiming to create the on-line “community” that all of us should want in the future. (The show airs tomorrow, October 29 at 9 p.m. and on Tuesday, October 30 at 10 p.m. EST on PBS.)

Frontline’s reporting covers Russian interference in past and current election cycles, Facebook’s role in whipping Myanmar’s Buddhists into a frenzy over its Rohingya minority, and how strongmen like Rodrigo Duterte in the Philippines have been manipulating the site to achieve their political objectives. Facebook CEO Mark Zuckerberg’s limitations as a leader are explored from a number of directions, but none as compelling as his off-screen impact on the five Facebook executives who were “given” to James Jacoby (the documentary’s director, writer and producer) to answer his questions. For the reviewer:

That they come off like deer in Mr. Jacoby’s headlights is revealing in itself. Their answers are mealy-mouthed at best, and the defensive posture they assume, and their evident fear, indicates a company unable to cope with, or confront, the corruption that has accompanied its absolute power in the social media marketplace.

You can judge for yourself. You can also ponder whether this is like holding a gun manufacturer liable when one of its guns is used to kill somebody.  I’ll be watching “The Facebook Dilemma” for what it has to say about a technology whose benefits have obscured its harms in the public mind for longer than it probably should have. But then I remember that Facebook barely existed ten years ago. The most important lesson from these Frontline episodes may be how quickly we need to get the stars out of our eyes after meeting these powerful new technologies if we are to have any hope of avoiding their most significant fallout.

Proceed With Caution

I was also struck this week by Apple CEO Tim Cook’s explosive speech at a privacy conference organized by the European Union.

Not only was Cook bolstering his own company’s reputation for protecting Apple users’ personal information, he was also taking aim at competitors like Google and Facebook for implementing a far more harmful business plan: selling user information to advertisers, reaping billions in ad revenues in exchange, and claiming that the bargain provides their search engine or social network to users for “free.” This is some of what Cook had to say to European regulators this week:

Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.

These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.

Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.

“Weaponized” technology delivered with “military efficiency.” “A data-industrial complex.” One of the benefits of competition is that rivals call you out, while directing unwanted attention away from themselves. One of my problems with tech giant Amazon, for example, is that it lacks a neck-and-neck rival to police its business practices, so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.

If data privacy were as paramount to Apple as it sounded this week, it would be impossible to reconcile Apple’s receiving more than $5 billion a year from Google to make it the default search engine on all Apple devices. However complicit in today’s tech bargains, Apple pushed its rivals pretty hard this week to modify their business models and become less cynical about their use of our personal data as the focus on regulatory oversight moves from Europe to the U.S.

Keeping Humans in the Tech Equation

Technologies that aren’t proprietary to a particular company but are instead used across industries require getting over additional hurdles to ensure that they are meeting human needs and avoiding technology-specific harms for users and the rest of us. This week, I was reading up on a positive development regarding artificial intelligence (AI) that only came about because serious concerns were raised about the transparency of AI’s inner workings.

AI’s ability to solve problems (from processing big data sets to automating steps in a manufacturing process or tailoring a social program for a particular market) is only as good as the algorithms it uses. Given concern about personal identity markers such as race, gender and sexual preference, you may already know that an early criticism of artificial intelligence was that the author of an algorithm could unwittingly build her own biases into it, leading to discriminatory and other anti-social results. As a result, various countermeasures are being undertaken to keep these kinds of biases out of AI code. With that in mind, I read a story this week about another systemic issue with AI: its “explainability.”

It’s the so-called “black box” problem. If users of systems that depend on AI don’t know how they work, they won’t trust them. Unfortunately, one of the prime advantages of AI is that it solves problems that are not easily understood by users, which presents the quandary that AI-based systems might need to be “dumbed-down” so that the humans using them can understand and then trust them. Of course, no one is happy with that result.

A recent article in Forbes describes the trust problem that users of machine-learning systems experience (“interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control”) along with some of the experts who have been feeling that anxiety (cancer specialists who agreed with a “Watson for Oncology” system when it confirmed their judgments but thought it was wrong when it failed to do so because they couldn’t understand how it worked).

In a positive development, a U.S. Department of Defense agency called DARPA (or Defense Advanced Research Projects Agency) is grappling with the explainability problem. Says David Gunning, a DARPA program manager:

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.

In other words, these systems will get better at explaining themselves to their users, thereby overcoming at least some of the trust issue.

DARPA is investing $2 billion in what it calls “third-wave AI systems…where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” according to Gunning. At least with the future of warfare at stake, a problem like “trust” in the human interface appears to have stimulated a solution. At some point, all machine-learning systems will likely be explaining themselves to the humans who are trying to keep up with them.
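DARPA has not published those systems, but one simple flavor of “explainability” can be sketched with a linear model, which can always itemize how much each input pushed its score up or down. That itemized rationale is exactly what opaque, deep models struggle to produce. The feature names and weights below are invented for illustration.

    # A self-explaining score from a linear model (illustrative values only).
    WEIGHTS = {"hours_played": 0.8, "age": -0.3, "spend_per_week": 0.5}

    def score_with_explanation(features):
        """Return a score plus a per-feature rationale, largest effect first."""
        contributions = {name: WEIGHTS[name] * value
                         for name, value in features.items()}
        total = sum(contributions.values())
        rationale = [f"{name} contributed {c:+.2f}"
                     for name, c in sorted(contributions.items(),
                                           key=lambda kv: -abs(kv[1]))]
        return total, rationale

    score, why = score_with_explanation(
        {"hours_played": 3.0, "age": 1.5, "spend_per_week": 2.0})
    print(f"score = {score:.2f}")
    for line in why:
        print(" -", line)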

Moving beyond AI, I’d argue that there is often as much “at stake” as successfully waging war when a specific technology is turned into a consumer product that we use in our workplaces and homes.

While there is heightened awareness today about the problems that Facebook poses, few were raising these concerns even a year ago despite their toxic effects. With other consumer-oriented technologies, there are a range of potential harms where little public dissent is being voiced despite serious warnings from within and around the tech industry. For example:

– how much is our time spent on social networks—in particular, how these networks reinforce or discourage certain of our behaviors—literally changing who we are?  
 
– since our kids may be spending more time with their smart phones than with their peers or family members, how is their personal development impacted, and what can we do to put this rabbit even partially back in the hat now that smart phone use seems to be a part of every child’s rite of passage into adulthood?
 
– will privacy and surveillance concerns become more prevalent when we’re even more surrounded than we are now by “the internet of things” and as our cars continue to morph into monitoring devices—or will there be more of an outcry for reasonable safeguards beforehand? 
 
– what are employers learning about us from our use of technology (theirs as well as ours) in the workplace and how are they using this information?

The technologies that we use demand that we understand their harms as well as their benefits. I’d argue that we need to become more proactive about voicing our concerns and about using the tools at our disposal (including the political process) to insist that company profit and consumer convenience are not the only measures of a technology’s impact.

Since the invention of the printing press half a millennium ago, it’s always been hard but necessary to catch up with technology and to try to tame its excesses as quickly as we can.

This post was adapted from my October 28, 2018 newsletter.


Confronting the Future of Work Together

October 21, 2018 By David Griesing

Some of us believe that entrepreneurs can lead us to a better future through their drive and innovation. Steve Jobs, Bill Gates and, at least until recently, Elon Musk fill these bubbles of belief. We’ve also come to believe that these masters of business and the organizations they lead can bring us into the warmer glow of what’s good for us—and much of the rest of the world believes in this kind of progress too. Amazon brings us the near perfect shopping experience, Google a world of information at our fingertips, Uber a ride whenever we want one, Instagram pictures that capture what everyone is seeing, the Gates Foundation the end of disease as we know it…

In the process, we’ve also become a little (if not a lot) more individualist and entrepreneurial ourselves, with some of that mindset coming from the American frontier. We’re more likely today to want to “go it alone,” to criticize those who lack the initiative to solve their own problems, to be suspicious of the government’s helping hand, and to turn away from the hard work of building communities that can problem-solve together. In the meantime, Silicon Valley billionaires, with their insight and genius, will attend to the social ills we are no longer able to address through the political process.

In this entrepreneurial age, has politics become little more than a self-serving proposition that gives us tax breaks and deregulation or is it still a viable way to pursue “what all of us want and need” in order to thrive in a democratic society?

Should we meet our daily challenges by emulating our tech titans while allowing them to improve our lives in the ways they see fit, or should we instead be strengthening our communities and solving a different set of problems that we all share together?

In many ways, the quality of our lives and our work in the future depends on the answers to these questions, and I’ve been reading two books this week that approach them from different angles, one that came out last year (“Earning the Rockies: How Geography Shapes America’s Role in the World” by Robert D. Kaplan) and the other a few weeks ago (“Winners Take All: The Elite Charade of Changing the World” by Anand Giridharadas). I recommend both of them.

There are too many pleasures in Kaplan’s “Earning the Rockies” to do justice to them here, but he makes a couple of observations that provide a useful frame for looking into our future as Americans. As a boy, Kaplan traveled across the continent with his dad and gained an appreciation, one that has driven his commentary ever since, for the sheer scale of this land and for how it formed the American people. In 2015, he took that road trip again, and these observations follow from what he saw between the East (where he got in his car) and the West (where his trip ended):

Frontiers [like America’s] test ideologies like nothing else. There is no time for the theoretical. That, ultimately, is why America has not been friendly to communism, fascism, or other, more benign forms of utopianism. Idealized concepts have rarely taken firm root in America and so intellectuals have had to look to Europe for inspiration. People here are too busy making money—an extension, of course, of the frontier ethos, with its emphasis on practical initiative…[A]long this icy, unforgiving frontier, the Enlightenment encountered reality and was ground down to an applied wisdom of ‘commonsense’ and ‘self evidence.’ In Europe an ideal could be beautiful or liberating all on its own, in frontier America it first had to show measurable results.

[A]ll across America I rarely hear anyone discussing politics per se, even as CNN and Fox News blare on monitors above the racks of whisky bottles at the local bar…An essay on the online magazine Politico captured the same mood that I found on the same day in April, 2015 that the United States initiated an historic nuclear accord with Iran, the reporter could not find a single person at an Indianapolis mall who knew about it or cared much…This is all in marked contrast to Massachusetts where I live, cluttered with fine restaurants where New Yorkers who own second homes regularly discuss national and foreign issues….Americans [between the coasts] don’t want another 9/11 and they don’t want another Iraq War. It may be no more complex than that. Their Jacksonian tradition means they expect the government to keep them safe and hunt down and kill anyone who threatens their safety…Inside these extremes, don’t bother them with details.

Moreover, this practical individualism, more concerned with living day to day than with making a pie-in-the-sky world, is found not just in the vast fly-over parts of America but is also well-represented on the coasts and (at least according to this map) in as much as half of Florida.

What do Kaplan’s heartland Americans think of entrepreneurs with their visions of social responsibility who also have the practical airs of frontier-conquering individualism?

What do the coastal elites who are farther from that frontier and more inclined towards ideologies for changing the world think about these technocrats, their companies and their solutions to our problems?

What should any of us think about these Silicon Valley pathfinders and their insistence on using their wealth and awesome technologies to “do good” for all of our sakes–even though we’ve never asked them to?

Who should be making our brave new world, us or them?

These tech chieftains and their increasingly dominant companies all live in their own self-serving bubbles, according to Giridharadas in “Winners Take All.” (The quotes below are from an interview he gave about the book last summer and an online book review that appeared in Knowledge@Wharton this week.)

Giridharadas first delivered his critique a couple of years ago when he spoke as a fellow at the Aspen Institute, a regular gathering of America’s intellectual elite. He argued that these technology companies believe that all the world’s problems can be solved by their entrepreneurial brand of “corporate social responsibility,” and that their zeal for their brands and for those they want to help can be a “win-win” for both. In other words, what’s good for Facebook (Google, Amazon, Twitter, Uber, AirBnB, etc.) is also good for everyone else. The problem, said Giridharadas in his interview, is that while these companies are always taking credit for the efficiencies and other benefits they have brought, they take no responsibility whatsoever for the harms:

Mark Zuckerberg talks all the time about changing the world. He seldom calls Facebook a company — he calls it a “community.” They do these things like trying to figure out how to fly drones over Africa and beam free internet to people. And in various other ways, they talk about themselves as building the new commons of the 20th century. What all that does is create this moral glow. And under the haze created by that glow, they’re able to create a probable monopoly that has harmed the most sacred thing in America, which is our electoral process, while gutting the other most sacred thing in America, our free press.

Other harms pit our interests against theirs, even when we don’t fully realize it. Unlike a democratic government that is charged with serving every citizen’s interest, “these platform monopolists allow everyone to be part of their platform but reap the majority of benefits for themselves, and make major decisions without input from those it will affect.” According to Giridharadas, the tech giants are essentially “Leviathan princes” who treat their users like so many “medieval peasants.”

In their exercise of corporate social responsibility, there is also a mismatch between the solutions that the tech entrepreneurs can and want to bring and the problems we have that need to be solved. “Tending to the public welfare is not an efficiency problem,” Giridharadas says in his interview. “The work of governing a society is tending to everybody. It’s figuring out universal rules and norms and programs that express the value of the whole and take care of the common welfare.” By contrast, the tech industry sees the world more narrowly. For example, the fake news controversy led Facebook not to a comprehensive solution for providing reliable information but to what Giridharadas calls “the Trying-to-Solve-the-Problem-with-the-Tools-that-Caused-It” quandary.

The Tech Entrepreneur Bubble

Notwithstanding these realities, ambitious corporate philanthropy provides the tech giants with useful cover—a rationale for us “liking” them however much they are also causing us harm. Giridharadas describes their two-step like this:

What I started to realize was that giving had become the wingman of taking. Generosity had become the wingman of injustice. “Changing the world” had become the wingman of rigging the system…[L]ook at Andrew Carnegie’s essay “Wealth”. We’re now living in a world created by the intellectual framework he laid out: extreme taking, followed by and justified by extreme giving.

Ironically, the heroic model of the benevolent entrepreneur is sustained by our comfort with elites “who always seem to know better” on the right and left coasts of America and with rugged individualists who have managed to make the most money in America’s heartland. These leaders and their companies combine utopian visions based on business efficiency with the aura of success that comes with creating opportunities on the technological frontier. Unfortunately, their approach to social change also tends to undermine the political debate that is necessary for the many problems they are not attempting to solve.

In Giridharadas’ mind, there is no question that these social responsibility initiatives “crowd out the public sector, further reducing both its legitimacy and its efficacy, and replace civic goals with narrower concerns about efficiency and markets.” We get not only the Bezos, Musk or Gates vision of social progress but also the further sidelining of public institutions like Congress, and our state and local governments. A far better way to create the lives and work that we want in the future is by reinvigorating our politics.

* * *

Robert Kaplan took another hard look at the land that has sustained America’s spirit until now. Anand Giridharadas challenged the tech elites that are intent on solving our problems in ways that serve their own interests. One sees an opportunity and the other an obstacle to the future that they want. I don’t know exactly how the many threads exposed by these two books will come together and help us to confront the daunting array of challenges we are facing today, including environmental change, job loss through automation, and the failure to understand the harms from new technologies (like social media platforms, artificial intelligence and genetic engineering) before those harms are upon us. Still, I think at least two of their ideas will be critical in the days ahead.

The first is our need to be skeptical of the bubbles that limit every elite’s perspective, and to become more knowledgeable as individuals and citizens about the problems we face and their possible solutions. That means resisting the temptation to hand over that basic responsibility to people or companies that keep telling us they are smarter, wiser or more successful than we are, and that all we have to do is trust them, given all of the wonderful things they are doing for us. We need to identify our shared dreams and figure out how to realize them instead of giving that job to somebody else.

The second idea involves harnessing America’s frontier spirit one more time. There is something about us “as a people” that is motivated by the pursuit of practical, one-foot-in-front-of-the-other objectives (instead of ideological ones) and that trusts in our ability to claim the future that we want. Given Kaplan’s insights and Giridharadas’ concerns, we need political problem-solving on the technological frontier in the same way that we once came together to tame the American West. It’s where rugged individualism joins forces with other Americans who are confronting similar challenges.

I hope that you’ll get an opportunity to dig into these books, that you enjoy them as much as I have, and that you’ll let me know what you think about them when you do.

This post was adapted from my October 21, 2018 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Entrepreneurship Tagged With: Anand Giridharadas, frontier, future of work, Robert D. Kaplan, rugged individualism, Silicon Valley, tech, tech entrepreneurs, technology
