
Companies That Wear Their Values on Their Sleeves

March 31, 2019 By David Griesing

We lead with our values because it’s easier to connect with somebody who shares our priorities. For companies that want our loyalty, though, it’s a trickier proposition: we have rarely been so divided about what is important to us.

Deepening divisions over our common commitments make Apple’s roll-out of a suite of new services this week both riveting—and potentially fraught.

As the company’s newly announced services like AppleTV+ and AppleNews+ tie themselves more closely to Hollywood and the cover stories that sell glossy magazines, is Apple cloaking itself in values that could alienate or confuse many of the customers it aims to bond with more closely?

In 1997, on the eve of launching a new, national advertising campaign, Steve Jobs gave a short talk about the link between Apple’s values and its customers. “At our core, we believe in making the world a better place,” he said.  So our ads will “honor people who have changed the world for the better—[because] if they ever used a computer, it would be a Mac.”  In its marketing, Apple aligned itself with tech innovators like Thomas Edison, a genius who had already changed the world as Jobs and Apple were about to do.

A little more than 20 years later, with Apple having followed Edison’s light-bulb company into industry dominance, the question is whether it would have a better chance of preserving that dominance by once again aligning itself with technology innovators who have already changed the world, rather than with celebrities like Steven Spielberg and Oprah Winfrey, who aim to do so through their own approaches to social betterment. To set the stage for your possible reactions, here is a link, beginning with Spielberg and ending with Winfrey, that covers the highlights from Apple’s new services launch this past week in a vivid, all-over-the-place 15 minutes.

I should confess that I have a bit of a horse in this race because I want Apple to keep winning by bringing me world-class products and customer care, but I’m not sure that it can by pursuing services (like entertainment, news and games through the new AppleArcade) that put the company in lockstep with industries that muddy its focus and dilute its values proposition.

Instead of bringing me a global version of Oprah’s book club or more of Steven Spielberg’s progressive reminiscences, I was hoping to hear that Apple would be providing—even in stages over months or years—an elegantly conceived and designed piece of technology that would (1) allow me to cut the cable cord with my internet provider while (2) upgrading me to an interface where I could see and pay for whatever visual programming I choose whenever I want it. An ApplePay-enabled entertainment router. Now that would be the kind of tech innovation that would change my world for the better again (and maybe yours too) while staying true to its founder’s messaging from twenty-odd years ago.

Tech commentator Christopher Mims talked this week about why Apple was claiming values (like those embedded in Oprah’s notion of social responsibility) to maintain its market share, but he never really debated whether it should do so. I’d argue that Apple should continue to make its own case for the social benefits of its tech solutions instead of mixing its message about priorities with the aspirations of our celebrity culture.

When it comes to Silicon Valley’s mostly hypocritical claims about social responsibility, I start with the skepticism of observers like Anand Giridharadas. To him, Facebook, Google, Amazon and Apple all use values as tools to gain the profits they’re after, covering their self-serving agendas with feel-good marketing.

In a post last October, I discussed some of his observations about Facebook (and by implication, most of the others) in the context of his recent book, Winners Take All.

The problem, said Giridharadas, is that while these companies are always taking credit for the efficiencies and other benefits they have brought, they take no responsibility whatsoever for the harms… In their exercise of corporate social responsibility, there is a mismatch between the solutions that the tech entrepreneurs can and want to bring and the problems we have that need to be solved. “Tending to the public welfare is not an efficiency problem. The work of governing a society is tending to everybody. It’s figuring out universal rules and norms and programs that express the value of the whole and take care of the common welfare.” By contrast, the tech industry sees the world more narrowly. For example, the fake news controversy led Facebook not to a comprehensive solution for providing reliable information but to what Giridharadas calls “the Trying-to-Solve-the-Problem-with-the-Tools-that-Caused-It” quandary.

In the face of judgments like his, I’d argue that Apple should be vetting its corporate messaging with the inside guidance of those who understand the power of values before it squanders the high ground it still holds. 
 
Beyond “sticking with the tech innovations that it’s good at” and the Edison-type analogies that add to their luster, what follows are three proposals for how the company might build on its values-based loyalty while still looking us in the eye when it does so.
 
Each one has Apple talking about what its tech-appreciating customers still care about most when they think—with healthy nostalgia—about all the things that Apple has done for them already.

[Image: the fate that the company is aiming to avoid]

1.         Apple should keep reminding us about its unparalleled customer service

The unbelievable service I have come to expect from Apple feeds my brand loyalty. I feel that we share the value of trustworthiness. When someone relies on me for something, I stand behind it instead of telling them I don’t have the time or it’s too expensive to fix. For me, Apple has consistently done the same.
 
So I was surprised when I had to argue a bit harder than I thought was necessary for Apple’s battery fix for an older iPhone, and I started following other customer complaints against the company to see if a change of priorities was in the air. Since I’m writing to you on a MacBook Air, I’ve paid particular attention as problems with the Air’s later-generation keyboards have apparently escalated to the point that national class-action litigation is in the offing. Not unlike the iPhone battery fix, Apple has gone on record as being willing to replace any sticky keyboard for free within 4 years of purchase, but is it really as easy as it sounds? As recently as last week, there was this plea to Apple from a tech reviewer in a national newspaper who had just profiled a litany of customer difficulties:

For any Apple engineers and executives reading: This is the experience you’re providing customers who shell out $1200 or more—sometimes a lot more. This is the experience after THREE attempts at this keyboard design.

When you are one of the richest companies in history, relatively inexpensive problems like this need to be solved before they get this far. A reputation for world-class customer service is a terrible thing to waste. Be glad to fix your technology on those rare occasions when it breaks down, and solve the technology problem with these keyboards before you sell or replace any more of them. Don’t make customers who were loyal enough to pay a premium for an Apple device take you to court because they can’t get enough of your attention any other way. Caring for your customers is a core value that needs polishing before its shine begins to fade and your customer loyalty slips away.

2.         Apple should keep telling us how different it is from Google, Facebook and Amazon

The uses that the dominant tech platforms are making of our personal data are on everyone’s mind.
 
Beyond invasions of privacy, deep concerns are also being voiced about the impact of “surveillance capitalism” on Western democracy, not only because of meddling with our elections but, even more fundamentally, because of how this new economic model disrupts “the organic reciprocities involving employment and consumption” that undergird democratic market systems. These are profound and increasingly wide-spread concerns, and Apple for one seems to share them. 
 
This is from another post last October called “Looking Out For the Human Side of Technology”: 

I was also struck this week by Apple CEO Tim Cook’s explosive testimony at a privacy conference organized by the European Union…:
 
‘Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.
 
These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.
 
Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.’
 
‘Weaponized’ technology delivered with ‘military efficiency.’ ‘A data-industrial complex.’ One of the benefits of competition is that rivals call you out, while directing unwanted attention away from themselves…so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where [companies like] Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.
 
If data privacy were as paramount to Apple as it sounded this week, it would be impossible to reconcile Apple’s receiving more than $5 billion a year from Google to make it the default search engine on all Apple devices.

In its star-studded launch of TV, News and Arcade services this week, Apple’s presenters repeatedly reiterated that none of these services would be “ad selling models” targeting Apple users. These are good reminders about Apple’s values, but while $5B is a lot of revenue to forsake by letting new purchasers of Apple devices pick their own search engines, it’s also a significant amount of support for an antithetical business model.

Not selling my data to others for use against me, like Apple’s standing behind the functionality of my devices, is a core contributor to the company’s good reputation in my mind, and never more so than today. If Apple continues to differentiate itself from its competitors on the use of our data—and I think that it should—it needs to find ways to be more forthright about its own conflicts of interest while doing so.

When you stand on your values and back them up with your actions, the competitors you are challenging will always seek to undermine you when your actions are inconsistent with your words. Why let that inconsistency be tomorrow’s headline, Tim? Why not be ready to talk more forthrightly about the quandary with Google, and how the company is trying to address it, when asked to comment before a “gotcha” story like this gets published?

3.         Apple should get ahead of its new services launch by proactively addressing likely problems with real consequences for the rest of us

In its roll-out presentation, Apple announced plans for a new service that will link players to games that Apple will be creating. Few tech areas have seen more startling impacts from the use of behavioral data that’s being gathered from players by those who are behind these on-line games. I recently talked here about how the programmers and monitors behind Fortnite and the updated Tetris games are using data about how players react to as many as 200 “persuasive design elements” in these games to enhance the intensity of the player experience while making it more addictive to boys and other susceptible populations. 

Apple’s engineers know about these issues already. Its programmers are making its new games ready for primetime as soon as next fall. To differentiate itself from others in the on-line gaming industry, to strike a more principled note than its competitors have, and to broaden the scope of Apple’s values when it comes to personal data, the company could tell us some or all of the following in the coming months:

-whether it will be using behavioral data it generates from players through real time play to make its games more absorbing or addictive;

-whether it intends to restrict certain classes of users (like pre-teen boys) from playing certain games or restrict the hours that they can play them;

-what other safeguards it will be implementing to limit the amount of “player attention” that these games will be capturing;

-whether it will be selling game-related merchandise in the Apple store so its financial incentives to encourage extensive game-playing are clear from the outset; and

-whether it will be using data about player behavior to encourage improved learning, collaborative problem-solving, community building and other so-called “pro-social” skills in any of the games it will be offering.

I have no reason to doubt that Apple is serious about protecting the user data that its devices and services generate. Its new venture into gaming provides an opportunity to build on Apple’s reputation for safeguarding the use of its customers’ information. Tim Cook and Apple need to be talking to the rest of us, both now and next fall, about how the company will apply its data-related values to problems its customers care about today in the brave new world of on-line gaming.

+ + +

Apple’s stated values will hold its current customers and attract new ones when there is “a match” between the company’s solutions and the problems the rest of us have that need solving. Affiliation and loyalty grow when there are shared priorities in communities, in politics and in the marketplace.

That means Apple should keep talking about what its tech-appreciating customers care about most in the light of the wonders that Apple has given us already. Despite its recently announced forays into entertainment, the company should never take its eye too far from what it does best—which is to make world-changing devices—even when they take more time to develop than short-term financial performance seems to demand. 

When a company knows what it is and acts accordingly, it can always take risks for the rewards that can come from wearing its values on its sleeves. 

This post was adapted from my March 31, 2019 newsletter. You can subscribe and receive it in your inbox every Sunday morning.


The Human Purpose Behind Smart Cities

March 24, 2019 By David Griesing

It is human priorities that should be driving Smart City initiatives, like the ones in Toronto profiled here last week. 

Last week’s post also focused on a pioneering spirit in Toronto that many American cities and towns seem to have lost. While we entrench in the moral righteousness of our sides in the debate—including, for many, a distrust of collective governance, regulation and taxation—we drift towards an uncertain future instead of claiming one that can be built on values we actually share. 

In its King Street and Quayside initiatives, Toronto is actively experimenting with the future it wants based on its residents’ commitment to sustaining their natural environment in the face of urban life’s often toxic impacts.  They’re conducting these experiments in a relatively civil, collaborative and productive way—an urban role model for places that seem to have forgotten how to work together. Toronto’s bold experiments are also utilizing “smart” technologies in their on-going attempts to “optimize” living and working in new, experimental communities.

During a short trip this week, I got to see the leading edges of New York City’s new Hudson Yards community (spread over 28 acres with an estimated $25 billion price tag) and couldn’t help being struck by how much it catered to those seeking more luxury living, shopping and workspaces than Manhattan already affords. In other words, I was struck by how much it could have been a bold experiment in new ways for all of the city’s citizens to live and work in America’s first city over the next half-century, and by how little it actually was. A hundred years ago, one of the largest immigrant migrations in history made New York City the envy of the world. Perhaps the next century, unfurling today, belongs to newer cities like Toronto, where over half of the current residents are foreign-born.

Still, even with its laudable ambition, it will not be easy for Toronto and other future-facing communities to get their Smart City initiatives right, as several of you were also quick to remind me last week. Here is a complaint from a King Street merchant that one of you (thanks Josh!) found and forwarded; it casts what is happening in Toronto in a less favorable light than the one I focused on last week:

What a wonderful story. But as with [all of] these wonderful plans some seem to be forgotten. As it appears are the actual merchants. Google certainly a big winner here. Below an excerpt written by one of the merchants:
   
‘The City of Toronto has chosen the worst time, in the worst way, in the worst season to implement the pilot project. Their goal is clearly to move people through King St., not to King St. For years King St. was a destination, now it is a thoroughfare.
 
‘The goal of the King St. Pilot project was said to be to balance three important principles: to move people more effectively on transit, to support business and economic prosperity and to improve public space. In its current form, the competing principles seem to be decidedly tilted away from the economic well-being of merchants and biases efficiency over convenience. The casual stickiness of pedestrians walking and stopping at stores, restaurants and other merchants is lost.
 
‘Additionally, the [transit authority] TTC has eliminated a number of stops along King St., forcing passengers to walk further to enter and disembark streetcars, further reducing pedestrian traffic and affecting areas businesses. The TTC appears to believe that if they didn’t have to pick up and drop off people, they could run their system more effectively.
 
‘The dubious benefits of faster street car traffic on King St. notwithstanding, the collateral damage of the increased traffic of the more than 20,000 cars the TTC alleges are displaced from King St to adjoining streets has turned Adelaide, Queen, Wellington and Front Sts. into a gridlock standstill. Anyone who has tried to navigate the area can attest that much of the time, no matter how close you are you can’t get there from here.
 
‘Along with the other merchants of King St. and the Toronto Entertainment District we ask that Mayor Tory and Toronto council to consider a simple, reasonable and cost-effective alternative. Put lights on King St. that restrict vehicle traffic during rush hours, but return King St. to its former vibrant self after 7 p.m., on weekends and statutory holidays. It’s smart, fair, reasonable and helps meet the goals of the King St. pilot project. 

Two things about this complaint seemed noteworthy. The first is how civil and constructive this criticism is in a process that hopes to “iterate” as real-time impacts are assessed. It’s a tribute to Toronto’s experiments that they not only invite but are actually receiving feedback like this. Alas, the second take-away from Josh’s comment is far more nettlesome. “[However many losers there may be along the way:] Google certainly a big winner here.”

The tech giant’s partnership with Canada’s governments in Toronto raises a constellation of challenging issues, but it’s useful to recall that pioneers who dare to claim new frontiers always do so with the best technology that’s available. While the settling of the American West involved significant collateral damage (to Native Americans and Chinese migrants, to the buffalo and the land itself), it would not have been possible without existing innovations and new ones that these pioneers fashioned along the way. Think of the railroads, the telegraph poles, even something as low-tech as the barbed wire that was used to contain livestock. 

The problem isn’t human and corporate greed or heartless technology—we know about them already—but failing to recognize and reduce their harmful impacts before it is too late. The objective for pioneers on new frontiers should always be maximizing the benefits while minimizing the harms that can be foreseen from the very beginning instead of looking back with anger after the damage is done.

We have that opportunity with Smart City initiatives today.

Because they concentrate many of the choices that will have to be made when we boldly dare to claim the future of America again, I’ve been looking for a roadmap through the moral thicket in the books and articles that are being written about these initiatives today. Here are some of the markers that I’ve discovered.

[Image: human priorities, realized with the help of technology]

1.         Markers on the Road to Smarter and More Vibrant Communities

The following insights come almost entirely from a short article by Robert Kitchin, a professor at Maynooth University in Ireland. In my review of the on-going conversation about Smart Cities, I found him to be one of its most helpful observers.  

In his article, Kitchin discusses the three principal ways that smart cities are understood, the key promises smart initiatives make to stakeholders, and the perils to be avoided around these promises.

Perhaps not surprisingly, people envision cities and other communities “getting smarter” in different ways. One constituency sees an opportunity to improve both “urban regulation and governance through instrumentation and data-driven systems”–essentially, a management tool. A bolder and more transformative vision sees information and communication technology “re-configur[ing] human capital, creativity, innovation, education, sustainability, and management,” thereby “produc[ing] smarter citizens, workers and public servants” who “can enact polic[ies], produce better products… foster indigenous entrepreneurship and attract inward investment.” The first makes the frontier operate more efficiently while the second improves nearly every corner of it.

The third Smart City vision is “a counter-weight or alternative” to each of them. It wants these technologies “to promote a citizen-centric model of development that fosters social innovation and social justice, civic engagement and hacktivism, and transparent and accountable governance.” In this model, technology serves social objectives like greater equality and fairness. Kitchin reminds us that these three visions are not mutually exclusive. It seems to me that the priorities embedded in a community’s vision of a “smarter” future could include elements of each of them, functioning like checks and balances, in tension with one another.

Smart City initiatives promise to solve pressing urban problems, including poor economic performance; government dysfunction; constrained mobility; environmental degradation; a declining quality of life, including risks to safety and security; and a disengaged, unproductive citizen base. Writes Kitchin:

the smart city promises to solve a fundamental conundrum of cities – how to reduce costs and create economic growth and resilience at the same time as producing sustainability and improving services, participation and quality of life – and to do so in commonsensical, pragmatic, neutral and apolitical ways.

Once again, it’s a delicate balancing act with a range of countervailing interests and constituencies, as you can see in the chart from a related discussion above.
 
The perils of Smart Cities should never overwhelm their promise in my view, but urban pioneers should always have them in mind (from planning through implementation) because some perils only manifest themselves over time. According to Kitchin, the seven dangers in pursuing these initiatives include:
 
–taking “a ‘one size fits all’ approach, treating cities as generic markets and solutions [that are] straightforwardly scalable and movable”;
 
–assuming that initiatives are “objective and non-ideological, grounded in either science or commonsense.” You can aim for these ideals, but human and organizational preferences and biases will always be embedded within them.
 
–believing that the complex social problems in communities can be reduced to “neatly defined technical problems” that smart technology can also solve. The ways that citizens have always framed and resolved their community problems cannot be automated so easily. (This is also the thrust of Ben Green’s Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, which will be published by MIT Press in April. In it he argues for “smart enough alternatives” that are attainable with the help of technology but never reducible to technology solutions alone.)
 
–engaging with corporations that are using smart city technologies “to capture government functions as new market opportunities.” One risk of a company like Google to communities like Toronto’s is that Google might lock Toronto in to its proprietary technologies and vendors over a long period of time or use Toronto’s citizen data to gain business opportunities in other cities.
 
–becoming saddled with “buggy, brittle and hackable” systems that are ever more “complicated, interconnected and dependent on software” while becoming more resistant to manual fixes.
 
–becoming victimized by “pervasive dataveillance that erodes privacy” through practices like “algorithmic social sorting (whether people get a loan, a tenancy, a job, etc), dynamic pricing (whereby different people pay varying prices depending on their perceived customer value) and anticipatory governance using predictive profiling (wherein data precedes how a person is policed and governed).” Earlier this month, my post on popular on-line games like Fortnite highlighted the additional risk that invasive technologies can use the data they are gathering to change peoples’ behavior.
 
-and lastly, reinforcing existing power structures and inequalities instead of eroding or reconfiguring them.
 
While acknowledging the promise of Smart Cities at their best, Kitchin closes his article with this cautionary note:

the realities of implementation are messier and more complex than the marketing hype of corporations or city managers portray and there are a number of social, political, ethical and legal concerns with respect to the kind of society smart city initiatives seek to create.  As such, whilst networked urbanism has benefits, it also poses challenges and risks that are often little explored or legislated for ahead of implementation. Indeed, the pace of development and rollout of smart city technologies is proceeding well ahead of wider reflection, critique and regulation.

Putting the cart before a suitably-designed horse is a problem with all new and seductive technologies that get embraced before their harms are identified or can be addressed—a quandary that was also considered here in a post called “Looking Out for the Human Side of Technology.”

2.         The Value of Our Data

A few additional considerations about the Smart City are also worth bearing in mind as debate about these initiatives intensifies.

In a March 8, 2019 post, Kurtis McBride wrote about two different ways “to value” the data that these initiatives will produce, and his distinction is an important one. It’s a discussion that citizens, government officials and tech companies should be having, but unfortunately are not having as much as they need to.

When Smart City data is free to everyone, there is the risk that the multinationals generating it will merely use it to increase their power and profits in the growing market for Smart City technologies and services. From the residents’ perspective, McBride argues that it’s “reasonable for citizens to expect to see benefit” from their data, while noting that these same citizens will also be paying dearly for smart upgrades to their communities. His proposal on valuing citizen data depends on how it will be used by tech companies like Google or local service providers. For example, if citizen data is used:

to map the safest and fastest routes for cyclists across the city and offers that information free to all citizens, [the tech company] is providing citizen benefit and should be able to access the needed smart city data free of charge. 
 
But, if a courier company uses real-time traffic data to optimize their routes, improving their productivity and profit margins – there is no broad citizen benefit. In those cases, I think it’s fair to ask those organizations to pay to access the needed city data, providing a revenue stream cities can then use to improve city services for all. 

Applying McBride’s reasoning, an impartial body in a city like Toronto would need to decide whether Google has to pay for data generated in its Quayside community by consulting a benefit-to-citizens standard. Clearly, if Google wanted to use Quayside data in a Smart City initiative in, say, Colorado or California, it would need to pay Toronto for the use of its citizens’ information.
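
To make the shape of that distinction concrete, here is a minimal, purely illustrative sketch (in Python) of how a city data office might encode a benefit-to-citizens standard when pricing access to Smart City data. The categories, fee amount and names below are my own assumptions for illustration, not anything that McBride, Toronto or Sidewalk Labs has actually proposed.

    from dataclasses import dataclass

    @dataclass
    class DataRequest:
        requester: str               # who wants access to the smart-city data
        broad_citizen_benefit: bool  # does the use serve residents generally?
        primarily_commercial: bool   # does it mainly improve a private bottom line?

    def access_fee(request: DataRequest, standard_fee: float = 10_000.0) -> float:
        """Hypothetical benefit-to-citizens test: uses that serve residents broadly
        get the data free of charge; primarily commercial uses pay a fee that becomes
        a revenue stream for improving city services."""
        if request.broad_citizen_benefit and not request.primarily_commercial:
            return 0.0
        return standard_fee

    # The two cases from McBride's argument:
    cycling_map = DataRequest("cycling-safety map, offered free to all citizens", True, False)
    courier = DataRequest("courier route optimizer", False, True)
    print(access_fee(cycling_map))  # 0.0      -> free access to the data
    print(access_fee(courier))      # 10000.0  -> pays to access the data

The hard part sits outside a rule like this: deciding who the impartial body is, and how “broad citizen benefit” gets defined and audited in the first place.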
 
Of course, addressing the imbalance between those (like us) who provide the data and the tech companies that use it to increase their profits and influence is not just a problem for Smart City initiatives, and changing the “value proposition” around our data is surely part of the solution. In her new book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Harvard Business School’s Shoshana Zuboff says that the familiar warning that “you’re the product if these companies aren’t paying you for your data” does not state the case powerfully enough. She argues that the big tech platforms are like elephant poachers and our personal data like those elephants’ ivory tusks. “You are not the product,” she writes. “You are the abandoned carcass.”
 
Smart City initiatives also provide a way to think about “the value of our data” in the context of our living and working and not merely as the gateway to more convenient shopping, more addictive gaming experiences or “free” search engines like Google’s.

This post is adapted from my March 24, 2019 newsletter. Subscribe today and receive an email copy of future posts in your inbox each week.


Whose Values Will Drive Our Future?

March 17, 2019 By David Griesing

When people decide what is most important to them—and bother to champion it in conversation, in voting, in how they act every day—they are helping to build the future.

It’s not just making noise, but that’s part of it. For years, when Emily was in grade school, the argument for an all-girl education was that the boys dominated the classroom with their antics and opinions while the girls were ignored or drowned out. Those making the most noise hog the attention, at least at first.

Later on, it’s about the quality of your opinions and the actions that back them up. Power and money in the commons of public life are not synonymous with good commitments or actions, but they do purchase a position, backed by facts and experts and a platform to share it, that can hold its own (if not prevail) in the wider debate over what the future will be like and what its trade-offs will cost. And much as it was for those drowned-out girls in grade school, it takes courage to stand up against what the best-organized, best-financed and most dominant corporate players want.

Part of the problem with these companies today is that many of them are nurturing, and then catering to, the lower-level priorities that we all have. We all want a certain amount of convenience and distraction in our daily lives, and the future that some companies want to deliver to us aims at serving these (as opposed to other) priorities in the most efficient and profitable manner. For example:

-companies like Amazon profit by providing all the convenience you could ever want as a shopper, or

-when your aim is relief from boredom or stress, social media, on-line games and search engines like Google provide wonderlands of distraction to lose yourself in.

Moreover, with the behavioral data these companies are harvesting from you whenever you’re on their platforms, they’ll hook you with even greater conveniences, forms of escapism and more stuff to buy in the future. Their priorities of efficiency and profit almost perfectly dovetail with ours for convenience and distraction.

This convenient and distracted future—along with a human yearning for something more—is captured with dazzling visuals and melancholy humor in Wall-E, a 10-year-old movie from Pixar. (It’s worth every minute for a first or second viewing of a little trash robot named Wall-E and his bid to save the human race from itself.) In its futuristic world, the round-as-donuts humans who have fled the planet they’ve soiled spend their days on a We’ve-Thought-of-Everything cruise ship that’s floating through space. Except, as it turns out, the ship’s operators aren’t providing everything their passengers want and need, including what they need most of all: a thriving planet to call home.

Wall-E’s brilliance doesn’t come from an either/or future, but from a place where more important priorities are gradually acknowledged and acted upon too. It’s deciding to have more of some things and somewhat less of others. Back in the real world, that change in priorities might involve diverting some of our national resources away from economic efficiency and profit to support thriving families and communities (January 27, 2019 newsletter). Or, as in Wall-E’s case, using fewer of our shared resources for convenience and distraction and more for restoring an environment that can sustain our humanity in deeper ways.

On the other hand, as anyone who has tried it knows: it can be hard to find enough courage to stand up to those who are dominating (while they’re also subverting) the entire conversation about what we should want most. It’s our admiration for Wall-E’s kind of courage that makes Toronto’s citizens so inspiring today. Why these northern neighbors?  Because they are trying through their actions to meet a primary shared objective—which is to build a sustainable urban environment that protects its natural resources—without losing sight of other priorities like efficiency, convenience and strengthening the bonds of family and community in their city.

And as if that weren’t enough, there is another wrinkle to the boldness that Torontonians are currently demonstrating. The City is partnering with tech giant Google on a key piece of data-driven redevelopment. As we admire them from afar, maybe we can also learn some lessons about how to test-drive a carbon-free future while helping that future to evolve with data we provide as we live and work. This fascinating and hopeful city is raising the kinds of questions that can only be asked when a place has the courage to stop talking about its convictions and start acting on them.

I was walking in lower Manhattan this week when I caught the sign above, encouraging me to bring my hand in for a palm reading.

I knew the fortuneteller wouldn’t find my future there, but she was probably right about one thing. Your prior experience is etched in the lines on your hands and your face. But as to where these lines will take you next, the story that Toronto is writing today is likely to provide better guidance than she will—and more information about the priorities to be weighed and measured along the way. 

1.         A Carbon-Free Future

Toronto has initiated two experiments: one gradually reduces its carbon footprint to nothing, and the other builds a community from the ground up with the help of data from its new residents. Both experiments are in the early stages, but they provide tantalizing glimpses into the places where we all might be living and working if we commit to the same priorities as Toronto.
 
Whenever I’ve visited this City, it has seemed futuristic to me, but not because of its built environment. What struck me instead was its remarkably diverse population, drawn in large numbers from every corner of the globe. Only later did I learn that over half of Toronto’s population is foreign-born, giving the place a remarkable sense of optimism and new beginnings.
 
Declaring its intention to radically reduce its use of fossil fuels, Toronto has taken a long stretch of King Street, one of the City’s busiest commercial and recreational boulevards, and implemented a multi-faceted plan that bans most private traffic, upgrades the existing streetcar system, concentrates new residential and commercial space along its corridor, and utilizes these densities and proximities to encourage both walking and public transportation for work, school, shopping and play.
 
In contrast to a suburban sprawl of large homes and distant amenities that require driving, Toronto’s urbanized alternative offers smaller living spaces, more contact with other members of the community, far less fuel consumption, and reclaimed spaces for public use that were once devoted to parking or driving. One hope is that people will feel less isolated and lonely as proximity has them bumping into one another more regularly. Another is that residents and workers visiting daily will become more engaged in public life because they’ll need to cooperate in order to share its more concentrated spaces.
 
Toronto’s King Street experiment envisions a time when all of its streets will be “pedestrianized.” There will still be cars, but fewer will be in private hands and those that remain will be rented as needed—anticipating the rise of on-call autonomous vehicles. Streets and roads will also remain, but they will increasingly be paid for by those who use them most, further reducing the need for underutilized roadways and freeing up space for other uses like parks and recreational corridors.
 
Toronto’s experiment in urban living also promotes a “sharing economy,” with prices for nearly everything reduced when the cost is shared with others. Academics like Daniel Hoornweg at the University of Ontario Institute of Technology have been particularly interested in using reduced prices to drive the necessary changes. It’s “sharing rides, sharing tools, sharing somebody to look after your dog when you’re not there,” says Hoornweg. Eventually, the sharing economy that started with Uber and Airbnb will become almost second nature as it becomes more affordable and residents exchange their need to own big homes and cars for other priorities like a sustainable environment, greater access to nature within an urban area, and more engagement over shared pursuits with their neighbors.
 
For a spirited discussion about Toronto’s King Street experiment that includes some of its strongest boosters, you can listen here to an NPR-Morning Edition segment that was broadcast earlier this week.

2.         Toronto’s Quayside Re-Development

Much like in Philadelphia where I’m writing this post, some of Toronto’s most desirable waterfront areas have been isolated from the rest of its urban center by a multi-lane highway. In response, Toronto has set aside a particularly lifeless area “of rock-strewn parking lots and heaps of construction materials” that’s spread over a dozen acres for the development of another urban experiment, this time in partnership with a “smart-cities” Google affiliate called Sidewalk Labs. In October, a coalition of the City, Ontario and Canadian governments contracted with Sidewalk to produce a $50 million design for a part of town that’s been renamed Quayside, or what Sidewalk calls “the world’s first neighborhood built from the internet up”—a sensor-enabled, highly wired environment that promises to run itself.

According to a recent article in Politico (that you can also listen to), Quayside will be “a feedback-rich” smart city “whose constant data flow [will] let it optimize services constantly” because it is “not only woven through with sensors and Wi-Fi, but [also] shaped around waves of innovation still to come, like self-driving cars.” For example, in keeping with Toronto’s other pay-as-you-go priorities, one of Quayside’s features will be “pay-as-you-throw” garbage chutes that automatically separate out recyclables and charge households for their waste output.

Here are a couple of views of the future development, including tags on some of the promised innovations.

[Image: views of the new Quayside neighborhood in Toronto]

A truly smart city runs on data that is generated from its inhabitants and behaviorally informed algorithms instead of on decisions that are made by Sidewalk’s managers or public officials. Not surprisingly, this raises a series of legal and quality of life questions. 

On the legal side, those questions include: who owns the data produced by Sidewalk’s sensors and WiFi monitors; who controls the use of that data after it’s been generated; and whose laws apply when conflicts arise?  On the issue of data privacy (and other potential legal differences), the Politico article notes that there are:

few better places to have this conversation than Canada, a Western democracy that takes seriously debates over informational privacy and data ownership—and is known for managing to stay polite while discussing even hot-button civic issues.

Moreover, because Canadians view personal privacy as a fundamental human right instead of one that can be readily traded for a “free” Gmail account or access to Google’s search engine, Sidewalk has already stipulated that data collected in Quayside will never be used to sell targeted advertising. 
 
Concerns about undesirable human impacts from machine decision-making have also been raised, and Sidewalk is hoping to minimize these impacts by asking the City’s residents in advance for their own visions and concerns about Quayside. A year of consultations is already informing the initial plan.
 
Longer term, urbanists like Arielle Arieff worry about “the gap between what data can and cannot do” when running a neighborhood.  Part of the beauty of city living is the connections that develop “organically”–chance occurrences and random encounters that a database would never anticipate. Arieff says: “They really do believe in their heart and soul that it’s all algorithmically controllable, but it’s not.”  As if to confirm her suspicions, Sidewalk’s lead manager seems equally convinced that today’s technology can “optimize everyone’s needs in a more rational way.” 
 
Given the expertise and perspective Toronto will be gaining from its King Street experiment and its citizens’ sensitivity to human concerns (like privacy) over efficiency concerns (like convenience), there is room for optimism that the City will strike a livable balance with its high tech partner. Moreover, Sidewalk Labs has a significant incentive to get it right in Quayside. There is an adjacent and currently available 800-acre lot known as Port Lands, “a swath of problem space big enough to become home to a dozen new neighborhoods in a growing metropolis.”
 
To me, Toronto’s Quayside experiment seems to have little downside, with more serious issues arising in Sidewalk’s future smart city projects. Sidewalk may not be selling its Toronto data to advertisers, but it will be vastly more knowledgeable than other cities that lack either the rich pools of behavioral data it has accumulated in Toronto or the in-house expertise to interpret it. Among other things, this creates a power imbalance between a well-funded private contractor and underfunded cities that lack the knowledge to understand what they stand to gain or to forge a working partnership they can actually benefit from. Simone Brody, who runs the Bloomberg Philanthropies’ “What Works Cities” project, says: “When it comes to future negotiations, it’s frightening that Google will have the data and [other] cities won’t.”
 
But these are longer-range concerns, and there is reason today for cautious optimism that American regulators (for example) will eventually begin to treat powerful tech companies that are amassing and utilizing public data more like “utilities” that must serve the public as well as their own profit-driven interests. That kind of intervention could help to level the public-private playing field, but it’s also a discussion for another day. 
 
In the meantime, Toronto’s boldness in experimenting its way to a future that champions its priorities through the latest innovations is truly inspiring. The cities and towns where the rest of us live and work have much to learn from Toronto’s willingness to claim the future it wants by the seat of its pants.  

This post was adapted from my March 17, 2019 newsletter.


New Starting Blocks for the Future of Work

March 10, 2019 By David Griesing

[Image: picture by Edson Chagas]

As a challenging future rushes towards us, I often wonder whether our democratic values will continue to provide a sound enough foundation for our lives and work.
 
In many ways, this “white-water world” is already here. As framed by John Seely Brown in a post last summer, it confronts us with knowledge that’s simply “too big to know” and a globe-spanning web of interconnections that seems to constantly alter what’s in front of us, like the shifting views of a kaleidoscope.
 
It’s a brave new world that:

– makes a fool out of the concept of mastery in all areas except our ability–or inability–to navigate [its] turbulent waters successfully;
 
– requires that we work in more playful and less pre-determined ways in an effort to keep up with the pace of change and harness it for a good purpose;
 
– demands workplaces where the process of learning allows the tinkerer in all of us “to feel safe” from getting it wrong until we begin to get it right;
 
– calls on us to treat technology as a toolbox for serving human needs as opposed to the needs of states and corporations alone;  and finally,
 
– requires us to set aside time for reflection “outside of the flux” so that we can consider the right and wrong of where we’re headed, commit to what we value, and return to declare those values in the rough and tumble of our work tomorrow.

In the face of these demands, the most straightforward question is whether we will be able to safeguard our personal wellbeing and continue to enjoy a prosperous way of life. Unfortunately, neither of these objectives seems as readily attainable as they once did.
 
When our democratic values (such as freedom and championing individual rights) no longer ensure our wellbeing and prosperity, those values get questioned and eventually challenged in our politics.
 
Last week, I wrote here about the dangerous risks—like addiction and behavioral modification—that our kids and others confront by spending too much screen time playing on-line games like Fortnite. Despite a crescendo of anecdotal evidence about the harms to boys in particular, the freedom-loving (and endlessly distracted) West seems stymied when it comes to deciding what to do about it. On the other hand, China easily moved from identifying the harm to its collective wellbeing to implementing time restrictions on the amount of on-line play. It was the Great Firewall’s ability to intervene quickly that prompted one observer to wonder how those of us in the so-called “first world” will respond to  “the spectacle of a civilisation founded [like China’s] on a very different package of values — but one that can legitimately claim to promote human flourishing more vigorously than their own”?
 
Meanwhile, in a Wall Street Journal essay last weekend, its authors documented the ability of authoritarian countries with capitalist economies to raise the level of prosperity enjoyed by their citizens in recent years. Not so long ago, the allure of the West to the “second” and “third worlds” was that prosperity seemed to go hand-in-hand with democratic values and institutions. That conclusion is far less clear today. With rising prosperity in authoritarian nations like China and Vietnam—and the likelihood that there will soon be far more prosperous citizens in these countries than outside of them—the authors fretted that:

It isn’t clear how well democracy, without every material advantage on its side, will fare in the competition [between our very different value systems.]

With growing uncertainty about whether Western values and institutions can produce sufficient benefits for its citizens, and with “the white-water world” where we live and work challenging our navigational skills, it seems a good time to return to some questions that we’ve chewed on here before about “how we can best get ready for the challenges ahead of us.” 
 
Can the ways that we educate our kids (and retrain ourselves) enable us to proclaim our humanity, secure our self-worth, and continue to find a valued place for ourselves in the increasingly complex world of work? 
 
Can championing new teaching methods strengthen democratic values and deliver more of their promise to us in terms of wellbeing and prosperity than it seems we can count on today?
 
Are new and different classrooms the keys to our futures?

1.         You Treasure What You Measure

Until this week, I never considered that widely administered education tests would provide any of these answers—but I probably should have—because in a very real way, we treasure the aptitudes and skills, indeed everything that we take the time to measure. Gross national product, budget and trade deficits, unemployment rates, the 1% versus everyone else: what is most important to us is endlessly calculated, publicized and analyzed. We also value these measures because they help us decide what to do next, like stimulating the economy, cutting government programs, or implementing trade restrictions. Measures influence actions.
 
It’s much the same with the measures we obtain from the educational tests that we administer, and in this regard, no test today is more influential than the Programme for International Student Assessment or PISA. PISA was first given in 2000 in 32 countries, the first time that national education systems were evaluated and could be compared with one another. The test measured 15-year-olds’ scholastic performance in mathematics, science and reading. No doubt you’ve heard some of the results, including the United States’ disappointing placement in the middle of the international pack. The test is given every three years, and in 2018, 79 countries and economies participated in the testing and data collection.
 
According to an article this week in the on-line business journal Quartz, “the results…are studied by educators the way soccer fans obsess over the World Cup draw.”
 
No one thinks more about the power of the PISA test, the information that it generates, and what additional feats it might accomplish than Andreas Schleicher, a German data scientist who heads the education division of the Organisation for Economic Co-operation and Development (OECD), which administers PISA worldwide.

[Image: Andreas Schleicher]

Schleicher downplays the role that the PISA has played in shaming low-performing countries, preferring to emphasize the test’s role in mobilizing national leaders to care as much about teaching and learning as they do about economic measures like unemployment rates and workplace productivity. At the most basic level, PISA data has supported a range of conclusions, including that class size seems largely irrelevant to the learning experience and that what matters most in the classroom is “the quality of teachers, who need to be intellectually challenged, trusted, and have room for professional growth.”

Schleicher also views the PISA as a tool for liberating the world’s educational systems from their single-minded focus on subjects like science, reading and math and steering them towards the kinds of “complex, interdisciplinary skills and mindsets” that are necessary for success in the future of work. We are afraid that human jobs will be automated, but we are still teaching people to think like machines. “What we know is that the kinds of things that are easy to teach, and maybe easy to test, are precisely the kinds of things that are easy to digitize and to automate,” Schleicher says.

To help steer global education out of this rut, he has pushed for the design and administration of new, optional tests that complement the PISA. Change the parameters of the test, change the skills that are measured, and maybe the world’s education-based priorities will change too. Says Schleicher: “[t]he advent of AI [or artificial intelligence] should push us to think harder [about] what makes us human” and lead us to teach to those qualities, adding that if we are not careful, the world’s nations will continue to educate “second-class robots and not first-class humans.”

Schleicher had this future-oriented focus years before the PISA was initially administered.

In 1997, Schleicher convened a group of representatives from OECD countries, not to discuss what could be tested, but what should be tested. The idea was to move beyond thinking about education as the driver of purely economic outcomes. In addition to wanting a country’s education system to provide a ready workforce, they also wondered whether they could nurture young people to help make their societies more cohesive and democratic while reducing unfairness and inequality. According to Quartz:

The group identified three areas to explore: relational, or how we get along with others; self, how we regulate our emotions and motivate ourselves, and content, what schools need to teach.

Instead of simply enabling students to respond to the demands of a challenging world, Schleicher and others in his group wanted national testing to encourage the kinds of skill building that would enable young people to change the world they’d be entering for the better.   

Towards this end, Schleicher’s team began to develop assessments for independent thinking and the kinds of personal skills that contribute to it. The technology around test administration enabled the testers to see how students solved problems in real time, not simply whether they got them right or wrong. They gathered and shared data that enabled national education systems to “help students learn better and teachers teach better and schools to become more effective.” Assessments of the skill sets around independent thinking encouraged countries to begin to see new possibilities and want to change how students learn in their classrooms. “If you don’t have a north star [like this], perhaps you limit your vision,” he says.

For the past twenty years, Schleicher’s north stars have also included students’ quest to find meaning in what they are doing and to exercise their agency in determining what and how they learn. He is convinced that people have the “capacity to imagine and build things of intrinsic positive worth.”  We have skills that robots cannot replace, like managing between black and white, integrating knowledge, and applying knowledge in unique situations. All of those skills can be tested (and encouraged), along with the skill that is most unique about human beings, namely:

our capacity to take responsibility, to mobilize our cognitive and social and emotional resources to do something that is of benefit to society. 

The assessments that Schleicher and his fellow testing visionaries began to imagine in 1997 have been gradually introduced as optional tests that focus on problem-solving, collaborative problem-solving, and most recently, so-called “global competencies” like open-mindedness and the desire to improve the world. In 2021, another optional test will assess flexibility in thinking and habits of creativity, like being inquisitive and persistent.

One knowledgeable observer of these initiatives, Douglas Archibald, credits Schleicher with “dramatically elevating” the discussion about the future of education. “There is no one else bringing together people in charge of these educational systems to seriously think about how their systems [can be] future proofed,” says Archibald. But he and others also see a hard road ahead for Schleicher, with plenty of resistance from within the global education community.   

Some claim that he is asking more from a test than he should. Others claim that his emphasis fosters an over-reliance on testing at the expense of other priorities. Regarding the “global competencies” assessment, for example, 40 of the 79 participating countries opted not to administer it. But Schleicher, much like visionaries in other fields, remains undaunted. Nearly half of the countries are exercising their option to assess “global competencies,” and even more are administering the other optional tests that Schleicher has helped develop. Maybe educators are slowly becoming convinced that the threat to human work in a white-water world is too serious to be ignored any longer.

A view from Kenneth Robinson’s presentation: “Changing Education Paradigms”

While Schleicher and his allies are in the vanguard of those who are using a test to prompt a revolution in education, they are hardly the only ones to challenge a teaching model that, for far too long, has only sought to produce a dependable, efficient and easily replaceable workforce. The slide above is from Sir Kenneth Robinson’s much-heralded (and well worth your taking a look at) 2010 video called “Changing Education Paradigms.” In it, he also champions teaching that enables uniquely human contributions that no machine can ever replace.
 
Schleicher, Robinson and others envision education systems that prepare young people (or re-engineer older ones) for a complex and ever-shifting world where no one has to be overwhelmed by the glut of information or the dynamics of shifting networks but can learn how to navigate today’s challenges productively. They highlight and, by doing so, champion teaching methods that help to prepare all of us for jobs that provide meaning and a sense of wellbeing while amplifying and valuing our uniquely human contributions.

Schleicher is also helping to modify our behavior by championing skills, like curiosity about others and empathy, that can make us more engaged members of our communities and commit us to improving them. Assessing these skills in national education tests says loudly and clearly that they are important for human flourishing too. Indeed, this may be Schleicher’s and the OECD’s most significant contribution. Their international testing is encouraging the skills and changes in behavior that can build better societies, whether they are based on the democratic values of the West or the more collective and less individual ones of the East.

That is no small thing. No small thing at all.

This post is adapted from my March 10, 2019 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: AI, Andreas Schleicher, artificial intelligence, automation, democratic values, education, education testing, human flourishing, human work, OECD, PISA, Programme for International Student Assessment, skills assessment, values, work, workforce preparation

Whose Values Will Save Us From Our Technology?

March 3, 2019 By David Griesing Leave a Comment

When travelers visited indigenous people in remote places, they often had problems when they took out their cameras and wanted to take some pictures. The locals frequently turned away, in what was later explained as a fear of having their souls stolen by the camera’s image. 

After all, who wants to let a stranger take your soul away? 

Indigenous people have an interesting relationship between their experience and their representations of it. The pictures here are of embroideries made by the Kuna people who live on remote islands off the coast of Panama. I don’t know what these abstractions mean to the Kuna, but I thought they might stand in here for representations of something that’s essential to them, like their souls.

Technology today is probing—with the aim of influencing, for both good and bad purposes—the most human and vital parts of us. For years, the alarm bells have been going off about the impact of video games (and, more recently, their on-line multi-player successors) on our kids, because kids are really “each of us” at our most impressionable. Unlike many indigenous people, however, most kids are blissfully unaware that somebody might be stealing something from them or that their behavior is being modified while they are busy having fun. And it’s not just kids today who become engaged and vulnerable to those behind their screens whenever they’re playing, shopping or exploring on-line.

Because of their ability to engage and distract, the tightly controlled environments of on-line games give the behavioral scientists and the others who are watching us play a near-perfect Petri dish in which to study the choices we make in a game’s virtual world and how those choices can be influenced. As a result of the data being gathered, the “man behind the curtain” has become increasingly adroit at deepening our engagement, anticipating our behavior once we’re “on the hook,” manipulating what we do next, and ensuring that we stay involved in the game as long as he wants us to.

Games benefit their hosts by selling stuff to players, mostly through ads and marketing play-related gear. Game hosts are also selling the behavioral information they gather from players. Their playbook is the same as Facebook’s: monetizing behavioral data about their users by selling that data to whoever wants to influence our decision-making in all aspects of our lives and work.

When it comes to gathering particularly rich pools of behavioral data, two games are noteworthy today. Fortnite has become one of the most successful games in history in what seems like a matter of months, while Tetris, a video-game classic from the 1980s, has recently been updated for an even wider demographic. This week’s stories about them illustrate the potency of their interactions with players; the precision of the existing behavioral data that these games have utilized; the even more refined behavioral data they are currently gathering on players; and the risks of addiction and other harms to the humans who are playing them.

These stories amount to a cautionary tale, affecting not just the games we play but all of our screen-facing experiences. Are we, as citizens of the enlightened, freedom-loving West, equipped to “save our souls” from these aspiring puppet-masters or is the paternalistic, more harmony-seeking East (particularly China with its great firewalls and social monitors) finding better solutions? It’s actually a fairly simple question— 

Whose values will save us from our technology?

1. The Rise of Fortnite and Tetris

Five years or an on-line lifetime ago, I wrote about “social benefit games” like WeTopia, wondering whether they could enable us to exercise our abilities to act in “pro-social” or good ways on-line before we went out to change the world. I also worried at the time about the eavesdroppers behind our screens who were studying us while we did so.
 
I heralded their enabling quality, their potential for helping us to behave better, in a post called Game Changer:

The repetitive activities in this virtual world didn’t feel like rote learning because the over-and-over-again was embedded in the diversions of what was, at least at the front end, only a game. Playing it came with surprises  (blinking “opportunities” and “limited time offers”), cheerful reminders (to water my “giving tree” or harvest my carrots), and rewards from all of the “work” I was doing (the “energy,” “experience,” and “good will” credits I kept racking up by remembering to restock Almanzo’s store or to grow my soccer-playing community of friends).

What I’m wondering is whether this kind of immersive on-line experience can change real world behavior.
 
We assume that the proverbial rat in this maze will learn how to press the buzzer with his little paw when the pellets keep coming.
 
Will he (or she) become even more motivated if he can see that a fellow rat, outside his maze, also gets pellets every time he presses his buzzer?
 
And what happens when he leaves the maze?
 
Is this really a way to prepare the shock troops needed to change the world?

In a second post, I looked at what “the man behind the curtain” was doing while the game’s players were learning how to become better people:

He’s a social scientist who has never had more real time information about how and why people behave in the ways that they do (not ever!) than he can gather today by watching hundreds, sometimes even millions of us play these kinds of social games.
 
Why you did one thing and not another. What activities attracted you and which ones didn’t. What set of circumstances got you to use your credit card, or to ask your friends to give you a hand, or to play for 10 hours instead of just 10 minutes.
 
There’s a lot for that man to learn because, quite frankly, we never act more naturally or in more revealing ways than when we’re at play.

In my last post back then, I concluded that there were both upsides and downsides to these kinds of on-line game experiences and agreed to keep an open mind. I still see both, but I am far more pessimistic about the downsides today than I was all those years ago.
 
Fortnite may be the most widely played on-line game ever.  As a hunt/kill your enemies/celebrate your victories kind of adventure, it is similar to hundreds of other on-line offerings. Like them, it has also drawn far more boys than girls as players. What distinguishes the Fortnite experience is the behavioral data that has informed it.
 
A recent Wall Street Journal article, “How Fortnite Triggered an Unwinnable War Between Parents and Their Boys,” attributes the game’s success to the incorporation of existing behavioral data in its base programming and to the machine learning that continues while the game is afoot. Dr. Richard Freed, psychologist and author of “Wired Child: Reclaiming Childhood in a Digital Age,” says that Fortnite has combined so many “persuasive design elements” (around 200) that it is the talk among his peers. “Something is really different about it,” he said.
 
Ofir Turel, who teaches at Cal State Fullerton and researches the effects of social media and gaming, talked about one of those persuasive design elements, namely how Fortnite players experience the kind of random and unpredictable rewards that keep their casino counterparts glued to their seats for hours in front of their slot machines. (Fifty years ago, behaviorist B.F. Skinner found that variable, intermittent rewards were more effective than a predictable pattern of rewards in shaping the habit-forming behavior of pigeons.) “When you follow a reward system that’s not fixed, it messes up our brains eventually,” said Turel. With games like Fortnite, he continues: “We’re all pigeons in a big human experiment.”
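For readers who want to see the mechanic Turel is describing stripped down to its logic, here is a minimal, purely illustrative Python sketch contrasting a predictable, fixed reward schedule with a Skinner-style variable one. It is my own toy example, not anything from Fortnite’s code, and the 20-percent reward probability is an arbitrary assumption.

import random

def fixed_ratio_reward(action_count: int, ratio: int = 5) -> bool:
    """Predictable schedule: a reward arrives after every `ratio` actions."""
    return action_count % ratio == 0

def variable_ratio_reward(p: float = 0.2) -> bool:
    """Intermittent schedule: every action has the same small chance of a
    reward, so the player never knows when the next payoff is coming."""
    return random.random() < p

# Twenty actions under each schedule: the fixed column is perfectly regular,
# while the variable column shows the unpredictable pattern that Skinner
# found to be the more habit-forming of the two.
for n in range(1, 21):
    print(f"action {n:2d}  fixed: {fixed_ratio_reward(n)!s:5}  "
          f"variable: {variable_ratio_reward()}")

The behavioral claim, of course, is about what the unpredictable column does to a player’s brain over thousands of repetitions, not what it does to a print-out.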
 
The impact of these embedded and evolving forms of persuasion was certainly compelling for the boys who are profiled in the article. Toby is a representative kid who wants to play all of the time and whose behavior seems to have changed as a result:

Toby is ‘typically a nice kid,’ his mother said. He is sweet, articulate, creative, precocious and headstrong—the kind of child who can be a handful but whose passion and curiosity could well drive him to greatness. [In other words, the perfect Wall Street Journal reader’s pre-teen.]
 
Turn off Fortnite [however], and he can scream, yell and call his parents names. Toby gets so angry that his parents impose ‘cooling off’ periods of as long as two weeks. His mother said he becomes less aggressive during those times. The calming effect wears off after Fortnite returns.
 
Toby’s mother has tried to reason with him. She has also threatened boarding school. ‘We’re not emotionally equipped to live like this,’ she tells him. ‘This is too intense for the other people living here.’

Of course, it could be years before psychologists and other researchers study larger samples of boys playing Fortnite and report their findings.
 
In the meantime, there was also a story about Tetris in the newspapers this week. Some of you may remember the game from the era of Pokémon. (Essentially, you rotate and steer falling blocks so that they fit into the gaps in a container at the bottom of the screen.) How could a simple, time-consuming, 1980s-era diversion like this be harmful, you ask? This time the risk seems to be an endless appetite for distraction instead of aggression.
 
A 1994 article in Wired magazine identified something called “the Tetris Effect.” Long after they had played the game, many players reported seeing floating blocks in their minds or imagining real-life objects fitting together. At the time, Wired suggested that the game’s effect on the brain made it a kind of “electronic drug” or “pharmatronic.” The players sampled in this week’s article added further descriptors to the Tetris Effect.
 
One player named Becky Noback says:

You get the dopamine rush from stacking things up in a perfect order and having them disappear—all that classic Tetris satisfaction. It’s like checking off a to-do list. It gives you this sense of accomplishment.

Another player, Jeremy Ricci, says:

When everything’s clicking, and you’re dropping your pieces, you get into this trancelike rhythm. It’s hard to explain, but when you’re in virtual-reality mode, there’s things going beneath you, and to the side of you, and the music changes, and you’re getting all those elements at the same time, you get a surreal experience.

The article recounts that a Twitter search combining “Tetris Effect” with “cry” or “tears” will uncover tweets in which players wonder, “Why am I tearing up during a Tetris game?”, testifying to the game’s deep emotional resonance. Reviewers of the newest version of the game (out in December) have also called it “spiritual,” “transcendental,” and “an incredible cosmic journey.”
 
What is prompting these reactions? While the basic game is the same as ever, the newest version surrounds the block-dropping action with fantastical environments, ambient new-age music, and, occasionally, a soothing lyric like “what could you be afraid of?” Add virtual reality goggles, and a player can float in space surrounded by stars while luminescent jellyfish flutter by and mermaids morph into dolphins. When a player drops in his or her blocks, the audio pulses gently and the hand-held controller vibrates. While the new version of the game may seem aimed at seekers of “trippy experiences,” it is in fact being marketed as an immersive, stress-relieving diversion. This is where Tetris aims for a market beyond the usual game-playing crowd, which skews younger and more male. (Think of all those new pigeons!) Almost everyone wants a stress reliever, after all.
 
You can see a preview of the new Tetris for yourself (with or without your virtual reality headsets) via this link.
 
These early assessments of Fortnite and Tetris provide only anecdotal evidence, but we seem to be entering a strange new world in which a game’s interface, and those gathering information and manipulating our behavior behind it, are claiming more of our attention while eroding our healthy detachment and our ability to think for ourselves.

Another one from the Kuna people, San Blas Islands, Panama

2. Go East or Go West?

In “Fortnite Addiction: China Has the Answer,” David Mattin discusses China’s assessment of the problem, the solution that its social monitors are implementing today, and why their approach just might make sense for us too. Mattin is the Global Head of Trends and Insights (a great job title!) at TrendWatching and sits on the World Economic Forum’s Global Futures Council. He put up his provocative post on Medium recently.

It’s been widely reported that China is rolling out what Mattin calls “an unprecedented, tech-fueled experiment in surveillance and social control.” Under this system, citizens are assigned a “social credit rating” that scores their worth as citizens. It is a system that seems shocking to us in the West, but it follows centuries of maintaining a social order based on respect for elders (from the “celestial kingdom of rulers” on down) and a quest for harmony in the community as a whole. The Chinese government intends to have a compulsory rating system in place nationwide by 2020. Individuals with low social credit scores have already been denied commercial loans, building permits and school admissions. Mattin reports that low scorers have also been blocked from taking 11.4 million flights and 4.2 million train rides. This system is serious about limiting personal freedoms for the sake of collective well-being.

Like many of us, China’s monitors have become alarmed by reports of Fortnite addiction, and Tencent, the world’s sixth-largest internet company, recently started using camera-enabled facial recognition technology to restrict the amount of time that young people can play Honor of Kings, a multi-player video game similar to Fortnite. As in the West, the concern is about the impact of these gaming technologies on young people’s developing brains and life prospects.

Under government pressure, Tencent first trialled the new system back in October. Now they’ve announced they’ll implement facial recognition-based age restrictions on all their games in 2019. Under 12s will be locked out after one hour per day; 13–18-year olds are allowed two hours. And here’s the crucial detail: the system is fuelled by the national citizen database held by China’s Ministry of Public Security. Yes, if you’re playing Honor of Kings in China now, you’re being watched via your webcam or phone camera and checked against a vast government database of headshots.
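To see just how mechanical a rule like this is, here is a minimal, hypothetical Python sketch of an age-bracketed daily play-time check. It is my own illustration rather than Tencent’s actual system; the one-hour and two-hour brackets simply mirror the figures Mattin reports, and the handling of the edge case of a 12-year-old is my assumption.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Player:
    age: int
    minutes_played_today: int

def daily_limit_minutes(age: int) -> Optional[int]:
    """Hypothetical brackets mirroring the reported rules: younger children
    get one hour per day, 13- to 18-year-olds get two hours, and adults
    are unrestricted."""
    if age <= 12:
        return 60
    if age <= 18:
        return 120
    return None  # no limit for adults

def may_keep_playing(player: Player) -> bool:
    """True until the player's verified age bracket runs out of minutes."""
    limit = daily_limit_minutes(player.age)
    return limit is None or player.minutes_played_today < limit

print(may_keep_playing(Player(age=11, minutes_played_today=75)))   # False
print(may_keep_playing(Player(age=16, minutes_played_today=75)))   # True

The hard part, of course, is not this arithmetic but the verification step Mattin describes: matching a live webcam image against a government database to establish which bracket a player actually belongs in.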

Just because we can’t imagine it happening here doesn’t mean it’s the wrong approach, and in the most interesting part of his argument, Mattin partially explains why.
 
He begins with the observation that priorities in Chinese society and the trade-offs they’re making are different—something that’s hard (if not impossible) for most of us Westerners, overtaken by our feelings of superiority, to understand.  
 
– “Enlightenment liberal values are not  the only way to produce a successful society” and 
 
– “value judgments are trade-offs; you can have a bit more of one good if you tolerate having a bit less of another.”
 
Indeed, that is how all ethical systems work: some values always count for more than others in decision-making. Mattin drives his point home this way:

What western liberal democrats have never had to countenance seriously is the idea that theirs are not the only values mandated by reason and morality; that they’re not the universal end point of history and the destination for all successful societies. In fact, they’re just some values among many others that are equally valid. And choosing between ultimate values is always a case of trading some off against others.

Mattin rubs it in even further with an uncomfortable truth and a question for additional pondering.  
 
For us, as the supposed champions of freedom and the rights of every individual, the uncomfortable truth is that every single day “we’re actively deciding (albeit unconsciously) that we hold other values — such as convenience and distraction — above that of individual liberty.” So it is not really government interference with our freedoms that we reject; it is anyone else’s right to interfere with our convenience or our right to be distracted as much as we want, whenever we want. Of course, I’ve been making a similar argument when urging that new restraints be placed on tech giants like Facebook, Amazon and Google. However uncomfortable it is to hear, our insatiable desires for convenience and distraction are simply not more important than preventing the harms these companies are causing to our political process, privacy rights, and competitive markets. Even so, in the U.S. at least, we seem to be making very little progress in any of these areas because we supposedly stand for freedom.
 
Mattin’s what-if question to mull over is as follows. What if the evidence mounts that excessive screen time, on games and otherwise, really is damaging young people’s (or maybe everyone’s) minds, and that China’s government-imposed time restrictions really do limit the damage? How will the West respond to “the spectacle of a civilisation founded on a very different package of values — but one that can legitimately claim to promote human flourishing more vigorously than their own”?

3. Whose Values Should Help Us To Decide?

I have a few additional questions of my own.
 
If empowering a Big Brother (or Sister) like China’s in the West is unpalatable, how can a distracted public that is preoccupied by its conveniences be roused enough to counter tech-related harms with democratically determined cures?
 
Do we need to be confronted by an epidemic of anti-social gamers before we act?  Since an epidemic of opioid addiction and “deaths of despair” hasn’t roused the citizenry (or its elected representatives) to initiate a meaningful response, why would it be any different here?
 
Even if we had The Will to pursue solutions, could safety nets ever be put into place quickly enough to protect the generations that are playing Fortnite, Tetris and other games today? After all, democracy is cumbersome and time-consuming.
 
And continuing this thought, can democratic governments ever hope to “catch up to” and protect their citizens from rapidly evolving and improving technologies with troubling human impacts? Won’t the men and women behind our screens always be ahead of a non-authoritarian government’s ability to constrain them?
 
I hope you’ll let me know what you think the answers might be.

This post is adapted from my March 3, 2019 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Work & Life Rewards Tagged With: behavioral data collection, behavioral manipulation through on-line games, Chinese values, David Mattin, Enlightenment values, Fortnite, on-line games, persuasive design elements, tech, technology, Tetris, values, video games
