David Griesing | Work Life Reward Author | Philadelphia


Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the right of privacy that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything they need to know about us?

Overwhelmed by the draw of  “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.”  Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to relinquish the advantages of these platforms when they are unavailable elsewhere. 
 
So is there really “no way out”?  
 
A rising crescendo of voices is gradually finding a way, and they are coming at it from several different directions.
 
In places like Toronto, London, Helsinki, Chicago and Barcelona, policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1.         Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past six months, I’ve talked repeatedly about smart-city technologies, and one of you reached out to me this week wondering: “What (exactly) are these new ‘technologies’?” (Thanks for your question, George!)
 
As a general matter, smart-city technologies gather and analyze information about how a city functions, while improving urban decision-making around that new information. Throughout, these data-gathering,  analyzing, and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
 
Of course, the potential benefits of greater or more equitable access to city services as well as their optimized delivery are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that’s currently being wasted, permitting their reinvestment into areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve written about the optimism that drove Toronto to launch its smart-cities initiative, Quayside, and how the debate around it has entered a stormy patch more recently. Amidst the finger-pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-to-be-disclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a city-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart-city technology vendors to announce its intentions up front about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective– why is the AI needed and what outcomes is it intended to enable?
  • Use– in what processes and circumstances is the AI appropriate to be used?
  • Impacts– what impacts, good and bad, could the use of AI have on people?
  • Assumptions– what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data– what data is/was the AI trained on, and what are its limitations and potential biases?
  • Inputs– what new data does the AI use when making decisions?
  • Mitigation– what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics– what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight– what human judgment is needed before acting on the AI’s output and who is responsible for ensuring its proper use?
  • Evaluation– how and by what criteria will the effectiveness of the AI in this smart-city system be assessed and by whom?

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, the initial ventilation process takes a long time. Moreover, it is difficult (and maybe impossible) to conduct while negotiations with the technology vendor are on-going or while that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational, tech-driven opportunities are presented to the public.


2.         A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process, where people who never formed a full-blown preference on an issue like personal data privacy (or were simply reluctant to express it because the trade-offs for “free” platforms seemed acceptable to everybody else) now become more comfortable giving voice to their original qualms. In support, he cites a remarkable study about how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work-lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about the issues and to champion what is important to us in dialogue with the people who live and work alongside us. More grounds for protecting our personal information are coming out of the smart-cities debate, and we are already deciding where new privacy lines should be drawn around us.

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.


The Social Contract Around Our Work Is Broken

April 23, 2019 By David Griesing

A growing part of the American economy—the part that’s harvesting and utilizing our personal data to drive what we consume—no longer depends on “the basic reciprocities” that once supported our social contract. In other segments of our economy, business is also profiting at workers’ expense, and democratic capitalism’s promises to us about shared prosperity are regularly broken.
 
The mutual benefits of a capitalist economy were supposed to include our thriving as workers, being fairly compensated for our work and able to support our families and communities, while our employers also thrived when we used our paychecks to buy their goods and services. That virtuous circle has been the bedrock of capitalism’s social contract since Adam Smith first described it nearly 250 years ago.
 
Today, its bonds are weakened, if not altogether broken.
 
A leading edge of the breakdown is tech platforms harvesting our personal data “for free” while selling it to others who use it to drive our decisions about what we consume.  In what’s been called “surveillance capitalism,” we’re no longer valued at the front end of the exchange for what we provide (in this instance, our information). Instead our only value is at the back-end, determined by how much the companies that utilize our data can manipulate us into buying whatever they’re selling.  
 
In this growing segment of our economy, largely exploitative exchanges have already replaced mutually beneficial ones. In addition to not paying us for our information, this economic model creates very few jobs in a departure from the consumer-oriented companies of the past. Its failure to value what we’re providing as workers and consumers relative to the enormous profits its trophy companies are reaping undermines both the health of the economy and the democratic institutions that depend on it.  
 
In our economy’s more traditional jobs, we are also losing out today when it comes to the fair exchange of our work for its supposed benefits. A broader stagnation in the American economy results when the benefits that companies gain from pro-business policies fail to “trickle down” and benefit the vast majority of workers who lack the financial security to also be shareholders in these same companies. The result is a yawning wealth gap between the 1% (or, perhaps more accurately, the top 10%) and every other American.
 
Communities break down both economically and politically when we’re not compensated adequately for the work and information that we provide. What were supposed to be “a series of mutual benefit equations” between workers and employers, consumers and companies that sell us things, have become increasingly unbalanced.

The first discussion today looks at this breakdown in the social contract. The second part argues for a shift in priorities that can confront the perils of surveillance capitalism along with other distortions—like income inequality and stagnant growth—that harm all but a small percentage of those who participate in America’s economy today.
 
Instead of more failed attempts to increase economic opportunity through pro-business policies or to limit the harms of this approach with band-aids for those it leaves behind, a far better alternative is promoting work for all who are willing to do it, while making the dignity of work (and the thriving families and communities that good work produces) our priorities. Rebalancing the economic equation for workers and consumers will enable the economy to benefit nearly everyone again while mending vital parts of America’s promise.
 
I took the pictures here in Germantown, a nearby “town” in Philadelphia where the Revolutionary War battle took place. Three centuries ago, America’s democratic capitalism began in places like Germantown. In the fabric of its old and repurposed buildings, it’s not difficult to find a metaphor when you’re looking for one.
 
In the side of one old factory, there is a bricked-in wall where there used to be a workroom. In the future of our work, I’d argue that bricked-over workrooms like this, where we used to benefit from our contributions as workers and consumers, need to be opened up and revitalized. We need to call out our increasingly feudal system for what it is, and reorient our priorities to restore basic economic relationships that are foundation stones for our way of life.

The Fundamental Breakdown

In a post from January, I discussed the arguments that Oren Cass makes in his new book The Once and Future Worker about how the mutually beneficial relationships between workers, consumers and businesses have broken down since the 1970s and our repeated failures to address the imbalance.  As I said at the time:

[Cass] is concerned about the vast majority of urban, suburban and rural workers who are not sharing in America’s prosperity because of policy choices that have been made over the past 50 years by “the Left” (for more government spending on safety nets) and “the Right” (for its insistence on driving [business profits] over every other priority). Putting expensive band-aids on the victims of pro-growth government policies—when we could simply be making better choices—is hardly a sustainable way forward in Cass’s view.

Cass argues that propping up business to create a bigger pie for all has been a failure because those bigger slices are being eaten almost exclusively by business owners and their investors as opposed to their workers, their communities, or the economy at large. To counter this result, Cass wants policy makers to adopt entirely different priorities than the Right and Left have been embracing, namely, active, sustained promotion of “a labor market in which workers can support strong families and communities [as] the central determinant of long term prosperity.” Several of his proposals about how to do so, along with his views about the dignity of work and its importance to democracy, are set out in that earlier post.

Cass’s conclusion (and mine) is that America needs to change its economic priorities before the costs of failure get any worse.

In another new book, The Age of Surveillance Capitalism, Shoshana Zuboff focuses on a leading edge of the current problem: the stark imbalance in “behavioral futures markets” where data about what we “are likely to want next” has tremendous value to companies selling us products and services but which no one has been paying us to provide. For Zuboff, these tech platforms, along with the marketers and sellers who buy our behavioral information, have created “a new economic order that claims human experience as free raw material” while implementing “a parasitic economic logic in which the production of goods and services is subordinated to a new global architecture of behavioral modification.” If the industry players can seduce you into giving enough information about your motivations and desires to your smart phones, smart speakers, social networks and search engines, they can persuade you to buy (or do) almost anything. 

Zuboff discusses how economic theorists from Adam Smith to Friedrich Hayek legitimized capitalism as a system where workers needed to be paid well enough to provide for their families, be productive members of their communities, and have enough spending money left over to buy the products and services that companies like their employers were providing. In an essay that laid out her argument before Surveillance Capitalism was published, Zuboff cites economic historian Karl Polanyi for his views about how American companies after World War II were expected to offer a kind of communal reciprocity that involved hiring the available workers, hiking wages when possible, and sharing their prosperity rather than hoarding it. 

Polanyi knew that capitalism was never self-regulating, could be profoundly destructive, and that its foreseeable human tolls needed to be minimized. To do so, “measures and policies” also had to be integrated “into powerful institutions [that were] designed to check the action of the market relative to labor, land and money.” Zuboff cites Polanyi’s post-War study of General Motors not only for the ways that fair labor practices, unionization and collective bargaining preserved “the organic reciprocities” between its workers and owners but also for how much the public appreciated these shared benefits at the time.

In the 1950s, for example, 80 percent of [American] adults said that ‘big business’ was a good thing for the country, 66 percent believed that business required little or no change, and 60 percent agreed, ‘the profits of large companies help make things better for everyone who buys their products or services.’

It was a balance that persisted for almost 40 years until what Zuboff calls “the ascendancy of neoliberalism” promoted an extreme form of capitalism where owner profits and share price were paramount and a responsible commitment to workers and communities no longer held capitalism’s worst tendencies in check. Oren Cass notes a related shift around 1980: economic policy stopped promoting worker satisfaction through the quality and “dignity” of their jobs and aimed instead at keeping workers happy as consumers, by giving them more stuff to buy with their paychecks.
 
Zuboff argues that the surveillance capitalists stepped in once these established reciprocities were breached, with profound effects for individual Americans as workers and consumers, for communities whose vitality depends on them, and for our democratic way of life itself. 
 
Instead of paying for the parts of us that they’re profiting from, the surveillance capitalists pay us nothing for our behavioral data. Given the enormous size and profitability of companies like Facebook, Google and Amazon, they also “give back” far fewer jobs to the employment market than a GM once did. Moreover, these companies feel that they owe us nothing in exchange for manipulating us into buying whatever they’re selling—what Zuboff calls a kind of  “radical indifference.” Without so much as an afterthought, they take without giving much back to us individually, to the job market, or to the community at large. Capitalism’s ability to lift all boats was supposed to be a driving force for democracy and the genius of the American Dream.

The absence of organic reciprocities with people as sources of either consumers or employees is a matter of exceptional importance in light of the historical relationship between market capitalism and democracy. In fact, the origins of democracy in both Britain and America have been traced to these very reciprocities. [the citations I’ve omitted here are provided in her essay]

In The Age of Surveillance Capitalism, Zuboff describes the problem but doesn’t propose solutions. Cass, on the other hand, argues that capitalism remains the best hope for workers to reclaim their share of economic prosperity, but that we’ll have to change our public policies in order to restore the necessary reciprocities.  As for surveillance capitalism, tech futurist Jaron Lanier made an early argument for countering tech company indifference and reclaiming the benefit of our personal data in his 2013 book Who Owns the Future?  His proposals are even more feasible today.

The bricked-off memory of this old workroom seems more hopeful in the springtime.

Restoring the Balance

Cass’s Once and Future Worker is an important book because he backs up his ideological preferences with hard data. His solutions begin with the need for new government policies that aim to support thriving workers, families and communities by reinforcing the democratic give-and-take that is barely holding America together today. Along the way, Cass never loses sight of the real human impacts—for better and for worse—of economic forces and the policies that attempt to manage them.
 
For example, in his chapter “A Future for Work,” Cass argues that the workforce disruptions that will result from automation are a natural and positive effect of every innovation from the Industrial Revolution to the present. Learning how to do more with less is essential for economic growth. At the same time, however, he argues strenuously that gains in economic productivity from new inventions and technologies (fewer workers producing the same amount) need to be matched by policy-driven gains in overall economic output (which will give displaced workers the ability to find new jobs as more wealth is created, living standards improve and consumer demand grows).

This is precisely what happened from 1947 to 1972, widely seen as the golden age of American manufacturing and the nation’s middle class. Economy-wide productivity increased by 99 percent; only fifty workers were needed by the end of the Vietnam War to do the work that one hundred could complete at the end of World War II. The result was not mass unemployment. Instead, America produced more stuff. The same share of the population was working in 1972 as in 1947, and men’s median income was 86 percent higher…[W]ith fewer workers required to produce the output of 1947, many could serve markets in 1972 that hadn’t existed a generation earlier or that had been much smaller.

Cass admits that these disruptions are hard for individual workers to weather, but that expanding economic output always provides new jobs for displaced workers eventually. I’ve discussed the theory that at least some workers can prepare for disruptions like automation by developing skills “at the scalable edges” of their industries before their jobs disappear. Cass also cites the introduction of ATMs, and the fears of bank-branch closures that accompanied them, as an example of an easier transition given the health of the economy at the time. In the years when ATMs debuted, economic output (an expanding economy) was matching productivity gains (and business profits). Because ATMs lowered the banks’ cost of doing business, the banks repeatedly responded by opening more branches and creating new jobs.
 
Unfortunately, government statistics indicate that current productivity gains are not being matched by gains in overall economic output. It is a time when companies like Google, Facebook and Amazon are using their innovations to maximize corporate profits but provide relatively few jobs while exploiting free user data–giving back little (beyond convenience) that can enable workers, families and communities to thrive as well. So if you don’t feel like you’re “getting ahead” today, it’s not your imagination; the output economy that creates new economic opportunities and new jobs isn’t keeping up, and it hasn’t been doing so for years. Writes Cass:

From 1950 to 2000, while productivity in the manufacturing sector rose by 3.1 percent annually, value-added output grew by 3.6 percent—and employment increased, from 14 million to 17 million. During 2000-2016, productivity rose by a similar 3.3 percent annually. But output growth was only 1.1 percent—and employment fell, from 17 million to 12 million. Even with all of the technological advancement of the twenty-first century, had manufacturers continued to grow their businesses at the same rate as in the prior century, they would have needed more workers—a total of 18 million, by 2016 [if output had also been growing].
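Cass’s figures can be checked with a back-of-the-envelope model. If productivity is output per worker, then employment scales with output divided by productivity, so each year’s change multiplies employment by (1 + output growth) / (1 + productivity growth). A short sketch (the growth rates are Cass’s; the function name is mine):

```python
def implied_employment(start_jobs_m, output_growth, productivity_growth, years):
    """Jobs scale with output / productivity: each year, output growth
    adds workers while productivity growth removes them."""
    return start_jobs_m * ((1 + output_growth) / (1 + productivity_growth)) ** years

# 2000-2016: output grew ~1.1%/yr while productivity grew ~3.3%/yr
actual = implied_employment(17, 0.011, 0.033, 16)          # ~12 million
# Counterfactual: output keeping the prior half-century's ~3.6%/yr pace
counterfactual = implied_employment(17, 0.036, 0.033, 16)  # ~18 million
print(round(actual), round(counterfactual))
```

Compounding the same 17 million manufacturing jobs over sixteen years reproduces both of Cass’s endpoints: roughly 12 million jobs with output growth lagging productivity, versus roughly 18 million had output kept pace.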

While he does not describe the problem in terms of “reciprocities” between workers, businesses and consumers like Zuboff, Cass would agree that the imbalances between them are at the heart of the problem and need to be corrected. Once again, several of the policy solutions he proposes are reviewed in my January post. All reject the failed economic policies of the Left and the Right in favor of new approaches that will help workers, families and communities to thrive even if we have to settle for making somewhat less money as an economy overall.
 
Long before Shoshana Zuboff was railing about “surveillance capitalism,” Jaron Lanier was arguing that our behavioral information has tremendous value to the tech platforms, marketers and sellers (what he calls the “Siren Servers”) that are harvesting it, and that we should put a price tag on our personal data before they take any more of it for free. 
 
Like both Zuboff and Cass, Lanier believes in an economy that is sustained by a thriving middle class with plenty of hard, fulfilling work. His quandary is finding a way that more livelihoods can be sustained “in a world in which information is king,” as his Guardian book reviewer put it.

To that end, Lanier fears that in the early days of the internet we spent too much time worrying about open access and too little time, if any, worrying about the digital economy’s likely impacts on job security and the monetizing of user information. Lanier emphasizes the highly personal nature of this exploitation by arguing that our behavioral data “is people in disguise” and morally intertwined with the humans who supplied it.
 
Lanier’s corrective is a system in which we would each receive “nanopayments” for the use of our biometric property. In 2013, he envisioned more sophisticated archives to record where our data originates as well as what it should be worth, and he devotes over half of his book to describing this mechanism. For our purposes, what he envisioned five years ago can be reduced (although far too easily) to a series of blockchain-based payments for our provision of useful personal data, similar to the system discussed here in a post from last August. Lanier’s nanopayments to individuals whenever a company profits from their personal information would be daunting to implement, but they would also go a long way towards restoring Zuboff’s “organic reciprocities” and bringing Cass’s broader economic growth into the business of surveillance capitalism.
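To make the mechanics concrete, here is a minimal sketch of the idea: a ledger that records the provenance of each piece of behavioral data and credits a tiny royalty to its contributor whenever a company derives revenue from it. This is my own toy illustration, not Lanier’s design; the class, method names and royalty rate are invented, and a real system would need the kind of provenance archive (blockchain-based or otherwise) he describes:

```python
from collections import defaultdict

class NanopaymentLedger:
    """Toy ledger: tracks who contributed which data and credits each
    contributor a small payment when that data is used for profit.
    An illustration of Lanier's nanopayment idea, not a real system."""

    def __init__(self, royalty_rate=0.0001):
        self.royalty_rate = royalty_rate      # fraction of revenue paid back
        self.provenance = {}                  # data_id -> contributor
        self.balances = defaultdict(float)    # contributor -> accrued payments

    def record_contribution(self, data_id, contributor):
        """Remember where a piece of behavioral data came from."""
        self.provenance[data_id] = contributor

    def record_use(self, data_ids, revenue):
        """A company earned `revenue` using these data; split a royalty
        equally among the contributors of record."""
        contributors = [self.provenance[d] for d in data_ids if d in self.provenance]
        if not contributors:
            return
        share = revenue * self.royalty_rate / len(contributors)
        for c in contributors:
            self.balances[c] += share

ledger = NanopaymentLedger()
ledger.record_contribution("click-123", "alice")
ledger.record_contribution("search-456", "bob")
ledger.record_use(["click-123", "search-456"], revenue=1_000_000)
# each contributor accrues about 1,000,000 * 0.0001 / 2 = 50.0
```

The hard part, as Lanier concedes, is not the bookkeeping above but establishing provenance and pricing at internet scale.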

+ + +

The mutual benefits that we once enjoyed as workers, consumers and business owners in exchange for what we were providing are no longer a reality. The reasons for that loss, and the blame for those responsible, are just the front end of our thinking about what we’re prepared to do about it.
 
In the election cycles ahead of us, it is hard to believe that our nation will have the kind of reasoned debate we need about the future of our work and its impact on our families, our local communities and our way of life itself. But perhaps a conversation along the lines I am arguing for above will begin alongside the shouting matches we are already having about whether to abandon democratic capitalism altogether.
 
Cass, Zuboff and Lanier all begin with the proposition—and it’s where I start too—that our future needs to be built by human workers and that the work we’ll be doing needs to enable us, our loved ones, our neighbors, our shared economy, and not merely a protected few, to flourish.  
 
We have managed to do this before.

Many of us have experienced its mutual benefit in our lifetimes, and we can experience it again.
 
But first, we’ll need to restore the social contract around our work.

This post was adapted from my April 21, 2019 newsletter. When you subscribe on this page, a new newsletter/post will be delivered to your inbox every Sunday morning. 

Filed Under: *All Posts, Building Your Values into Your Work, Work & Life Rewards Tagged With: America's social contract is broken, automation, capitalism, democratic capitalism, economic disruption from innovation, economic output, ethics, future of work, Jaron Lanier, Oren Cass, productivity, Shoshana Zuboff, social contract, surveillance capitalism, The Once and Future Worker, Who Owns the Future?, work-based priorities

Companies That Wear Their Values on Their Sleeves

March 31, 2019 By David Griesing

We lead with our values because it’s easier to connect with somebody who shares our priorities, but it’s a trickier proposition for companies that want our loyalty because we have rarely been so divided about what is important to us. 

Deepening divisions over our common commitments make Apple’s roll-out of a suite of new services this week both riveting—and potentially fraught.

As the company’s newly announced services like AppleTV+ and AppleNews+ tie themselves more closely to Hollywood and the cover stories that sell glossy magazines, is Apple cloaking itself in values that could alienate or confuse many of the customers it aims to bond with more closely?

In 1997, on the eve of launching a new, national advertising campaign, Steve Jobs gave a short talk about the link between Apple’s values and its customers. “At our core, we believe in making the world a better place,” he said.  So our ads will “honor people who have changed the world for the better—[because] if they ever used a computer, it would be a Mac.”  In its marketing, Apple aligned itself with tech innovators like Thomas Edison, a genius who had already changed the world as Jobs and Apple were about to do.

A little more than 20 years later, with Apple having followed in the footsteps of Edison’s light-bulb company to industry dominance, the question is whether it would have a better chance of preserving that dominance by once again aligning itself with technology innovators who have already changed the world, rather than with figures like Steven Spielberg and Oprah Winfrey, who aim to do so through their own approaches to social betterment. To set the stage for your possible reactions, here is a link, beginning with Spielberg and ending with Winfrey, that covers the highlights from Apple’s new services launch this past week in a vivid, all-over-the-place 15 minutes.

I should confess that I have a bit of a horse in this race because I want Apple to keep winning by bringing me world-class products and customer care, but I’m not sure that it can by pursuing services (like entertainment, news and games through the new AppleArcade) that put the company in lockstep with industries that muddy its focus and dilute its values proposition.

Instead of bringing me a global version of Oprah’s book club or more of Steven Spielberg’s progressive reminiscences, I was hoping to hear that Apple would be providing—even in stages over months or years—an elegantly conceived and designed piece of technology that would (1) allow me to cut the cable cord with my internet provider while (2) upgrading me to an interface where I could see and pay for whatever visual programming I choose whenever I want it. An ApplePay-enabled entertainment router. Now that would be the kind of tech innovation that would change my world for the better again (and maybe yours too) while staying true to its founder’s messaging from twenty-odd years ago.  

Tech commentator Christopher Mims talked this week about why Apple was claiming values (like those embedded in Oprah’s notion of social responsibility) to maintain its market share, but he never really debated whether it should do so. I’d argue that Apple should continue to make its own case for the social benefits of its tech solutions instead of mixing its message about priorities with the aspirations of our celebrity culture.

When it comes to Silicon Valley’s mostly hypocritical claims about social responsibility, I start with the skepticism of observers like Anand Giridharadas. To him, Facebook, Google, Amazon and Apple all use values as tools to gain the profits they’re after, covering their self-serving agendas with feel-good marketing.

In a post last October, I discussed some of his observations about Facebook (and by implication, most of the others) in the context of his recent book, Winners Take All.

The problem, said Giridharadas, is that while these companies are always taking credit for the efficiencies and other benefits they have brought, they take no responsibility whatsoever for the harms… In their exercise of corporate social responsibility, there is a mismatch between the solutions that the tech entrepreneurs can and want to bring and the problems we have that need to be solved. “Tending to the public welfare is not an efficiency problem. The work of governing a society is tending to everybody. It’s figuring out universal rules and norms and programs that express the value of the whole and take care of the common welfare.” By contrast, the tech industry sees the world more narrowly. For example, the fake news controversy led Facebook not to a comprehensive solution for providing reliable information but to what Giridharadas calls “the Trying-to-Solve-the-Problem-with-the-Tools-that-Caused-It” quandary.

In the face of judgments like his, I’d argue that Apple should be vetting its corporate messaging with the inside guidance of those who understand the power of values before it squanders the high ground it still holds. 
 
Beyond “sticking with the tech innovations that it’s good at” and the Edison-type analogies that add to its luster, what follows are three proposals for how the company might build on its values-based loyalty while still looking us in the eye as it does so.
 
Each one has Apple talking about what its tech-appreciating customers still care about most when they think—with healthy nostalgia—about all the things that Apple has done for them already.

The fate that the company is aiming to avoid

1.         Apple should keep reminding us about its unparalleled customer service

The unbelievable service I have come to expect from Apple feeds my brand loyalty. I feel that we share the value of trustworthiness. When someone relies on me for something, I stand behind it instead of telling them I don’t have the time or it’s too expensive to fix. For me, Apple has consistently done the same.
 
So I was surprised when I had to argue a bit harder than I thought was necessary for Apple’s battery fix for an older iPhone, and I started following other customer complaints against the company to see if a change of priorities was in the air. Since I’m writing to you on a MacBook Air, I’ll note that problems with the Air’s later-generation keyboards have apparently escalated to the point that national class-action litigation is in the offing. As with the iPhone battery fix, Apple has gone on record as being willing to replace any sticky keyboard for free within four years of purchase, but is it really as easy as it sounds? As recently as last week, a tech reviewer in a national newspaper, after profiling a litany of customer difficulties, made this plea to Apple:

For any Apple engineers and executives reading: This is the experience you’re providing customers who shell out $1200 or more—sometimes a lot more. This is the experience after THREE attempts at this keyboard design.

When you are one of the richest companies in history, relatively inexpensive problems like this need to be solved before they get this far. A reputation for world-class customer service is a terrible thing to waste. Be glad to fix your technology on those rare occasions when it breaks down, and solve the technology problem with these keyboards before you sell or replace any more of them. Don’t make customers who were loyal enough to pay a premium for an Apple device take you to court because they can’t get enough of your attention any other way. Caring for your customers is a core value that needs polishing before its shine begins to fade and your customer loyalty slips away.

2.         Apple should keep telling us how much it’s different from Google, Facebook and Amazon

The uses that the dominant tech platforms are making of our personal data are on everyone’s mind.
 
Beyond invasions of privacy, deep concerns are also being voiced about the impact of “surveillance capitalism” on Western democracy, not only because of meddling with our elections but, even more fundamentally, because of how this new economic model disrupts “the organic reciprocities involving employment and consumption” that undergird democratic market systems. These are profound and increasingly widespread concerns, and Apple for one seems to share them. 
 
This is from another post last October called “Looking Out For the Human Side of Technology”: 

I was also struck this week by Apple CEO Tim Cook’s explosive testimony at a privacy conference organized by the European Union…:
 
‘Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.
 
These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.
 
Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.’
 
‘Weaponized’ technology delivered with ‘military efficiency.’ ‘A data-industrial complex.’ One of the benefits of competition is that rivals call you out, while directing unwanted attention away from themselves…so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where [companies like] Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.
 
If data privacy were as paramount to Apple as it sounded this week, it would be impossible to reconcile Apple’s receiving more than $5 billion a year from Google to make it the default search engine on all Apple devices.

In its star-studded launch of TV, News and Arcade services this week, Apple’s presenters repeatedly reiterated that none of these services would be “ad-selling models” targeting Apple users. These are good reminders about Apple’s values, but while $5 billion is a lot of revenue to forsake if new purchasers of Apple devices got to pick their own search engines, it is also a significant amount of support for an antithetical business model.

Not selling my data to others for use against me, like Apple’s standing behind the functionality of my devices, is a core contributor to the company’s good reputation in my mind, and never more so than today. If Apple continues to differentiate itself from its competitors on the use of our data—and I think that it should—it needs to find ways to be more forthright about its own conflicts of interest while doing so.

When you stand on your values, the competitors you are challenging will always look for the places where your actions are inconsistent with your words. Why let that inconsistency be tomorrow’s headline, Tim? Why not be ready to talk more forthrightly about the quandary with Google, and how the company is trying to address it, before a “gotcha” story like this gets published?

3.         Apple should get ahead of its new services launch by proactively addressing likely problems with real consequences for the rest of us

In its roll-out presentation, Apple announced plans for a new service that will link players to games that Apple will be creating. Few tech areas have seen more startling impacts from the behavioral data being gathered from players by those behind these on-line games. I recently talked here about how the programmers and monitors behind Fortnite and the updated Tetris are using data on how players react to as many as 200 “persuasive design elements” in these games to enhance the intensity of the player experience while making it more addictive to boys and other susceptible populations. 

Apple’s engineers know about these issues already. Its programmers are making its new games ready for primetime as soon as next fall. To differentiate itself from others in the on-line gaming industry, to strike a more principled note than its competitors have, and to broaden the scope of Apple’s values when it comes to personal data, the company could tell us some or all of the following in the coming months:

-whether it will be using behavioral data it generates from players through real time play to make its games more absorbing or addictive;

-whether it intends to restrict certain classes of users (like pre-teen boys) from playing certain games or restrict the hours that they can play them;

-what other safeguards it will be implementing to limit the amount of “player attention” that these games will be capturing;

-whether it will be selling game-related merchandise in the Apple store so its financial incentives to encourage extensive game-playing are clear from the outset; and

-whether it will be using data about player behavior to encourage improved learning, collaborative problem-solving, community building and other so-called “pro-social” skills in any of the games it will be offering.

I have no reason to doubt that Apple is serious about protecting the user data that its devices and services generate. Its new venture into gaming provides an opportunity to build on Apple’s reputation for safeguarding the use of its customers’ information. Tim Cook and Apple need to be talking to the rest of us, both now and next fall, about how it will be applying its data-related values to problems its customers care about today in the brave new world of on-line gaming.

+ + +

Apple’s stated values will hold its current customers and attract new ones when there is “a match” between the company’s solutions and the problems the rest of us have that need solving. Affiliation and loyalty grow when there are shared priorities in communities, in politics and in the marketplace.

That means Apple should keep talking about what its tech-appreciating customers care about most in the light of the wonders that Apple has given us already. Despite its recently announced forays into entertainment, the company should never take its eye too far from what it does best—which is to make world-changing devices—even when they take more time to develop than short-term financial performance seems to demand. 

When a company knows what it is and acts accordingly, it can always take risks for the rewards that can come from wearing its values on its sleeves. 

This post was adapted from my March 31, 2019 newsletter. You can subscribe (to the right) and receive it in your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work Tagged With: Anand Giridharadas, Apple, behavioral data, Christopher Mims, core corporate values, corporate values, customer service, data, gaming, personal data use, priorities, Steve Jobs, surveillance capitalism, tech platforms, Tim Cook, values
