David Griesing | Work Life Reward Author | Philadelphia


We’re All Acting Like Dogs Today

July 29, 2019 By David Griesing

Saul Steinberg in the New Yorker, January 12, 1976

I recently read that dogs—through the imperatives of evolution—have developed expressions that invite positive interactions from their humans like “What a good boy” or a scratch behind the ears whenever, say, Wally looks me in the eye with a kind of urgency. There’s urgency all right, because he’s after something more than just my words or my touch. 
 
The reward that he really wants comes after this hoped-for interaction. It’s a little squirt of oxytocin, a hormone and neuropeptide that strengthens bonding by making me, and then both of us together, feel good about our connection.
 
As you might expect, Wally looks at me a lot when I’m working and I almost always respond. How could anyone refuse that face? Besides, it gives whatever workspace I’m in a positive charge that can linger all day.

Social media has also learned that “likes”—or almost any kind of interaction by other people (or machines) with our pictures and posts—produce similar oxytocin squirts in those who are doing the posting. We’re not just putting something out there; we’re after something that’s both measurable and satisfying in return.

Of course, the caution light flashes when social media users begin to crave these bursts of chemical approval like Wally does, or to feel rejected when the “likes” aren’t coming fast enough. It’s a feedback loop of craving and approval. Will they like us at least as much as, and maybe more than, they did the last time we were here? That draw keeps us on these platforms longer than we want to stay and always brings us back for more.

Social scientists have been telling us for years that craving approval for our contributions (along with not wanting to miss out) drives social media and cell-phone addiction in young people under 25. They are particularly susceptible to its lures because the pre-frontal cortex, the so-called seat of good judgment in their brains, is still developing. Of course, the ability to determine what’s good and bad for you is underdeveloped in many older people too—I just never thought that included me.

So how I felt when I stopped my daily posting on Instagram three weeks ago came as a definite comeuppance. Until then I thought I had too much “good sense” to allow myself to be manipulated in these ways.

For the past six years, I’ve posted a photo on Instagram (or IG) almost every day. I told myself that regular picture-taking would make me look at the world more closely while, at the same time, making me better at capturing what I saw. It would give me a cache of visual memories of where I’d been and what I’d been doing, and posting on IG gave me a chance to share them with others.

In recent years, I’d regularly get around 50 “likes” for each photo along with upbeat comments from strangers in Yemen, Moscow and Beirut as well as from people I actually know. The volume and reach of approval wasn’t great by Rihanna standards, but as much as half of it would always come in the first few minutes after posting. I’d generally upload my images before getting out of bed in the morning, so for years now I’ve been starting my days with a series of “feel good” oxytocin bursts.

Of course, you know what happened next. Going “cold turkey” on Instagram produced symptoms that felt exactly like withdrawal. It recalled the aftermath of cutting back on carbs a few years ago or, after I left the Coast Guard, giving up nicotine. Noticeable. Physical. In the days that followed, I’d find myself repeatedly gazing at my phone screen for notifications of likes or comments that were no longer coming. Or worse, I’d follow identical-looking notifications inviting me to check other people’s pictures and stories, lures that felt like reminders of the boosts I was no longer getting. I felt “cut off” from something that had seemed both alive and necessary.

It’s one thing to read about social media or cell-phone addiction and accept its downsides as a mental exercise, quite another to feel withdrawal symptoms after quitting one of them.

Unlike the Food & Drug Administration, I didn’t need anything more than my own clinical trial to tell me about the forces at play here. At the same time that IG owner Mark Zuckerberg was engineering what felt like my addiction to his platform, he was also targeting me with ads for things that (I’m sorry to say) I found myself wanting more and more often. That’s because Instagram had been learning all along what I was interested in whenever I hovered over one of its ads or followed an enticing link.

In other words, I’d been addicted in order to soften me up for buying stuff that IG had learned I was likely to want, in a retail exchange that effectively made IG and Mark Zuckerberg the middlemen in every sale. IG’s oxytocin machine had turned me into a captive audience, intentionally rendered susceptible to buying whatever it was hawking.

That seems both manipulative and underhanded to me.

It’s one thing to write about “loss of autonomy” to the on-line tech giants; it is another to have felt a measure of that loss.

So where does this leave me, or any of us?

How do lawmakers and regulators limit (or prevent) subtle but nonetheless real chemical dependency when it’s induced by a tech platform?

Is breaking the ad-based business models that turn so many of us into captive buyers even possible in a market system that has used advertising to stoke sales for more than 200 years? Can our consumer-oriented economy turn its back on what may be the most effective sales model ever invented?

To think that we are grappling with either of these questions today would be an illusion.

The U.S. Federal Trade Commission has just fined Facebook (which owns IG) for failing to implement and enforce the privacy policies that it had promised to implement and enforce years ago. The FTC also mandated oversight of Zuckerberg personally; because he has effective ownership control of Facebook, his board of directors, unlike the boards of other public companies, can’t really hold his feet to the fire. But neither the fine nor this new oversight mechanism challenges the company’s underlying business model, which is to (1) induce an oxytocin dependency in its users; (2) gather their personal data while they are feeling good about having their cravings satisfied; (3) sell access to that personal data to advertisers; and (4) profit from the ads that are aimed at users who either don’t know or don’t care that they are being seduced in this way.

Recently announced antitrust investigations are aimed at different problems. The Justice Department, the FTC and Congress will be questioning the size of companies like Facebook and their dominance over competitors. One remedy might break Facebook into smaller pieces (like undoing its 2012 purchase of Instagram). However, these investigations are not about challenging a business model that induces dependency in its users, eavesdrops on their behavior both on-site and off of it, and then turns them into consumers of the products on its shelves. The best that can be hoped for is that some of these dominant platforms will be cut down to size and have some of their anti-competitive practices curtailed.

Even the data-privacy initiatives that some are proposing are unlikely to change this business model. Their most likely result is that users who want to restrict access to, and use of, their personal information will have to pay for the privilege of utilizing Facebook or Google or migrate to new privacy-protecting platforms that will be coming on-line. I profiled one of them, called Solid, on this page a few weeks back.

Since it looks like we’ll be stuck in this brave new world for a while, why does it matter that we’re being misused in this way?

Personal behavior has always been influenced by whatever “the Joneses” were buying or doing next door (if you were desperate enough to keep up with them). In high school you changed what you were wearing or who you were hanging out with if you wanted to be seen as one of the cool kids. Realizing that your hero, James Bond, wears an Omega watch might make you want to buy one too. But the influence to buy or to imitate that I’m describing here with Instagram feels new, different and more invasive, like we’ve entered the realm of science fiction.

Social media companies like Facebook and Instagram are using psychological power that we’ve more or less given them to remove some of the freedom in our choices so that they, in turn, can make Midas kingdoms of money off of us. And perhaps their best trick of all is that you only feel the ache of the dependency that kept you in their rabbit holes—and how they conditioned you to respond once you were in them—after you decide to leave.

Saul Steinberg in the New Yorker, November 16, 1968

Maybe the scariest part of this was my knowing better, but acquiescing anyway, for all of those years. 
 
It’s particularly alarming given my belief that autonomy (along with generosity) are the most important qualities that I have.
 
I guess I had to feel what had happened to me in order to understand the subtlety of my addiction, the loss of freedom that my cravings for connection had induced, and my susceptibility to being used, against my will, by strangers for their own, very different purposes.
 
By delivering “warm and fuzzies” every day and getting me to stay for their commercials, Instagram became my small experience of mind control and Big Brother.
 
Over the past few weeks, when I’ve seen people looking for something in their phones, I’ve thought differently about what they’re doing. That’s because I still feel some of the need for whatever they may be looking for too.
 
It gives a whole new meaning to “the dog days” this summer.

+ + +

I’d love to hear from you if you’ve had a similar experience with a social network like Facebook or Instagram. If we don’t end up talking before then, I’ll see you next week.

This post was adapted from my July 28, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Building Your Values into Your Work, Daily Preparation, Using Humor Effectively, Work & Life Rewards Tagged With: addiction and withdrawal, addiction to social media, Big Brother, dog days, facebook, Instagram, manipulation, mind control, oxytocin, prevention, regulation, safeguards, Saul Steinberg, seat of good judgment, social media, social networks

Communities Rise From the Wreckage

July 22, 2019 By David Griesing

J. M. W. Turner, “Snow Storm – Steam-Boat off a Harbour’s Mouth” (1842)

Some days it’s good to be reminded.
 
I saw the aftermath of a terrible car and motorcycle accident a few days ago, and couldn’t help being caught in its blast radius because its impact reverberated almost to my doorstep.
 
From the epicenter, I heard the wailing of civilian rescuers huddled over what was surely the rider, the motorcycle he’d been on strewn in pieces a few feet away. Several cars had stopped already and knots of onlookers were clustered at the intersection’s nearest corners. 
 
Traffic had backed up for much of the very long block and was throbbing to break through. One neighbor or pedestrian wearing a bright green shirt took to the center of the street, not embodying “Go” but shouting just the opposite: “You’ll have to turn around,” as one car defiantly entered the empty, opposing lane to push through his impatience. “Really,” the Green Man countered, “are you in such a hurry that you’re willing to risk more injuries?”  
 
At their confrontation I thought of going back inside, but feeling his protectiveness too, I strained for a look at the aura of assistance that was closer in than this spontaneous traffic monitor, who was bravely putting his own safety on the line in front of more cars that were feinting to get through.
 
Just then, an equally improvised town crier–perhaps sensing the ambivalence of our sympathies–shouted: “It was an illegal turn, the motorcycle was not at fault,” because she too may have been assuming that it was. In the murmuring that followed, it also became clear that the illegal turner had fled the scene, which made the witnesses and passers-by seem to move even closer in, as if to shelter the body that had been left alone in the middle of two city streets. Surely, it wasn’t just moth-to-flame interest that held us here. I tried to gather my vaguer explanations before another driver powered through the threads and associations, or the sirens arrived.
 
They ended up converging on what Fred Rogers had said one day to a kid who regularly visited his Neighborhood:

When I was a boy and I would see scary things in the news, my mother would say to me, ‘Look for the helpers. You will always find people who are helping.’

Some years ago I’d been in the middle of a different accident when I hit a dog who’d run into the road and a whirlwind of help, rubbernecking and road rage had spun around me as I cradled that dog in the middle of an even crazier intersection. It added up to one of my worst and best days for many of these same reasons. I can still feel strangers hovering over me, trying to hold back the traffic like a gathering storm, helping.
 
People help because it allows them to draw on their ability to act–that is, to take matters into their own capable hands–before “experts,” like the police, the ambulance crews and the tow truck drivers who have been on the lookout for wreckage, show up. It’s also acting on the common bond they feel with the fallen, maybe remembering when a stranger helped them or sensing an as-yet unrealized potential to intervene in the same way themselves.
 
During days like today, when selfish and mean can seem front and center, there’s always hope to be found in the helpers. I, for one, never trust that they’ll come, but somehow they always seem to. It’s the surprise of grace. And they were there again this week, gathering around a body that had been hit and broken before it was abandoned.
 
Fortunately, fatefully, these expressions of shared humanity are everywhere when we look for them, from the most extreme circumstances to the most mundane. Writer Rebecca Solnit described “improvisational communities of help” that formed around earthquakes like the one that destroyed much of San Francisco more than a century ago, hurricanes like Katrina that ravaged New Orleans, and the terrorist attacks of 9/11 in New York City. Half a year ago, I wrote about a helping community that materialized in a Walmart parking lot after a terrible fire had nearly obliterated a place that could no longer be called Paradise, California without shaking your head.
 
In a new article, Yale sociologist and physician Nicholas Christakis has created “a record for analysis” out of the information that still exists about survivors of shipwrecks over a period of 400 years (from 1500 to 1900), drawing tentative conclusions about their post-wreckage collaborations and potentially opening up new ways of assembling “data sources” for testing by social scientists.
 
In today’s post, there is more on Solnit’s observations about human nature and Christakis’ thoughts about cooperative behavior after tragedy.

Tracking on Christakis’ research, the pictures here are of turbulent seas and the inevitable shipwrecks, all by perhaps England’s greatest painter, J.M.W. Turner. Each one invites us to imagine what comes next and to be continuously surprised by how good that can be.

J. M. W. Turner, “Shipwreck Off Hastings” (1825)

1. Spontaneous Helping Communities

Rebecca Solnit’s A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster alerted me to how average people became rescuers during several of the worst catastrophes in American history. She explains it, in part, by how “diving in to help” brings a sense of confidence and liberation that is lacking in people’s private lives. For these civilian rescuers, it’s almost experienced as enjoyment:

…if enjoyment is the right word for that sense of immersion in the moment and solidarity with others caused by the rupture in everyday life, an emotion graver than happiness but deeply positive.  We don’t even have a language for that emotion, in which the wonderful comes wrapped in the terrible, joy in sorrow, courage in fear. We cannot welcome disaster, but we can value the responses, both practical and psychological….The desires and possibilities awakened are so powerful they shine even from wreckage, carnage, and ashes.

It’s as if we can see better versions of ourselves as leaders, problem-solvers, caring adults and members of flesh-and-blood communities shining through.
 
Speaking with any assurance “about life today” is always risky, but it does seem that we exalt “minding your own business” as an excuse for not getting more involved while building as much insulation as we can afford between our private and public lives. It may explain low voter turn-out, general political apathy and cynicism, our involvement with arms-length communities (like Facebook) instead of real ones where we have to get our hands dirty and look our neighbors in the eye, and the time we spend in echo-chambers that reinforce our sense of “us vs. them.” But at exactly the same time that our private lives seem paramount, Solnit’s argument is that we also crave more meaningful engagement than we’ll ever find living behind safety glass.
 
It is this longing that has a chance to be satisfied when regular people find themselves helping during a car accident or other emergency. We suddenly feel more fully alive than we felt before. Solnit analogizes the fullness that regular people feel under these circumstances to the solidarity and immediacy that soldiers often experience during wartime.

We have, most of us, a deep desire for this democratic public life, for a voice, for membership, for purpose and meaning that cannot be only personal.  We want larger selves and a larger world. It is part of the seduction of war William James warned against—for life during wartime often serves to bring people into this sense of common cause, sacrifice, absorption in something larger.  Chris Hedges inveighed against it too, in his book War Is a Force That Gives Us Meaning: ‘The enduring attraction of war is this: Even with its destruction and carnage it can give us what we long for in life. It can give us purpose, meaning, a reason for living. Only when we are in the midst of conflict does the shallowness and vapidity of our lives become apparent.  Trivia dominates our conversations and increasingly our airwaves.  And war is an enticing elixir.  It gives us resolve, a cause.  It allows us to be noble.’  Which only brings us back to James’s question:  What is the moral equivalent of war—not the equivalent of its carnage, its xenophobias, its savagery—but its urgency, its meaning, its solidarity?

The clutch of men and women kneeling over and attending to the victim sprawled on the intersection near my front door were finding it. 
 
It’s why the Green Man turned himself into a traffic cop right before my eyes. He was finding something that he needed too.

J. M. W. Turner, “Long Ship’s Lighthouse, Lands End” (1834-5)

2. How Shipwrecked Survivors Came Together Time After Time

In the course of his research about how people behave in social networks, Nicholas Christakis ran an experiment using data he gathered about shipwrecks that took place over the span of four hundred years. He wanted to know how survivors who had “narrowly escaped death and were psychologically traumatized,” often arriving on remote islands “nearly drowned and sometimes naked and wounded,” came together (or broke down) as a network of survivors. His findings tended to support his theory that we carry “innate proclivities to make good societies” even under the most extreme circumstances.
 
His “Lessons from Shipwrecked Micro-Societies” appeared on the on-line platform Quillette a little over a week ago. Christakis acknowledged many of his experiment’s limitations up front:

The people who traveled on ships were not randomly drawn from the human population; they were often serving in the navy or the marines or were enslaved persons, convicts or traders. Shipboard life involved exacting status divisions and command structures to which these people were accustomed. Survivor groups were therefore made up of people who not only frequently came from a single distinctive cultural background (Dutch, Portuguese, English and so on), but who were also part of the various subcultures associated with long ocean voyages during the epoch of exploration. These shipwreck societies were [also]…mostly male.

Still, given the similarities and differences among these survivor groups in terms of race, gender and hierarchy, it is noteworthy that they rarely devolved into a state of selfishness, brutality or violence in their quest to survive. Instead, they tended to model fairness and cooperation in their interactions, a reduction in previous status divisions, noteworthy demonstrations of leadership and the development of new friendships.

Survivor communities manifested cooperation in diverse ways: sharing food equitably; taking care of injured or sick colleagues; working together to dig wells, bury the dead, co-ordinate a defense, or maintain signal fires; or jointly planning to build a boat or secure rescue. In addition to historical documentation of such egalitarian behaviors, archaeological evidence includes the non-separation of subgroups (for example, officers and enlisted men or passengers and servants) into different dwellings, and the presence of collectively built wells or stone signal-fire platforms. Other indirect evidence is found in the accounts of survivors, such as reports of the crew being persuaded, because of good leadership, to engage in dangerous salvage operations. And we have many hints of friendship and camaraderie in these circumstances.

Christakis is best known for demonstrating how networks of strangers can promote positive behaviors and even altruism through “the contagion” that their influence exerts in the course of their interactions. His new book Blueprint: The Evolutionary Origins of a Good Society makes the additional argument that our genes not only affect our personal behaviors but also provide the drive to join together “to make good societies,” whether in on-line networks or in the communities where we live and work. The encouraging data from shipwreck communities that Christakis summarized in his article is part of that argument.
 
In what can seem like a mean-spirited and selfish time, there is hope to be found in the circumstantial evidence that “helping one another” may be hardwired into our genetic programming. 
 
There is hope to be found every time that regular people pitch in to help instead of walking by or refusing to get involved, not because they’re heroic or brave but because they experience something akin to enjoyment and even liberation by doing so.
 
Whenever hope in the future seems to be flagging, look for the helpers. There are several reasons that they’re always around.

This post was adapted from my July 21, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

 

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Work & Life Rewards Tagged With: community, community building, helping communities, improvised roles, Nicholas Christakis, people helping, Rebecca Solnit, spontaneous helping communities

A Course Correction for the World Wide Web

July 15, 2019 By David Griesing

Pink shock and emerald green in the back yard

Emily was here for breakfast on Thursday and I had the morning’s news on public radio—the same stories staring at me from the front page of my newspaper—and she said with millennial weariness: Why are you listening to that?
 
It was a good question, and one I often answer for myself by turning it off because it’s mostly journalist shock, outrage or shame about whatever the newsmakers think is going on. Who needs their sense of urgency in those first moments when you’re still trying to figure out whether you’re fully conscious or even alive?
 
On the other hand, short ventures into my yard quickly provide more hopeful messages. It’s the early summer flush, fueled by plenty of rain, and everything is still emerald green. Summer is telling different stories than the radio, sees different horizons, including the one some kind of watermelon sprawl is trying to reach with its tentacles. These co-venturers aren’t fretting about the future, they’re claiming it by inches and feet, or celebrating it with explosions in the air.
 
While shock, outrage or shame can push you to do good work, it’s hope that sustains it by giving it directions, goals, and better horizons. Everything around the creeping reality of surveillance capitalism triggers all those negative feelings and keeps me snapping at its purveyors with my canines because—well—because it deserves to be pierced and wounded.
 
But then what?
 
That’s where others who have shared these angry and disgusted reactions start showing me more hopeful responses in their own good work–the productive places where gut reactions sometimes enable you to go–and that my radio rarely provides (ok, so now what?) on most mornings.

In the early days of the internet, the geeks and tinkerers in their basements and garages had utopian dreams for this new way of communicating with one another and sharing information. In the thirty-odd years that have followed, many of those creative possibilities have been squandered. What we’ve gotten instead are dominant platforms fueled by the sale of our personal data. They have colonized and monetized the internet not to share its wealth but to hoard whatever they can take for themselves.
 
One would be right in thinking that many of the internet’s inventors are horrified by these developments, that some of them have expressed their shock, outrage and shame, and that a few have ridden these emotions into a drive to find better ways to utilize this world-changing technology. Perhaps first among them is Tim Berners-Lee.

Like some of my backyard’s denizens, he’s never lost sight of the horizons that he saw when he first poked his head above the ground. He also feels responsible for helping to set right what others have gotten so woefully wrong after he made his first breathtaking gift to us thirty years ago.

Angel trumpets

1. The Inventor of the Internet

At one point the joke was that Al Gore had invented the internet but, in fact, it was Tim Berners-Lee who invented the World Wide Web. It’s been three decades since he gathered the critical components, linked them together, and called his creation “the world wide web.” Today, however, he’s profoundly disconcerted by several of the directions that his creation has taken, and he aims to do something about it.
 
In 1989, Berners-Lee didn’t sell his original web architecture and the protocols he assembled, or attempt to get rich from them. He didn’t think anyone should own the internet, so no patents were ever filed or royalties sought. The operating standards, developed by a consortium of companies he convened, were also made available to everyone, without cost, so the world wide web could be rapidly adopted. In 2014, the British Council asked prominent scientists, academics, writers and world leaders to choose the cultural moments that had shaped the world most profoundly in the previous 80 years, and they ranked the invention of the World Wide Web number one. This is how they described Berners-Lee’s invention:

The fastest growing communications medium of all time, the internet has changed the shape of modern life forever. We can connect with each other instantly, all over the world.

Because he gave it away with every good intention, perhaps Berners-Lee has more reasons than anyone to be concerned about the poor use that others have made of it. Instead of remaining the de-centralized communication and information sharing platform he envisioned, the internet still isn’t available everywhere, has frequently been weaponized, and is increasingly controlled by a few dominant platforms for their own private gain. But he’s also convinced that these ill winds can be reversed.
 
He reads and shares an open letter every year on the anniversary of the web’s creation. His March 2018 and March 2019 letters lay out his primary concerns today.
 
Last year, Berners-Lee renewed his commitment “to making sure the web is a free, open, creative space – for everyone. That vision is only possible if we get everyone online, and make sure the web works for people [instead of against them].” After making proposals that aim to expand internet access for the poor (and for poor women and girls in particular), he discusses various ways that the web has failed to work “for us.”

What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared….the fact that power is concentrated among so few companies has made it possible to weaponise the web at scale. In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data.

Additionally troubling is the fact that we’ve left these same companies to police themselves, something they can never do effectively given their incentives to maximize profits instead of social goods. “A legal or regulatory framework that accounts for social objectives may help ease those tensions,” he says.
 
Berners-Lee sees a similar misalignment of incentives between the tech giants and the users they have herded into their platforms.

Two myths currently limit our collective imagination: the myth that advertising is the only possible business model for online companies, and the myth that it’s too late to change the way platforms operate. On both points, we need to be a little more creative.
 
While the problems facing the web are complex and large, I think we should see them as bugs: problems with existing code and software systems that have been created by people – and can be fixed by people. Create a new set of incentives and changes in the code will follow. …Today, I want to challenge us all to have greater ambitions for the web. I want the web to reflect our hopes and fulfill our dreams, rather than magnify our fears and deepen our divisions.
 
As the late internet activist, John Perry Barlow, once said: “A good way to invent the future is to predict it.” It may sound utopian, it may sound impossible to achieve… but I want us to imagine that future and build it.

In March, 2018, most of us didn’t know what Berners-Lee had in mind when he talked about building.
 
This year’s letter mostly elaborates on last year’s themes. In addition to governments “translating laws and regulations for the digital age,” he calls on the tech companies to be a constructive part of the societal conversation (while never mentioning the positive role that their teams of Washington lobbyists might play). In other words, it’s more of a plea, or an attempt to shame them into action, since profits rather than the public interest remain their primary motivator. It is also unclear what he expects from government leaders and regulators as politics becomes more polarized, but he is plainly calling on the web’s theorizers, inventors and commentators, and on its billions of users, to pitch in and help.
 
Berners-Lee proposes a new Contract for the Web, a global collaboration that he launched at the Web Summit in Lisbon last November. That gathering brought together those:

who agree we need to establish clear norms, laws and standards that underpin the web. Those who support it endorse its starting principles and together we are working out the specific commitments in each area. No one group should do this alone, and all input will be appreciated. Governments, companies and citizens are all contributing, and we aim to have a result later this year.

It’s like the founding spiritual leader convening the increasingly divergent members of his flock before setting out on the next leg of the journey.

The web is for everyone, and collectively we hold the power to change it. It won’t be easy. But if we dream a little and work a lot, we can get the web we want.

In the meantime however, while a new Contract for the Web is clearly necessary, it is not where Berners-Lee is pinning all of his hopes.

The seed came from somewhere and now it’s (maybe) making watermelons

2.         An App for an App

The way that the internet was created, any webpage should be accessible from any device that has a web browser, including a smart phone, a personal computer or even an internet-enabled refrigerator. That kind of free access is blocked, however, when the content or the services are locked inside an app and the app distributor (such as Google or Facebook) controls where and how users interact with “what’s inside.” As noted recently in the Guardian: “the rise of the app economy fundamentally bypasses the web, and all the principles associated with it, of openness, interoperability and ease of access.”
 
On the other hand, perhaps the web’s greatest strength has been the ability of almost anyone to build almost anything on top of it. Since Berners-Lee built the web’s foundation and its first couple of floors, he’s well-positioned to build an alternative that provides the openness, interoperability and ease of access that has been lost while also serving the public’s interest in principles like personal data privacy. At the same time that he has been sponsoring a global quest for new standards to govern the internet, Berners-Lee has also been building an alternative infrastructure on top of the internet’s common foundation.
 
One irony is that he’s building it with a new kind of app.
 
Last September, Berners-Lee announced a new, open-source, web-based infrastructure called Solid that he had been working on quietly with colleagues at MIT for several years. “Open-source” means that once the rudimentary structures are made public, anyone can contribute to that infrastructure’s web-based applications. Making the original internet free and widely available led to its rapid adoption, and Berners-Lee is plainly hoping that “open source” will have the same impact on Solid. Shortly after his announcement, an article in TechCrunch reported that open-source developers were already pouring into the Solid platform “in droves.” As Fast Company reported at the time, Berners-Lee’s objective for Solid, and the company behind it called Inrupt, was “to turbocharge a broader movement afoot, among developers around the world, to decentralize the web and take back power from the forces that have profited from centralizing it.” Like a second great awakening.
 
First and foremost, the Solid web infrastructure is intended to give people back control of their personal data on-line. Every data point that’s created in or added to a Solid software application exists in a Solid “pod,” which is an acronym for “personal on-line data store” that can be kept on Solid’s server or anywhere else that a user chooses. Berners-Lee previewed one of the first Solid apps for the Fast Company reporter after his new platform was announced:

On his screen, there is a simple-looking web page with tabs across the top: Tim’s to-do list, his calendar, chats, address book. He built this app–one of the first on Solid–for his personal use. It is simple, spare. In fact, it’s so plain that, at first glance, it’s hard to see its significance. But to Berners-Lee, this is where the revolution begins. The app, using Solid’s decentralized technology, allows Berners-Lee to access all of his data seamlessly–his calendar, his music library, videos, chat, research. It’s like a mashup of Google Drive, Microsoft Outlook, Slack, Spotify, and WhatsApp.

The difference is that his (or your) personal information is secured within a Solid pod from others who might seek to make use of it in some way.
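To make the pod idea concrete, here is a minimal sketch, in Python rather than Solid’s actual web protocols, of the model Berners-Lee is describing: your data sits in a store you control, and each app gets narrow, revocable permission to read it. Every name here is hypothetical, and real Solid apps work through web standards (such as WebID identities and Linked Data) that this toy deliberately leaves out.

```python
# Illustrative sketch only: a toy "personal online data store" (pod).
# All class and method names are hypothetical, not part of the real Solid API.

class Pod:
    """A user-controlled data store with revocable, per-app permissions."""

    def __init__(self, owner):
        self.owner = owner
        self._data = {}       # e.g. {"calendar": [...], "contacts": [...]}
        self._grants = set()  # (app_name, key) pairs the owner has approved

    def put(self, key, value):
        self._data[key] = value

    def grant(self, app, key):
        self._grants.add((app, key))

    def revoke(self, app, key):
        self._grants.discard((app, key))

    def read(self, app, key):
        # Apps never touch the data directly; access flows through the pod,
        # so the owner can withdraw permission at any time.
        if (app, key) not in self._grants:
            raise PermissionError(f"{app} may not read {key!r}")
        return self._data[key]


pod = Pod("tim")
pod.put("calendar", ["standup 9am"])
pod.grant("todo-app", "calendar")
print(pod.read("todo-app", "calendar"))  # the approved app can read
pod.revoke("todo-app", "calendar")       # ...until the owner says otherwise
```

The inversion from today’s platforms is the point: in this model the application visits your data, rather than your data moving permanently into the application’s servers.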
 
Inrupt is the start-up company that Berners-Lee and John Bruce launched to drive development of Solid, secure the necessary funding and transform Solid from a radical idea into a viable platform for businesses and individuals. According to TechCrunch, Inrupt is already gearing up to work on a new digital assistant called Charlie that it describes as “a decentralized version of Alexa.”
 
What will success look like for Inrupt and Solid? A Wired magazine story last February described it this way:

Bruce and Berners-Lee aren’t waiting for the current generation of tech giants to switch to an open and decentralised model; Amazon and Facebook are unlikely to ever give up their user data caches. But they hope their alternative model will be adopted by an increasingly privacy-aware population of web users and the organisations that wish to cater to them. ‘In the web as we envision it, entirely new businesses, ecosystems and opportunities will emerge and thrive, including hosting companies, application providers, enterprise consultants, designers and developers,’ Bruce says. ‘Everyday web users will find incredible value in new kinds of apps that are impossible on today’s web.

In other words, if we dream a little and work a lot, we can get the web that we want. 

+ + + 

At this stage in his life (Berners-Lee is 64) and given his world-bending accomplishments, he could have retired to a beach or mountaintop somewhere to rest on his laurels, but he hasn’t. Instead, because he can, he heeds the call of his discomfort and is diving back in to champion his original vision. It’s the capability and commitment, hope and action that are the arc of all good work.

Telling him that Solid is a pipe-dream would be like telling my backyard encouragers to stop shouting, trumpeting and fruiting.

This post was adapted from my July 14, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Entrepreneurship, Heroes & Other Role Models, Work & Life Rewards Tagged With: acting on hopes, Contract for the Web, data privacy, entrepreneurship, Inrupt, misalignment of incentives, personal online data store, Solid, Tim Berners-Lee

Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing Leave a Comment

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled, personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the right of privacy that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on to these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything that they need to know about us?

Overwhelmed by the draw of  “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.”  Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to relinquish the advantages of these platforms when they are unavailable elsewhere. 
 
So is there really “no way out”?  
 
A rising crescendo of voices is gradually finding a way, and they are coming at it from several different directions.
 
In places like Toronto, London, Helsinki, Chicago and Barcelona, policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1.         Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past 6 months, I’ve talked repeatedly about smart-city technologies, and one of you reached out to me this week wondering: “What (exactly) are these new ‘technologies’?” (Thanks for your question, George!)
 
As a general matter, smart-city technologies gather and analyze information about how a city functions, while improving urban decision-making around that new information. Throughout, these data-gathering,  analyzing, and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
 
Of course, the potential benefits of greater or more equitable access to city services as well as their optimized delivery are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that’s currently being wasted, permitting their reinvestment into areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve been extolling the optimism that drove Toronto to launch its smart-cities initiative called Quayside and how its debate has entered a stormy patch more recently. Amidst the finger pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-to-be-disclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a City-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart cities technology vendors to announce its intentions up front about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective– why is the AI needed and what outcomes is it intended to enable?
  • Use– in what processes and circumstances is the AI appropriate to be used?
  • Impacts– what impacts, good and bad, could the use of AI have on people?
  • Assumptions– what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data– what data is/was the AI trained on, and what are its limitations and potential biases?
  • Inputs– what new data does the AI use when making decisions?
  • Mitigation– what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics– what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight– what human judgment is needed before acting on the AI’s output and who is responsible for ensuring its proper use?
  • Evaluation– how and by what criteria will the effectiveness of the AI in this smart-city system be assessed and by whom?
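Those ten questions lend themselves to a simple, reusable review checklist. The sketch below is my own illustration, not an official Smart London tool: it encodes the questions and flags the ones a given proposal has not yet answered.

```python
# Illustrative only: Smart London's ten questions as a review checklist.
# The questions come from the framework above; the data structure and the
# report function are my own sketch, not anything Smart London publishes.

QUESTIONS = {
    "objective":   "Why is the AI needed and what outcomes is it intended to enable?",
    "use":         "In what processes and circumstances is the AI appropriate to be used?",
    "impacts":     "What impacts, good and bad, could the use of AI have on people?",
    "assumptions": "What assumptions is the AI based on, and what are their limitations and potential biases?",
    "data":        "What data is/was the AI trained on, and what are its limitations and potential biases?",
    "inputs":      "What new data does the AI use when making decisions?",
    "mitigation":  "What actions regulate the negative impacts of the AI's limitations and biases?",
    "ethics":      "What assessment has been made of the ethics of using this AI?",
    "oversight":   "What human judgment is needed before acting on the AI's output?",
    "evaluation":  "How, by what criteria, and by whom will the AI's effectiveness be assessed?",
}

def unanswered(answers):
    """Return the checklist items a proposal has not yet addressed."""
    return [q for q in QUESTIONS if not answers.get(q, "").strip()]

proposal = {"objective": "Optimize bus routing", "use": "Transit scheduling only"}
print(unanswered(proposal))  # eight questions still open
```

A city team could require every smart-city proposal to arrive with all ten fields filled in before public debate even begins.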

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, the initial ventilation process takes a long time and hard work. Moreover, it is difficult (and maybe impossible) to conduct if negotiations with the technology vendor are on-going or that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational tech-driven opportunities are presented to the public.

(AP Photo/David Goldman)

2.         A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process where people who never formed a full-blown preference on an issue like personal data privacy (or were simply reluctant to express it because the trade-offs for “free” platforms seemed acceptable to everybody else) now become more comfortable giving voice to their original qualms. In support, he cites a remarkable study about how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work-lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
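This “unleashing” dynamic works much like the classic threshold models of collective behavior: each person speaks up only after enough others visibly have. The toy simulation below is my own illustration (not from Sunstein’s book), showing how a couple of unconditional dissenters can tip an entire population whose thresholds line up behind them.

```python
# Toy threshold-cascade model (illustrative only).
# Each person privately dislikes the status quo but speaks up only once
# the share of visible dissenters reaches their personal threshold.

def cascade(thresholds):
    """Return how many people ultimately speak up."""
    speaking = 0
    changed = True
    while changed:
        changed = False
        share = speaking / len(thresholds)
        # Everyone whose threshold is now met joins the visible dissenters.
        new_total = sum(1 for t in thresholds if t <= share)
        if new_total > speaking:
            speaking, changed = new_total, True
    return speaking

# Two people with threshold 0 dissent unconditionally, and everyone whose
# thresholds form an unbroken chain behind them follows, one wave at a time.
population = [0.0, 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(cascade(population))  # prints 10: the whole population tips
```

Remove the two zeros and the cascade never starts, which is the point of Sunstein’s signaling argument: visible early movers change what everyone else believes about everyone else.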
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about the issues and to champion what is important to us in dialogue with the people who live and work alongside us. More grounds for protecting our personal information are coming out of the smart-cities debate, and we are already deciding where new privacy lines should be drawn around us.

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: Ai, artificial intelligence, Cass Sunstein, dataveillance, democracy, how change happens, norms, personal data brokers, personal privacy, privacy, Quayside, Sidewalk Labs, smart cities, Smart City, surveillance capitalism, Toronto, values

Democracy Collides With Technology in Smart Cities

July 1, 2019 By David Griesing Leave a Comment

There is a difference between new technology we’ve already adopted without thinking it through and new technology that we still have the chance to tame before its harms start overwhelming its benefits.
 
Think about Google, Facebook, Apple and Amazon with their now essential products and services. We fell in love with their whiz-bang conveniences so quickly that their innovations became a part of our lives before we recognized their downsides. Unfortunately, now that they’ve gotten us hooked, it’s also become our problem (or our struggling regulators’ problem) to manage the harms caused by their products and services.
 
-For Facebook and Google, those disruptions include surveillance dominated business models that compromise our privacy (and maybe our autonomy) when it comes to our consumer, political and social choices.
 
-For Apple, it’s the impact of constant smart phone distraction on young people whose brain power and ability to focus are still developing, and on the rest of us who look at our phones more than our partners, children or dogs.
 
-For these companies (along with Amazon), it’s also been the elimination of competitors, jobs and job-related community benefits without their upholding the other leg of the social contract, which is to give back to the economy they are profiting from by creating new jobs and benefits that can help us sustain flourishing communities.
 
Since we’ll never relinquish the conveniences these tech companies have brought, we’ll be struggling to limit their associated damages for a very long time. But a distinction is important here. 
 
The problem is not with these innovations but in how we adopted them. Their amazing advantages overwhelmed our ability as consumers to step back and see everything that we were getting into before we got hooked. Put another way, the capitalist imperative to profit quickly from transformative products and services overwhelmed the small number of visionaries who were trying to imagine for the rest of us where all of the alligators were lurking.
 
That is not the case with the new smart city initiatives that cities around the world have begun to explore. 
 
Burned and chastened, Toronto’s leaders and citizens brought a critical mass of caution (as well as outrage) to Google affiliate Sidewalk Labs’ proposed smart-city initiative. Active and informed guardians of the social contract are now negotiating with a profit-driven company like Sidewalk Labs to ensure that its innovations will also serve their city’s long- and short-term needs while minimizing the foreseeable harms.
 
Technology is only as good as the people who are managing it.

For the smart cities of the future, that means engaging everybody who could be benefitted as well as everybody who could be harmed long before these innovations “go live.” A fundamentally different value proposition becomes possible when democracy has enough time to collide with the prospects of powerful, life-changing technologies.

Irene Williams used remnants from football jerseys and shoulder pads to portray her local environs in Strip Quilt, 1960-69

1.         Smart Cities are Rational, Efficient and Human

I took a couple of hours off from work this week to visit a small exhibition of new arrivals at the Philadelphia Museum of Art. 
 
To the extent that I’ve collected anything over the years, it has been African art and textiles, mostly because locals had been collecting these artifacts for years, interesting and affordable items would come up for sale from time to time, I learned about the traditions behind the wood carvings or bark cloth I was drawn to, and gradually got hooked on their radically different ways of seeing the world. 
 
Some of those perspectives—particularly regarding reduction of familiar, natural forms to abstracted ones—extended into the homespun arts of the American South, particularly in the Mississippi Delta. 
 
A dozen or so years ago, quilts from rural Alabama communities like Gee’s Bend captured the art world’s attention, and my local museum just acquired some of these quilts along with other representational arts that came out of the former slave traditions in the American South. The picture at the top (of Loretta Pettway’s Roman Stripes Variation Quilt) and the other pictures here are from that new collection.
 
One echo in these quilts to smart cities is how they represent “maps” of their Delta communities, including rooflines, pathways and garden plots as a bird that was flying over, or even God, might see them. There is rationality—often a grid—but also local variation, points of human origination that are integral to their composition. As a uniquely American art form, these works can be read to combine the essential elements of a small community in boldly stylized ways. 
 
In their economy and how they incorporate their creator’s lived experiences, I don’t think that it’s too much of a stretch to say that they capture the essence of community that’s also coming into focus in smart city planning.
 
Earlier this year, I wrote about Toronto’s smart city initiative in two posts. The first was Whose Values Will Drive Our Future?–the citizens who will be most affected by smart city technologies or the tech companies that provide them. The second was The Human Purpose Behind Smart Cities. Each applauded Toronto for using cutting edge approaches to reclaim its Quayside neighborhood while also identifying some of the concerns that city leaders and residents will have to bear in mind for a community supported roll-out. 
 
For example, Robert Kitchin flagged seven “dangers” that haunt smart city plans as they’re drawn up and implemented. They are the dangers of taking a one-size-fits-all-cities approach; assuming the initiative is objective and “scientific” instead of biased; believing that complex social problems can be reduced to technology hurdles; having smart city technologies replacing key government functions as “cost savings” or otherwise; creating brittle and hackable tech systems that become impossible to maintain; being victimized as citizens by pervasive “dataveillance”; and reinforcing existing power structures and inequalities instead of improving social conditions.
 
Google’s Sidewalk Labs (“Sidewalk”) came out with its Master Innovation and Development Plan (“Plan”) for Toronto’s Quayside neighborhood this week. Unfortunately, against a crescendo of outrage over tech company surveillance and data privacy during the past 9 months, Sidewalk did a poor job of staying in front of the public-relations curve by failing to consult the community regularly about its intentions. The result has been rising skepticism among Toronto’s leaders and citizens about whether Sidewalk can be trusted to deliver what it promised.
 
Toronto’s smart cities initiative is managed by an umbrella entity called Waterfront Toronto that was created by the city’s municipal, provincial and national governments. Sidewalk also has a stake in that entity, which has a high-powered board and several advisory boards with community representatives.

Last October one of those board members, Ann Cavoukian, who had recently been Ontario’s information and privacy commissioner, resigned in protest because she came to believe that Sidewalk was reneging on its promise to render all personal data anonymous immediately after it was collected. She worried that Sidewalk’s data collection technologies might identify people’s faces or license plates and potentially be used for corporate profit, despite Sidewalk’s public assurance that it would never market citizen-specific data. Cavoukian felt that leaving anonymity enforcement to a new and vaguely described “data trust” that Sidewalk intended to propose was unacceptable, and that “[c]itizens in the area don’t feel that they’ve been consulted appropriately” about how their privacy would be protected either.
 
This April, a civil liberties coalition sued the three Canadian governments that created Waterfront Toronto over privacy concerns, a suit which appeared premature because Sidewalk’s actual Plan had yet to be submitted. When Sidewalk finally did so this week, the governments’ senior representative at Waterfront Toronto publicly argued that the Plan goes “beyond the scope of the project initially proposed” by, among other things, including significantly more City property than was originally intended and “demanding” that the City’s existing transit network be extended to Quayside.
 
Data privacy and surveillance concerns also persisted. A story this week about the Plan announcement and government push-back also included criticism that Sidewalk “is coloring outside the lines” by proposing a governance structure like “the data trust” to moderate privacy issues instead of leaving that issue to Waterfront Toronto’s government stakeholders. While Sidewalk said it welcomed this kind of back and forth, there is no denying that Toronto’s smart city dreams have lost a great deal of luster since they were first floated.
 
How might things have been different?
 
While it’s a longer story for another day, some years ago I was project lead on importing liquefied natural gas into Philadelphia’s port, an initiative that promised to bring over $1 billion in new revenues to the city. Unfortunately, while we were finalizing our plans with builders and suppliers, concerns that the Liberty Bell would be taken out by gas explosions (and other community reactions) were inadequately “ventilated,” depriving the project of key political sponsorship and weakening its chances for success. Other factors ultimately doomed this LNG project, but our failure to consistently build support in a concerned community certainly contributed. Despite Sidewalk’s having a vaunted community consensus builder in Dan Doctoroff at its helm, Sidewalk (and Google) appear to be fumbling this same ball in Toronto today.
 
My experience, along with Doctoroff’s and others’, goes some distance towards proving why profit-oriented companies are singularly ill-suited to take the lead on transformative, community-impacting projects. Why? Because it’s so difficult to justify financially the years of discussions and consensus building that are necessary before an implementation plan can even be drafted. Capitalism is efficient and “economical,” but democracy, well, it’s far less so.
 
Put another way, if I’d had the time and funding to build a city-wide consensus around how significant new LNG revenues would benefit Philadelphia’s residents before the financial deals for supply, construction and distribution were struck, powerful civic support could have been built for the project, and the problems that ultimately ended it might never have materialized.
 
This anecdotal evidence from Toronto and Philadelphia raises some serious questions:
 
-Should any technology that promises to transform people’s lives in fundamental ways (like smart cities or smart phones) be “held in abeyance” from the marketplace until its impacts can be debated and necessary safeguards put in place?
 
-Might a mandated “quiet period” (like the one regulators impose in the months before public stock offerings) be better than leaving tech companies to bombard us with seductive products that make them richer but many of us poorer, because we never had a chance to consider the fall-out from these products beforehand?
 
-Should the economic model that brings technological innovations with these kinds of impacts to market be fundamentally changed to accommodate advance opportunities for the rest of us to learn what the necessary questions are, ask them and consider the answers we receive?

Mama’s Song, Mary Lee Bendolph

3. An Unintended but Better Way With Self-Driving Cars

I can’t answer these questions today, but surely they’re worth asking and returning to.
 
Instead, I’m recalling some of the data being accumulated today about self-driving/autonomous car technology, so that impacted communities can make at least some of their moral and other preferences clear long before this transformative technology is brought to market and seduces us into dependency on it. As noted in a post from last November:

One way to help determine what the future should look like and how it should operate is to ask people—lots of them—what they’d like to see and what they’re concerned about…In the so-called Moral Machine Experiment, these researchers asked people around the world for their preferences regarding the moral choices that autonomous cars will be called upon to make so that this new technology can match human values as well as its developers’ profit motives.

For example, if a self-driving car has to choose between hitting one person in its way or another, should it be the 6-year old or the 60-year old? People in different parts of the world would make different choices and it takes sustained investments of time and effort to gather those viewpoints.

If peoples’ moral preferences can be taken into account beforehand, the public might be able to recognize “the human face” in a new technology from the beginning instead of having to attempt damage control once that technology is in use.

Public advocates, like those in Toronto who filed suit in April, and the other Cassandras identifying potential problems also deserve a hearing. Every transformative project’s (or product’s or service’s) dissenters, as well as its proponents, need opportunities to persuade those who have yet to make up their minds about whether the project is good for them before it’s on the runway or has already taken off.

Following their commentary and grappling with their concerns removes some of the dazzle in our [initial] hopes and grounds them more firmly in reality early on.

Unlike the smart city technology that Sidewalk Labs already has ready for Toronto, the artificial intelligence systems behind autonomous vehicles are, it has only recently become clear, unable to make the kinds of decisions that “take into mind” a community’s moral preferences. In effect, the rush to implement this disruptive technology was stalled by problems with the technology itself. But this kind of pause is the exception, not the rule. The rush to market and its associated profits is powerful, making “breathers to become smarter” before product launches like this one uncommon.
 
Once again, we need to consider whether such public ventilation periods should be imposed. 
 
Is there any better way to aim for the community balance between rationality and efficiency on the one hand, and human variation and need on the other, that was captured by some visionary artists from the Mississippi delta?
 

+ + + 


Next week, I’m thinking about a follow-up post on smart cities that uses the “seven dangers” discussed above as a springboard for the necessary follow-up questions that Torontonians (along with the rest of us) should be asking and debating now as the tech companies aim to bring us smarter and better cities. In that regard, I’d be grateful for your thoughts on how innovation can advance when democracy gets involved.

This post was adapted from my June 30, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.


