David Griesing | Work Life Reward Author | Philadelphia


The Discipline of Limits

August 5, 2019 By David Griesing

Creativity needs limits, like when it happens within a window frame. Accomplishing anything worthwhile needs limits too. But it’s remarkable how rarely we let our limited resources and true priorities constrain what we hope to achieve.
 
When you step outside and it’s 109 degrees in the full bloom of humidity, those limits are dictated by the weather—and it’s a sweaty, grungy mess to fight them. The only workable solution is to pare back, accept what you cannot change and adapt to it. What a relief it was recently to put some of my “To Do List” items into a box called “September.”
 
Our hopes seldom conform to reality, however. Watching the presidential candidate debates this week, I saw vision after vision that seemed unconstrained by limits. Limited tax dollars. Limited attention spans. Limited appetites for 4 more years of disruption.

Thirty seconds to respond to a complicated debate question deprives every answer of context, but even when you dig into these politicians’ policy pronouncements, they tend to be grand, stand-alone proposals rather than a considered appraisal of how all of their promises could ever be realized. A workable vision says: “I’m going to try to achieve this one big thing with the understanding that I won’t be able to accomplish this, this and that.” Instead, many candidates seem to want us to believe that they’ll deliver every pie they’re throwing into the sky.
 
In one of the working chapters of my book, I ask readers to use their journals to list the 10 most important things they are working for. As examples, I propose things like: sending your kids to the right school, buying a home of your own, or having enough time each day to play with your dog. Limiting your choices to 10 is the challenge, because what doesn’t make your list is as critical as what does. Since none of us can “have it all,”  the reality (and the magic) comes from how you negotiate the trade-offs.  You are deciding what is most important to you and, by necessity, what is less so.  
 
While thinking about writing to you this week, I came across the following on Twitter. It is from the Collaborative Fund, a venture capital firm based in New York City that describes what it does with a clever tagline: collaborative = (people x stuff) + new technologies ^ creativity. On July 24, @collabfund posted these alternate ways of feeling rich:

Alt forms of rich:

You can go to bed and wake up when you want to.
You can buy any book you want.
You have time to read those books.
You have time to exercise.
A short commute.
No dress code seven days a week.
Liberal use of the thermostat.

Somebody at @collabfund was using the month of August to look at what is important to them and (it seems) what is less so.

When a motion detector senses someone passing by, the balls move in an accompanying wave motion. Dutch studio Staat designed this window (along with the others pictured here) for Nike at Selfridges in London.

Department store windows present endless opportunities to maximize creativity within limits. The story is told by what’s included and by everything that’s not.

By necessity, you have to tell your story within the window’s frame, and one of the genius elements in this Nike campaign is how it interactively includes everyone who walks by. The image at the top of the page shows the designers inviting passersby to stand on an illuminated spot, jump as high as they can, and see their effort—in comparison to every other effort—light up on the window’s scoreboard. The step-by-step wave action of the colored balls above is another demonstration of how much can be accomplished within the fixed limits of a street-facing window.

This is also the genius in every values framework, whether it belongs to Montaigne, Spinoza or Henry Adams (three that resonate strongly with me) or in the working equivalents that everyone who grapples with a “most important to me” exercise comes up with. It’s a winnowing of priorities.
 
Frameworks like this generally start with one or two key values that dictate the kinds of things that people identify as foundational for them. If you start with a value like “personal freedom,” certain priorities tend to follow. “Material security” would have you pursuing a very different set of goals with your limited time and effort, while “a healthy world” would yield other benchmarks, like lots of exercise and time outside.

Once you adopt a values framework that fits you, its discipline imposes the necessary limits. If I am focusing my energy and resources on this, I either cannot do this other thing at all, or have already accepted that it will be accomplished with the time and energy I have left. When it’s operational, a values framework imposes its equilibrium like a 109-degree day.
 
One of the reasons I became interested in ethics is that all too often I want to accomplish everything I’ve put on my plate. It’s a recipe for meltdown because you’re always behind your own 8-ball. So at the same time that I’m drawn to ambitious people with compelling visions, I’ve learned the hard way to be skeptical when it sounds like too many dreams and too little reality.
 
This hard-won wisdom is why I loved parts of Christopher Demuth’s recent speech to conservative policy makers. Demuth is a distinguished fellow at the Hudson Institute, an organization that “challenges conventional thinking and helps manage strategic transitions to the future.”
Demuth’s speech was about the advantages of nationalism, and I found that it effectively challenged my more conventional thinking, particularly when he said:

An important virtue of the nation-state is that it is a constraint. The contemporary peaceable nation takes what it is given—its borders and territory and resources, its citizens and tribes, its affinities and antagonisms, its history and traditions and ways of getting along—and makes the most of them….
 
One of the most arresting features of modern life in the rich democracies is the pervasive rejection of the idea of natural constraint. One sees this throughout culture high and low, social relations, and politics and government. Where a boundary exists, it is there to be transgressed. Where a hardship exists, it must be because of an injustice, which we can remedy if only we have the will. Today’s recipe for success and happiness is not to manage within limits and accommodate constraints, but to keep one’s options open….
 
I do not know where this impulse came from. Perhaps wealth and technology have relieved so many age-old constraints that we have come to imagine we can live with no constraint at all. Whatever the cause, it is a revolt against reality. Resources are limited. Lasting achievement is possible only within a structure. My own favorite field, economics, is out of favor these days, but it has at least one profound truth, that of opportunity cost: Everything we do necessarily involves not doing something else….
 
The American nation-state is rich, powerful and less constrained than any other, yet it is much more constrained than we have led ourselves to believe. Thinking of ourselves as a nation-state is, as Peter Thiel has observed, a means of unromantic self-knowledge. National conservatism, by directing our attention to our nation as it is—warts, wonders and all—is a means of reminding ourselves of our dependence on one another in the here and now, and of facing up to the constraints that are the sources of productive freedom.

Does this mean we can’t aspire to do better within our national boundaries? Of course not. At the end of the day, it simply means that in governance—as in our lives and work—it is a question of advancing productively on what’s most important to us, much closer to home.

In this regard, I’d argue that those who aspire widely and dream expansively in politics are like the dissenters in my June 3 and June 10, 2018 posts, pulling “the less-decided middle” in their direction. On the other hand, these dissenters are rarely the ones who can also bring enough citizens together so we can move forward and actually accomplish something. That kind of consensus building requires a different skill set entirely, and it’s that unromantic man or woman who can help us “manage within our limits” that I’m hoping to find in the current crop of candidates.
 
As for my priorities as a citizen, I believe that above all else the franchise that is American democracy needs to be re-built. While one exception for me to Demuth’s nation-focus would be climate change (given its global reach and implications), my over-arching citizen priority is to rebuild this country’s internal dynamics. To do so, we need economic policies that actively support thriving communities and families, with active contributions coming from businesses and employers in the ways that they made after World War II. I agree with several of Oren Cass’s and Shoshana Zuboff’s observations in this regard; I discussed his ideas in A Winter of Work Needs More Color and both of their ideas in The Social Contract Around Our Work is Broken, posts from earlier this year.

Within the frame of thriving communities and families, we’d go to work rebuilding our public infrastructure of roads, bridges, dams, harbors, airports and mass transit because we depend on our built environment every day–and it’s falling apart. Our social foundations could also be strengthened in various ways by new, clean energy policies.

Wall Street Journal columnist Greg Ip recently argued for pursuing cleaner energy through market mechanisms like taxes and emissions caps instead of massive government interventions that gamble (often wrongly) when picking winners in a complex marketplace like this and end up being many times more costly.

To many Green New Deal advocates [who want to eliminate fossil fuels altogether in the short term and throw the full weight of the government behind that effort], this isn’t good enough. Replacing coal with natural gas only reduces carbon-dioxide emissions; it doesn’t eliminate them. [However,] this misses the point. The climate doesn’t care if we eliminate a ton of carbon dioxide by replacing coal with natural gas or solar power. But taxpayers and consumers do care. So long as money is limited, each dollar should purchase the largest emission reduction possible. And the market will always be vastly better at this than regulators because it will find solutions that regulators have never thought of.

Once again, his argument is for acting as productively as possible within our limits. Not only will taxpayers appreciate the thrift in this approach, but with the appropriate policy signals, more local jobs can also be created as clean energy companies learn how to grow within these new policy boundaries. That growth would produce financial benefits (like fuller employment and better jobs) along with non-financial ones (like confidence, optimism and greater well-being) for American communities and families.
 
In the end, my priorities may not be your priorities—and they don’t have to be. What’s essential is deciding “what’s more and less important to you” and identifying leaders who will practice the art of the possible while realizing your shared priorities.

It’s a discipline of limits that frames the good work of every citizen.

At the point of this extravagant plume: one of Nike’s sneakers.

If you’re interested in the creativity within limits that was achieved in the window displays that Staat designed for Nike, here is a link to a video that shows some of their interactive elements.
 
The book I’ve been writing is an extended conversation on how your work ethic determines the work that you do (and don’t do) and how working within the limits of your priorities energizes your life, even when you’re not working.

Thanks for your reactions to these posts every week. Thanks too for continuing to recommend this newsletter to friends and colleagues. I’ll see you all next week.

This post was adapted from my August 4, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.


Filed Under: *All Posts, Building Your Values into Your Work Tagged With: Christopher Demuth, citizen work, creativity within limits, ethical framework, ethics, Greg Ip, limits, priorities, values, work ethic, work priorities

We’re All Acting Like Dogs Today

July 29, 2019 By David Griesing

Saul Steinberg in the New Yorker, January 12, 1976

I recently read that dogs—through the imperatives of evolution—have developed expressions that invite positive interactions from their humans like “What a good boy” or a scratch behind the ears whenever, say, Wally looks me in the eye with a kind of urgency. There’s urgency all right, because he’s after something more than just my words or my touch. 
 
The reward that he really wants comes after this hoped-for interaction. It’s a little squirt of oxytocin, a hormone and neuropeptide that strengthens bonding by making me, and then both of us together, feel good about our connection.
 
As you might expect, Wally looks at me a lot when I’m working and I almost always respond. How could anyone refuse that face? Besides, it gives whatever workspace I’m in a positive charge that can linger all day.

Social media has also learned that “likes”—or almost any kind of interaction by other people (or machines) with our pictures and posts—produces similar oxytocin squirts in those who are doing the posting. We’re not just putting something out there; we’re after something that’s both measurable and satisfying in return.

Of course, the caution light flashes when social media users begin to crave these bursts of chemical approval like Wally does, or to feel rejected when the “likes” aren’t coming fast enough. It’s a feedback loop from craving to approval. Will they like me as much as, or maybe more than, they did the last time I was here? That draw keeps us on these platforms longer than we want to stay and always brings us back for more.

Social scientists have been telling us for years that craving approval for our contributions (along with not wanting to miss out) drives social media and cell-phone addiction in young people under 25. They are particularly susceptible to its lures because the pre-frontal cortex in their brains, the so-called seat of good judgment, is still developing. Of course, the ability to determine what’s good and bad for you is also underdeveloped in many older people—I just never thought that included me.

So how I felt when I stopped my daily posting on Instagram three weeks ago came as a definite comeuppance. Until then I thought I had too much “good sense” to allow myself to be manipulated in these ways.

For the past 6 years, I’ve posted a photo on Instagram (or IG) almost every day. I told myself that regular picture-taking would make me look at the world more closely while, at the same time, making me better at capturing what I saw. It would give me a cache of visual memories about where I’d been and what I’d been doing, and posting on IG gave me a chance to share them with others.

In recent years, I’d regularly get around 50 “likes” for each photo along with upbeat comments from strangers in Yemen, Moscow and Beirut as well as from people I actually know. The volume and reach of approval wasn’t great by Rihanna standards, but as much as half of it would always come in the first few minutes after posting every day. I’d generally upload my images before getting out of bed in the morning, so for years now I’ve been starting my days with a series of “feel good” oxytocin bursts.

Of course, you know what happened next. My “cold turkey” from Instagram produced symptoms that felt exactly like withdrawal. It recalled the aftermath of cutting back on carbs a few years back or, after I was in the Coast Guard, nicotine. Noticeable. Physical. In the days that followed, I’d find myself repeatedly gazing over at my phone screen for notifications of likes or comments that were no longer coming. Or even worse, I’d explore identical-looking notifications for me to check other people’s pictures and stories, lures that felt like reminders of the boosts I was no longer getting. I felt “cut off” from something that had seemed both alive and necessary.

It’s one thing to read about social media or cell-phone addiction and accept its downsides as a mental exercise, quite another to feel withdrawal symptoms after quitting one of them.

Unlike the Food & Drug Administration, I didn’t need anything more than my own clinical trial to tell me about the forces that were at play here, because at the same time that IG owner Mark Zuckerberg is engineering what feels like my addiction to his platform, he is also targeting me with ads for things that (I’m sorry to say) I realized I was wanting much more frequently. That’s because Instagram was learning all along what I was interested in whenever I hovered over one of its ads or followed an enticing link.

In other words, I’d been addicted to soften me up for buying stuff that IG had learned I’m likely to want, in a retail exchange that effectively made IG and Mark Zuckerberg the middlemen in every sale. IG’s oxytocin machine had turned me into a captive audience who’d been intentionally rendered susceptible to buying whatever it was hawking.

That seems both manipulative and underhanded to me.

It’s one thing to write about “loss of autonomy” to the on-line tech giants; it is another to have felt a measure of that loss.

So where does this leave me, or any of us?

How do lawmakers and regulators limit (or prevent) subtle but nonetheless real chemical dependency when it’s induced by a tech platform?

Is breaking the ad-based business models that turn so many of us into captive buyers even possible in a market system that has used advertising to stoke sales for more than 200 years? Can our consumer-oriented economy turn its back on what may be the most effective sales model ever invented?

To think that we are grappling with either of these questions today would be an illusion.

The U.S. Federal Trade Commission has just fined Facebook (which is IG’s owner) for failing to implement and enforce the privacy policies that it had promised to implement and enforce years ago. The FTC also mandated oversight of Zuckerberg personally: unlike the CEOs of other public companies, Zuckerberg has effective ownership control of Facebook, so his board of directors can’t really hold his feet to the fire. But neither the fine nor this new oversight mechanism challenges the company’s underlying business model, which is to (1) induce an oxytocin dependency in its users; (2) gather their personal data while they are feeling good by satisfying their cravings; (3) sell their personal data to advertisers; and (4) profit from the ads that are aimed at users who either don’t know or don’t care that they are being seduced in this way.

Recently announced antitrust investigations are also aimed at different problems. The Justice Department, FTC and Congress will be questioning the size of companies like Facebook and their dominance among competitors. One remedy might break Facebook into smaller pieces (like undoing its 2012 purchase of Instagram). However, these investigations are not about challenging a business model that induces dependency in its users, eavesdrops on their personal behavior both on-site and off of it, and then turns them into consumers of the products on its shelves. The best that can be hoped for is that some of these dominant platforms may be cut down to size and have some of their anti-competitive practices curtailed.

Even the data-privacy initiatives that some are proposing are unlikely to change this business model. Their most likely result is that users who want to restrict access to, and use of, their personal information will have to pay for the privilege of utilizing Facebook or Google or migrate to new privacy-protecting platforms that will be coming on-line. I profiled one of them, called Solid, on this page a few weeks back.

Since it looks like we’ll be stuck in this brave new world for a while, why does it matter that we’re being misused in this way?

Personal behavior has always been influenced by whatever “the Joneses” were buying or doing next door (if you were desperate enough to keep up with them). In high school you changed what you were wearing or who you were hanging out with if you wanted to be seen as one of the cool kids. Realizing that your hero, James Bond, is wearing an Omega watch might make you want to buy one too. But the influence to buy or to imitate that I’m describing here with Instagram feels new, different and more invasive, like we’ve entered the realm of science fiction.

Social media companies like Facebook and Instagram are using psychological power that we’ve more or less given them to remove some of the freedom in our choices so that they, in turn, can make Midas kingdoms of money off of us. And perhaps their best trick of all is that you only feel the ache of the dependency that kept you in their rabbit holes—and how they conditioned you to respond once you were in them—after you decide to leave.

Saul Steinberg in the New Yorker, November 16, 1968

Maybe the scariest part of this was my knowing better, but acquiescing anyway, for all of those years. 
 
It’s particularly alarming given my belief that autonomy and generosity are the most important qualities that I have.
 
I guess I had to feel what had happened to me in order to understand the subtlety of my addiction, the loss of freedom that my cravings for connection had induced, and my susceptibility to being used, against my will, by strangers for their own, very different purposes.
 
By delivering “warm and fuzzies” every day and getting me to stay for their commercials, Instagram became my small experience of mind control and Big Brother.
 
Over the past few weeks, when I’ve seen people looking for something in their phones, I’ve thought differently about what they’re doing. That’s because I still feel some of the need for what they may be looking for too.
 
It gives a whole new meaning to “the dog days” this summer.

+ + +

I’d love to hear from you if you’ve had a similar experience with a social network like Facebook or Instagram. If we don’t end up talking before then, I’ll see you next week.

This post was adapted from my July 28, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Building Your Values into Your Work, Daily Preparation, Using Humor Effectively, Work & Life Rewards Tagged With: addiction and withdrawal, addiction to social media, Big Brother, dog days, facebook, Instagram, manipulation, mind control, oxytocin, prevention, regulation, safeguards, Saul Steinberg, seat of good judgment, social media, social networks

A Course Correction for the World Wide Web

July 15, 2019 By David Griesing

Pink shock and emerald green in the back yard

Emily was here for breakfast on Thursday and I had the morning’s news on public radio—the same stories staring at me from the front page of my newspaper—and she said with millennial weariness: Why are you listening to that?
 
It was a good question, and one I often answer for myself by turning it off because it’s mostly journalist shock, outrage or shame about whatever the newsmakers think is going on. Who needs their sense of urgency in those first moments when you’re still trying to figure out whether you’re fully conscious or even alive?
 
On the other hand, short ventures into my yard quickly provide more hopeful messages. It’s the early summer flush, fueled by plenty of rain, and everything is still emerald green. Summer is telling different stories than the radio, sees different horizons, including the one some kind of watermelon sprawl is trying to reach with its tentacles. These co-venturers aren’t fretting about the future, they’re claiming it by inches and feet, or celebrating it with explosions in the air.
 
While shock, outrage or shame can push you to do good work, it’s hope that sustains it by giving it directions, goals, and better horizons. Everything around the creeping reality of surveillance capitalism triggers all those negative feelings and keeps me snapping at its purveyors with my canines because—well—because it deserves to be pierced and wounded.
 
But then what?
 
That’s where others who have shared these angry and disgusted reactions start showing me more hopeful responses in their own good work–the productive places where gut reactions sometimes enable you to go–and where my radio, on most mornings, provides little if any guidance (ok, so now what?).

In the early days of the internet, the geeks and tinkerers in their basements and garages had utopian dreams for this new way of communicating with one another and sharing information. In the thirty-odd years that have followed, many of those creative possibilities have been squandered. What we’ve gotten instead are dominant platforms that are fueled by their sale of our personal data. They have colonized and monetized the internet not to share its wealth but to hoard whatever they can take for themselves.
 
One would be right in thinking that many of the internet’s inventors are horrified by these developments, that some of them have expressed their shock, outrage and shame, and that a few have ridden these emotions into a drive to find better ways to utilize this world-changing technology. Perhaps first among them is Tim Berners-Lee.

Like some of my backyard’s denizens, he’s never lost sight of the horizons that he saw when he first poked his head above the ground. He also feels responsible for helping to set right what others have gotten so woefully wrong after he made his first breathtaking gift to us thirty years ago.

Angel trumpets

1.         The Inventor of the Internet

At one point the joke was that Al Gore had invented the internet but, in fact, it was Tim Berners-Lee who gave us the web that runs on top of it. It’s been three decades since he gathered the critical components, linked them together, and called his creation “the world wide web.” Today, however, he’s profoundly disconcerted by several of the directions that his creation has taken and he aims to do something about it.
 
In 1989, Berners-Lee didn’t sell his original web architecture and the protocols he assembled or attempt to get rich from them. He didn’t think anyone should own the internet, so no patents were ever obtained or royalties sought. The operating standards, developed by a consortium of companies he convened, were also made available to everyone, without cost, so the world wide web could be rapidly adopted. In 2014, the British Council asked prominent scientists, academics, writers and world leaders to choose the cultural moments that had shaped the world most profoundly in the previous 80 years, and they ranked the invention of the World Wide Web number one. This is how they described Berners-Lee’s invention:

The fastest growing communications medium of all time, the internet has changed the shape of modern life forever. We can connect with each other instantly, all over the world.

Because he gave it away with every good intention, perhaps Berners-Lee has more reasons than anyone to be concerned about the poor use that others have made of it. Instead of remaining the de-centralized communication and information sharing platform he envisioned, the internet still isn’t available everywhere, has frequently been weaponized, and is increasingly controlled by a few dominant platforms for their own private gain. But he’s also convinced that these ill winds can be reversed.
 
He reads and shares an open letter every year on the anniversary of the internet’s creation. His March 2018 and March 2019 letters lay out his primary concerns today. 
 
Last year, Berners-Lee renewed his commitment “to making sure the web is a free, open, creative space – for everyone. That vision is only possible if we get everyone online, and make sure the web works for people [instead of against them].” After making proposals that aim to expand internet access for the poor (and for poor women and girls in particular), he discusses various ways that the web has failed to work “for us.”

What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared….the fact that power is concentrated among so few companies has made it possible to weaponise the web at scale. In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data.

Additionally troubling is the fact that we’ve left these same companies to police themselves, something they can never do effectively given their incentives to maximize profits instead of social goods. “A legal or regulatory framework that accounts for social objectives may help ease those tensions,” he says.
 
Berners-Lee sees a similar misalignment of incentives between the tech giants and the users they have herded into their platforms.

Two myths currently limit our collective imagination: the myth that advertising is the only possible business model for online companies, and the myth that it’s too late to change the way platforms operate. On both points, we need to be a little more creative.
 
While the problems facing the web are complex and large, I think we should see them as bugs: problems with existing code and software systems that have been created by people – and can be fixed by people. Create a new set of incentives and changes in the code will follow. …Today, I want to challenge us all to have greater ambitions for the web. I want the web to reflect our hopes and fulfill our dreams, rather than magnify our fears and deepen our divisions.
 
As the late internet activist, John Perry Barlow, once said: “A good way to invent the future is to predict it.” It may sound utopian, it may sound impossible to achieve… but I want us to imagine that future and build it.

In March, 2018, most of us didn’t know what Berners-Lee had in mind when he talked about building.
 
This year’s letter mostly elaborated on last year’s themes. In addition to governments “translating laws and regulations for the digital age,” he calls on the tech companies to be a constructive part of the societal conversation (while never mentioning the positive role that their teams of Washington lobbyists might play). In other words, it’s more of a plea, or an attempt to shame them into action, since profits instead of the public interest remain their primary motivator. It is also unclear what he expects from government leaders and regulators as politics becomes more polarized, but he is plainly calling on the web’s theorizers, inventors and commentators and on its billions of users to pitch in and help.
 
Berners-Lee proposes a new Contract for the Web, a global collaboration that was launched at the Web Summit in Lisbon last November, bringing together those:

who agree we need to establish clear norms, laws and standards that underpin the web. Those who support it endorse its starting principles and together we are working out the specific commitments in each area. No one group should do this alone, and all input will be appreciated. Governments, companies and citizens are all contributing, and we aim to have a result later this year.

It’s like the founding spiritual leader convening the increasingly divergent members of his flock before setting out on the next leg of the journey.

The web is for everyone, and collectively we hold the power to change it. It won’t be easy. But if we dream a little and work a lot, we can get the web we want.

In the meantime however, while a new Contract for the Web is clearly necessary, it is not where Berners-Lee is pinning all of his hopes.

The seed came from somewhere and now it’s (maybe) making watermelons

2.         An App for an App

The way that the internet was created, any webpage should be accessible from any device that has a web browser, including a smart phone, a personal computer or even an internet-enabled refrigerator. That kind of free access is blocked, however, when the content or the services are locked inside an app and the app distributor (such as Google or Facebook) controls where and how users interact with “what’s inside.” As noted recently in the Guardian: “the rise of the app economy fundamentally bypasses the web, and all the principles associated with it, of openness, interoperability and ease of access.”
 
On the other hand, perhaps the web’s greatest strength has been the ability of almost anyone to build almost anything on top of it. Since Berners-Lee built the web’s foundation and its first couple of floors, he’s well-positioned to build an alternative that provides the openness, interoperability and ease of access that has been lost while also serving the public’s interest in principles like personal data privacy. At the same time that he has been sponsoring a global quest for new standards to govern the internet, Berners-Lee has also been building an alternative infrastructure on top of the internet’s common foundation.
 
One irony is that he’s building it with a new kind of app.
 
Last September, Berners-Lee announced a new, open-source, web-based infrastructure called Solid that he has been working on quietly with colleagues at MIT for several years. “Open-source” means that once the rudimentary structures are made public, anyone can contribute to that infrastructure’s web-based applications. Making the original internet free and widely available led to its rapid adoption, and Berners-Lee is plainly hoping that “open source” will have the same impact on Solid. Shortly after his announcement, an article in Tech Crunch reported that open-source developers were already pouring into the Solid platform “in droves.” As Fast Company reported at the time, Berners-Lee’s objective for Solid, and the company behind it called Inrupt, was “to turbocharge a broader movement afoot, among developers around the world, to decentralize the web and take back power from the forces that have profited from centralizing it.” Like a second great awakening.
 
First and foremost, the Solid web infrastructure is intended to give people back control of their personal data on-line. Every data point that’s created in or added to a Solid software application exists in a Solid “pod,” which is an acronym for “personal on-line data store” that can be kept on Solid’s server or anywhere else that a user chooses. Berners-Lee previewed one of the first Solid apps for the Fast Company reporter after his new platform was announced:

On his screen, there is a simple-looking web page with tabs across the top: Tim’s to-do list, his calendar, chats, address book. He built this app–one of the first on Solid–for his personal use. It is simple, spare. In fact, it’s so plain that, at first glance, it’s hard to see its significance. But to Berners-Lee, this is where the revolution begins. The app, using Solid’s decentralized technology, allows Berners-Lee to access all of his data seamlessly–his calendar, his music library, videos, chat, research. It’s like a mashup of Google Drive, Microsoft Outlook, Slack, Spotify, and WhatsApp.

The difference is that his (or your) personal information is secured within a Solid pod from others who might seek to make use of it in some way.
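To make that architecture a little more concrete, here is a minimal, purely hypothetical sketch in TypeScript. It does not use the real Solid libraries; the Pod class, the resource names and the grant methods are my own invention for illustration only. What it tries to show is the basic inversion Solid describes: the data lives in a store the user owns, and an app can only read or write what the owner has explicitly permitted.

```typescript
// Hypothetical illustration only: not the real Solid API.
// The point is the inversion of control: data lives in a pod the
// user owns, and an app must be granted permission to touch it.

type Resource = "calendar" | "contacts" | "todo";

class Pod {
  private data = new Map<Resource, unknown[]>();
  private grants = new Map<string, Set<Resource>>(); // appId -> resources it may access

  constructor(public readonly owner: string) {}

  // The pod's owner decides which app can use which resource.
  grant(appId: string, resource: Resource): void {
    if (!this.grants.has(appId)) this.grants.set(appId, new Set());
    this.grants.get(appId)!.add(resource);
  }

  revoke(appId: string, resource: Resource): void {
    this.grants.get(appId)?.delete(resource);
  }

  write(appId: string, resource: Resource, item: unknown): void {
    this.assertAllowed(appId, resource);
    if (!this.data.has(resource)) this.data.set(resource, []);
    this.data.get(resource)!.push(item);
  }

  read(appId: string, resource: Resource): unknown[] {
    this.assertAllowed(appId, resource);
    return [...(this.data.get(resource) ?? [])];
  }

  private assertAllowed(appId: string, resource: Resource): void {
    if (!this.grants.get(appId)?.has(resource)) {
      throw new Error(`${appId} has no permission for ${resource} in ${this.owner}'s pod`);
    }
  }
}

// A calendar app only sees what the owner has granted it.
const pod = new Pod("tim");
pod.grant("calendar-app", "calendar");
pod.write("calendar-app", "calendar", { title: "MIT meeting", when: "2019-07-15" });
console.log(pod.read("calendar-app", "calendar")); // works
// pod.read("ad-network", "calendar");              // would throw: no grant
```

The design point in this sketch is that revoking a grant cuts an app off immediately, because the data never left the owner’s pod in the first place.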
 
Inrupt is the start-up company that Berners-Lee and John Bruce launched to drive development of Solid, secure the necessary funding and transform Solid from a radical idea into a viable platform for businesses and individuals. According to Tech Crunch, Inrupt is already gearing up to work on a new digital assistant called Charlie that it describes as “a decentralized version of Alexa.”
 
What will success look like for Inrupt and Solid? A Wired magazine story last February described it this way:

Bruce and Berners-Lee aren’t waiting for the current generation of tech giants to switch to an open and decentralised model; Amazon and Facebook are unlikely to ever give up their user data caches. But they hope their alternative model will be adopted by an increasingly privacy-aware population of web users and the organisations that wish to cater to them. ‘In the web as we envision it, entirely new businesses, ecosystems and opportunities will emerge and thrive, including hosting companies, application providers, enterprise consultants, designers and developers,’ Bruce says. ‘Everyday web users will find incredible value in new kinds of apps that are impossible on today’s web.’

In other words, if we dream a little and work a lot, we can get the web that we want. 

+ + + 

At this stage in his life (Berners-Lee is 64) and given his world-bending accomplishments, he could have retired to a beach or mountaintop somewhere to rest on his laurels, but he hasn’t. Instead, because he can, he heeds the call of his discomfort and is diving back in to champion his original vision. It’s the capability and commitment, hope and action that are the arc of all good work.

Telling him that Solid is a pipe-dream would be like telling my backyard encouragers to stop shouting, trumpeting and fruiting.

This post was adapted from my July 14, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Entrepreneurship, Heroes & Other Role Models, Work & Life Rewards Tagged With: acting on hopes, Contract for the Web, data privacy, entrepreneurship, Inrupt, misalignment of incentives, personal online data store, Solid, Tim Berners-Lee

Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled, personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the right of privacy that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on to these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything that they need to know about us?

Overwhelmed by the draw of  “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.”  Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to relinquish the advantages of these platforms when they are unavailable elsewhere. 
 
So is there really “no way out”?  
 
A crescendo of voices is gradually finding a way, and they are coming at it from several different directions.
 
In places like Toronto, London, Helsinki, Chicago and Barcelona, policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1.         Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past 6 months, I’ve talked repeatedly about smart-city technologies and one of you reached out to me this week wondering:  “What (exactly) are these new “technologies”?”  (Thanks for your question, George!).  
 
As a general matter, smart-city technologies gather and analyze information about how a city functions, while improving urban decision-making around that new information. Throughout, these data-gathering,  analyzing, and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
 
Of course, the potential benefits of greater or more equitable access to city services, as well as their optimized delivery, are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that’s currently being wasted, permitting their reinvestment in areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve been extolling the optimism that drove Toronto to launch its smart-cities initiative called Quayside, and tracking how its debate has entered a stormy patch more recently. Amidst the finger pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-undisclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a City-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart cities technology vendors to announce its intentions up front about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective– why is the AI needed and what outcomes is it intended to enable?
  • Use– in what processes and circumstances is the AI appropriate to be used?
  • Impacts– what impacts, good and bad, could the use of AI have on people?
  • Assumptions– what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data– what data is/was the AI trained on, and what are their limitations and potential biases?
  • Inputs– what new data does the AI use when making decisions?
  • Mitigation– what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics– what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight– what human judgment is needed before acting on the AI’s output and who is responsible for ensuring its proper use?
  • Evaluation– how and by what criteria will the effectiveness of the AI in this smart-city system be assessed and by whom?

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, the initial ventilation process takes a long, hard time. Moreover, it is difficult (and maybe impossible) to conduct if negotiations with the technology vendor are on-going or that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational tech-driven opportunities are presented to the public.

(AP Photo/David Goldman)

2.         A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process, where people who never formed a full-blown preference on an issue like personal data privacy (or were simply reluctant to express it because the trade-offs for “free” platforms seemed acceptable to everybody else) now become more comfortable giving voice to their original qualms. In support, he cites a remarkable study about how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work-lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about the issues and to champion what is important to us in dialogue with the people who live and work alongside us. More grounds for protecting our personal information are coming out of the smart-cities debate, and we are already deciding where new privacy lines should be drawn around us.

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: Ai, artificial intelligence, Cass Sunstein, dataveillance, democracy, how change happens, norms, personal data brokers, personal privacy, privacy, Quayside, Sidewalk Labs, smart cities, Smart City, surveillance capitalism, Toronto, values

Dissenting Voices Never Fall On Deaf Ears

June 24, 2019 By David Griesing

Hong Kong could still swallow the dragon. 

A cover story in the Wall Street Journal today was called “Hong Kong’s Flickering Hopes.” “Flickering” because to Gerard Baker, the Journal’s “editor at large,” it seems inevitable that Hong Kong’s rule of law and civic traditions—its utterly unique experiment in Asian democracy—will eventually be swallowed by the giant that surrounds it.

On the other hand, I think it’s far from inevitable. 
 
Over the past month, millions of residents have taken to their City’s humid boulevards to protest an extradition proposal that would allow China’s resident proxies to arrest anyone in Hong Kong they want to, extraditing them to “justice” in the motherland—a “chilling effect” on critical thinking and democratic expression. But why, I wonder, isn’t the personal witness of millions of Hong Kong Chinese a ray of hope instead of a glimmer on the road to subjugation?
 
Hong Kong could still confound us because, despite having the hottest, wettest and least hospitable climate for masked protest I can imagine, millions of its citizens took to the streets to voice their dissent about this latest erosion of their rights to speak, assemble and disagree. Despite sweat and dehydration, pepper spray and water cannon, they have also managed to protest peacefully so that their resounding “No” conveyed a Confucian depth of confidence and resolve. Constructive instead of destructive.
 
On the other hand, and demonstrating even more of their British influence, the Hong Kong Chinese have talked and written their rationales for “No” everywhere that they could find a platform for doing so—patiently, painstakingly and exhaustively—although the meat of their dissenting opinions has received little attention in the press. And finally, the City’s residents have sketched out futures that are not merely a return to the status quo that existed twenty years ago but instead are thoughtful re-workings of Deng Xiaoping’s “One Country, Two Systems.” Despite a divergence of details, their over-arching visions have one thing in common. All imagine a unified China that’s built around the pounding, life-giving heart of Hong Kong today.
 
Will (1) their acts of dissent, (2) the personal risks these dissenters have taken and the moral commitment behind them, and (3) their hopeful vision of a “different and better outcome” persuade the more than a billion non-Hong Kong Chinese to reconsider their acquiescence to “the Great Firewall,” the desirability of “good citizenship scores,” and their subjugation to a total surveillance state, in favor of something more like what these dissenters have in mind?

Could Hong Kong swallow the dragon?

One thing is absolutely certain: the steady, confident voices of Hong Kong’s Chinese dissenters are not falling on deaf ears. There are more than a billion of them, listening or trying to listen.
 
(In case you don’t recognize it, the image above is of Tankman, a sole protester confronting Chinese state power in Tiananmen Square in 1989. The power of dissent. The power in a picture. Inviting us to imagine being that solitary Chinese man.)

Meanwhile, an old service station re-purposed as a coffee joint on Wilshire Boulevard in Los Angeles, photographed a couple of weeks ago.

1. Take a Sad Song and Make It Better

To all of you: I promise to get off this horse soon, I do, but there is a West Coast echo in this story of dissent too.
 
California, which is also the home of Silicon Valley, passed the toughest data privacy law in the US last year. (Its prohibitions and sanctions will go into effect at the beginning of 2020.) By giving individuals a way to protect themselves from the predations of surveillance capitalism, California’s leaders expressed their dissent from the failure of our national representatives to do the same. Disappointingly, the same non-response has come from most other state governments too. 
 
I’ve railed about data privacy repeatedly here because our personal information is being taken without our informed consent and used in ways that track us like animals (“These Tech Platforms Threaten Our Freedom”). I’ve argued that exchanging our personal data for “free” social networks like Facebook’s and “free” search engines like Google’s eliminates sources of potential income in a changing economy (“Blockchain Goes To Work”).  And I’ve at least begun to make the case that theft of our personal data undermines our personal autonomy (“Whose Values Will Save Us From Our Technology?”). There are important issues here, and outside of a few leading jurisdictions most policymakers have been neglecting them.
 
“Leading jurisdiction” is lawyer-speak for “being in the vanguard” or “a dissenter from the prevailing view.” These places have tired of everyone else’s silence on an issue of importance that demands attention. They have talked about the values that drove them to raise their voices, and have painted a picture that speaks to how the future will be better—or at least more manageable—than it is today with their new laws or regulations on the books. They’re holding up their end of the conversation by trying to get their fellow states and the rest of the nation engaged in it.
 
California lawmakers passed a data privacy bill in 2018 that, among other things, includes an expansive definition of what constitutes personal information, gives the state’s consumers the right to prohibit the sale of their data to third parties, and allows them to “opt out” of sharing their personal data altogether. It’s common for a new law’s effective date to be a year or more after passage so that everyone affected can prepare for its impacts. As interesting as anything about California’s recent action in support of data privacy has been Congress’s reaction.

According to a news report today:

House Minority Leader Kevin McCarthy backed the idea of national legislation to safeguard consumers’ data privacy, adding a prominent GOP voice to the bipartisan support in Congress for tackling how technology companies amass and use that information.

‘There needs to be national-level regulation, not state-by-state on what we’re going to do about privacy,’ Mr. McCarthy, a California Republican, said in an interview…
 
A data privacy law passed last year in California helped spur action from both Mr. McCarthy and a bipartisan group of lawmakers working on privacy legislation in the Senate.

As a result of California’s commitment and template for action, Congress is wrestling with its divisiveness and dysfunction to pass a federal data privacy law that would take effect before California’s and avoid a patchwork of state regulation. From one vantage point, it’s a contest to see how many clowns can get into the phone booth before the bell goes off. But from another, more serious perspective: where would Congress be today on data privacy if a leading jurisdiction like California had failed to act?
 
A similar dynamic is currently at play involving state laws (like California’s again) that are aimed at reducing the likely causes of climate change. The impact of actions like this on state residents is immediate and direct, but it doesn’t end there. According to two scholars who have studied public opinion around climate change, even those who have not yet acted are reluctant to be left behind. This is from another recent post:

Egan and Mullin cite research that proves ‘the very strong correlation between state policy and public opinion’ and argue that states like California and New York are already influencing the national policy debate by acting alone. While the authors don’t say, I’d argue that it’s harder for fence-sitters on climate change to continue to remain uncommitted when majorities in other states are investing their tax dollars in targeted policies. Those ‘watching but not yet acting’ are also susceptible to committing more deeply if the advocate they’re listening to avoids the partisan bloodletting while persuading them with arguments that have already succeeded in these vanguard states.

When a commitment is grounded in values and acted upon (by speaking up, passing a law, taking any kind of objective step) to help realize a better future for everyone, others in the room, state or nation are more likely to be mobilized to define their own positions, to move the conversation forward, and sometimes to reach a new consensus that would never have been possible if those in the vanguard hadn’t taken a stand for their beliefs in the first place.

An apartment building by Herzog & de Meuron in Tribeca

2. Taking a Stand Is Like Playing Jenga

Kids love the game Jenga. Many adults do too. 
 
To play, you begin with a vertical tower of interlocking wooden pieces that are slotted together to create a stable structure. On each successive turn, a player attempts to remove one of the slotted pieces without destabilizing the structure and causing the remaining pieces to crash into a heap on the floor. 
 
I’m convinced that the explosive sound of crashing pieces is key to the enjoyment of the game. When you lose (or win), you do so shatteringly. There is no question that what you did made a difference.
 
The Tribeca apartment building above looks like a mid-game Jenga tower, after the removal of some pieces has caused others to shift and jut out a bit from the sides. 
 
To harness the metaphor: the original Jenga tower is where prevailing opinion always starts. The room/community/state/nation is for something or against something. Then, in each successive turn, dissenters (along with the other players) modify the prevailing view.
 
Dissenters, leading jurisdictions, those who can’t keep their convictions to themselves are the key pieces that get removed. Every time they “make their case,” other pieces in the Jenga tower are affected. Sometimes you can actually see the effect, because certain pieces jut out a little or a lot, their minds visibly beginning to change. Other times the change is imperceptible, but some pieces in the pile have become less stable as their original certainty has been clouded by doubt. Eventually, as the monolith begins to teeter, the moment of truth arrives and one final player’s testimony makes the original certainties dissolve.
 
Anyone’s turn can shatter the stability or inertia of the prevailing view.

Everyone’s turn affects other pieces either perceptibly or imperceptibly.

Anyone’s dissent can make the original certainties come crashing to the floor.

Anyone’s action can cause the crash that finally allows a different, better future to be built. 

Dissenting voices like these are never as lonely or futile as they seem.

And they never fall on deaf ears.

This post was adapted from my June 23, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Heroes & Other Role Models Tagged With: California, California data privacy law, changing hearts, changing minds, dissent, Jenga, personal action, taking a stand, Tankman, Tiananmen Square

