
Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing Leave a Comment

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the privacy rights that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on to these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything that they need to know about us?

Overwhelmed by the draw of  “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.”  Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to relinquish the advantages of these platforms when they are unavailable elsewhere. 
 
So is there really “no way out”?  
 
A growing chorus of voices is gradually finding a way out, coming at the problem from several different directions.
 
In places like Toronto, London, Helsinki, Chicago and Barcelona, policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1.         Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past six months, I’ve talked repeatedly about smart-city technologies, and one of you reached out to me this week wondering: “What (exactly) are these new ‘technologies’?” (Thanks for your question, George!).
 
As a general matter, smart-city technologies gather and analyze information about how a city functions and improve urban decision-making based on that new information. Throughout, these data-gathering, analyzing and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
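To make that gather-analyze-decide loop concrete, here is a minimal, hypothetical sketch. Everything in it is invented for illustration (the intersections, the counts, the threshold); it is not drawn from any actual smart-city system:

```python
# Illustrative sketch only: a toy "smart city" loop that turns sensor readings
# into an operational decision. All names and numbers are invented.
from statistics import mean

# Hypothetical readings: vehicles per minute at three downtown intersections
traffic_counts = {"fifth_and_main": 42, "king_st_west": 87, "harbor_blvd": 15}

def recommend_signal_timing(counts, threshold=60):
    """Flag intersections whose volume exceeds the threshold for longer green phases."""
    return {
        intersection: ("extend_green" if volume > threshold else "default_cycle")
        for intersection, volume in counts.items()
    }

if __name__ == "__main__":
    print(f"average volume: {mean(traffic_counts.values()):.1f} vehicles/min")
    for intersection, action in recommend_signal_timing(traffic_counts).items():
        print(f"{intersection}: {action}")
```

A real system would replace the invented threshold with a model that is continually retrained on new data, which is exactly where the questions about what that data contains, and who controls it, begin.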
 
Of course, the potential benefits of greater or more equitable access to city services, as well as their optimized delivery, are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that’s currently being wasted, permitting their reinvestment into areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve been extolling the optimism that drove Toronto to launch its smart-cities initiative, Quayside, and describing how its debate has entered a stormy patch more recently. Amidst the finger-pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-undisclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a city-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart-city technology vendors to state up front its intentions about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective– why is the AI needed and what outcomes is it intended to enable?
  • Use– in what processes and circumstances is the AI appropriate to be used?
  • Impacts– what impacts, good and bad, could the use of AI have on people?
  • Assumptions– what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data– what data is/was the AI trained on, and what are its limitations and potential biases?
  • Inputs– what new data does the AI use when making decisions?
  • Mitigation– what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics– what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight– what human judgment is needed before acting on the AI’s output and who is responsible for ensuring its proper use?
  • Evaluation– how and by what criteria will the effectiveness of the AI in this smart-city system be assessed and by whom?
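One way to keep the answers to these questions from evaporating after a single public meeting is to record them in a structured review that follows each system from procurement through evaluation. The sketch below is purely illustrative; the field names and sample entries are my own invention, not Smart London’s:

```python
# Illustrative only: recording answers to the ten questions as a structured
# review that travels with each AI system. Field names are invented for this
# sketch and are not taken from Smart London's framework.
from dataclasses import dataclass

@dataclass
class AIReview:
    system_name: str
    objective: str                # why the AI is needed, intended outcomes
    approved_uses: list           # processes/circumstances where use is appropriate
    anticipated_impacts: list     # good and bad impacts on people
    assumptions_and_limits: list  # assumptions, limitations, potential biases
    training_data_sources: list   # what the AI was trained on
    runtime_inputs: list          # new data used when making decisions
    mitigations: list             # steps taken against negative impacts
    ethics_assessment: str        # does it serve citizen-driven needs?
    human_oversight: str          # who reviews outputs and is accountable
    evaluation_plan: str          # how, by whom and against what criteria

review = AIReview(
    system_name="adaptive traffic signals (hypothetical)",
    objective="reduce transit delays on priority corridors",
    approved_uses=["signal timing on arterial streets"],
    anticipated_impacts=["shorter transit times", "possible spillover congestion"],
    assumptions_and_limits=["sensor coverage is representative of all neighborhoods"],
    training_data_sources=["historical city traffic counts"],
    runtime_inputs=["real-time intersection sensor feeds"],
    mitigations=["quarterly audit of corridor selection for bias"],
    ethics_assessment="pending citizen panel review",
    human_oversight="traffic operations manager signs off on timing changes",
    evaluation_plan="annual public report against transit-delay targets",
)
```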

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, this initial ventilation process takes time and hard work. Moreover, it is difficult (and maybe impossible) to conduct while negotiations with the technology vendor are on-going or that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational, tech-driven opportunities are presented to the public.


2.         A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process, where people who never formed a full-blown preference on an issue like personal data privacy (or were simply reluctant to express it because the trade-offs for “free” platforms seemed acceptable to everybody else) now become more comfortable giving voice to their original qualms. In support, he cites a remarkable study about how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work-lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about the issues and to champion what is important to us in dialogue with the people who live and work alongside us. More grounds for protecting our personal information are coming out of the smart-cities debate, and we are already deciding where new privacy lines should be drawn around us. 

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.


The Human Purpose Behind Smart Cities

March 24, 2019 By David Griesing Leave a Comment

It is human priorities that should be driving Smart City initiatives, like the ones in Toronto profiled here last week. 

Last week’s post also focused on a pioneering spirit in Toronto that many American cities and towns seem to have lost. While we entrench ourselves in the moral righteousness of our own sides of the debate—including, for many, a distrust of collective governance, regulation and taxation—we drift towards an uncertain future instead of claiming one that can be built on values we actually share. 

In its King Street and Quayside initiatives, Toronto is actively experimenting with the future it wants based on its residents’ commitment to sustaining their natural environment in the face of urban life’s often toxic impacts.  They’re conducting these experiments in a relatively civil, collaborative and productive way—an urban role model for places that seem to have forgotten how to work together. Toronto’s bold experiments are also utilizing “smart” technologies in their on-going attempts to “optimize” living and working in new, experimental communities.

During a short trip this week, I got to see the leading edges of New York City’s new Hudson Yards community (spread over 28 acres with an estimated $25 billion price tag) and couldn’t help being struck by how much it catered to those seeking more luxury living, shopping and workspaces than Manhattan already affords. In other words, how much it could have been a bold experiment about new ways that all of its citizens might live and work in America’s first city for the next half-century, but how little it actually was. A hundred years ago, one of the largest immigrant migrations in history made New York City the envy of the world. With half of its current citizens being foreign-born, perhaps the next century, unfurling today, belongs to newer cities like Toronto.

Still, even with its laudable ambition, it will not be easy for Toronto and other future-facing communities to get their Smart City initiatives right, as several of you were also quick to remind me last week. Here is a complaint from a King Street merchant that one of you (thanks Josh!) found and forwarded, and it seems to cast what is happening in Toronto in a less favorable light than the one I had focused upon:

What a wonderful story. But as with [all of] these wonderful plans some seem to be forgotten. As it appears are the actual merchants. Google certainly a big winner here. Below an excerpt written by one of the merchants:
   
‘The City of Toronto has chosen the worst time, in the worst way, in the worst season to implement the pilot project. Their goal is clearly to move people through King St., not to King St. For years King St. was a destination, now it is a thoroughfare.
 
‘The goal of the King St. Pilot project was said to be to balance three important principles: to move people more effectively on transit, to support business and economic prosperity and to improve public space. In its current form, the competing principles seem to be decidedly tilted away from the economic well-being of merchants and biases efficiency over convenience. The casual stickiness of pedestrians walking and stopping at stores, restaurants and other merchants is lost.
 
‘Additionally, the [transit authority] TTC has eliminated a number of stops along King St., forcing passengers to walk further to enter and disembark streetcars, further reducing pedestrian traffic and affecting areas businesses. The TTC appears to believe that if they didn’t have to pick up and drop off people, they could run their system more effectively.
 
‘The dubious benefits of faster street car traffic on King St. notwithstanding, the collateral damage of the increased traffic of the more than 20,000 cars the TTC alleges are displaced from King St to adjoining streets has turned Adelaide, Queen, Wellington and Front Sts. into a gridlock standstill. Anyone who has tried to navigate the area can attest that much of the time, no matter how close you are you can’t get there from here.
 
‘Along with the other merchants of King St. and the Toronto Entertainment District we ask that Mayor Tory and Toronto council to consider a simple, reasonable and cost-effective alternative. Put lights on King St. that restrict vehicle traffic during rush hours, but return King St. to its former vibrant self after 7 p.m., on weekends and statutory holidays. It’s smart, fair, reasonable and helps meet the goals of the King St. pilot project. 

Two things about this complaint seemed noteworthy. The first is how civil and constructive this criticism is in a process that hopes to “iterate” as real-time impacts are assessed. It’s a tribute to Toronto’s experiments that they not only invite but are actually receiving feedback like this. Alas, the second take-away from Josh’s comment is far more nettlesome. “[However many losers there may be along the way:] Google certainly a big winner here.”

The tech giant’s partnership with Canada’s governments in Toronto raises a constellation of challenging issues, but it’s useful to recall that pioneers who dare to claim new frontiers always do so with the best technology that’s available. While the settling of the American West involved significant collateral damage (to Native Americans and Chinese migrants, to the buffalo and the land itself), it would not have been possible without existing innovations and new ones that these pioneers fashioned along the way. Think of the railroads, the telegraph poles, even something as low-tech as the barbed wire that was used to contain livestock. 

The problem isn’t human and corporate greed or heartless technology—we know about them already—but failing to recognize and reduce their harmful impacts before it is too late. The objective for pioneers on new frontiers should always be maximizing the benefits while minimizing the harms that can be foreseen from the very beginning instead of looking back with anger after the damage is done.

We have that opportunity with Smart City initiatives today.

Because they concentrate many of the choices that will have to be made when we boldly dare to claim the future of America again, I’ve been looking for a roadmap through the moral thicket in the books and articles that are being written about these initiatives today. Here are some of the markers that I’ve discovered.

Human priorities, realized with the help of technology

1.         Markers on the Road to Smarter and More Vibrant Communities

The following insights come almost entirely from a short article by Robert Kitchin, a professor at Maynooth University in Ireland. In my review of the on-going conversation about Smart Cities, I found him to be one of its most helpful observers.  

In his article, Kitchin discusses the three principal ways that smart cities are understood, the key promises smart initiatives make to stakeholders, and the perils to be avoided around these promises.

Perhaps not surprisingly, people envision cities and other communities “getting smarter” in different ways. One constituency sees an opportunity to improve both “urban regulation and governance through instrumentation and data-driven systems”–essentially, a management tool. A bolder and more transformative vision sees information and communication technology “re-configur[ing] human capital, creativity, innovation, education, sustainability, and management,” thereby “produc[ing] smarter citizens, workers and public servants” who “can enact polic[ies], produce better products… foster indigenous entrepreneurship and attract inward investment.” The first makes the frontier operate more efficiently while the second improves nearly every corner of it.

The third Smart City vision is “a counter-weight or alternative” to each of them. It wants these technologies “to promote a citizen-centric model of development that fosters social innovation and social justice, civic engagement and hactivism, and transparent and accountable governance.” In this model, technology serves social objectives like greater equality and fairness. Kitchin reminds us that these three visions are not mutually exclusive. It seems to me that the priorities embedded in a community’s vision of a “smarter” future could include elements of each of them, functioning like checks and balances, in tension with one another. 

Smart City initiatives promise to solve pressing urban problems, including poor economic performance; government dysfunction; constrained mobility; environmental degradation; a declining quality of life, including risks to safety and security; and a disengaged, unproductive citizen base. Writes Kitchin:

the smart city promises to solve a fundamental conundrum of cities – how to reduce costs and create economic growth and resilience at the same time as producing sustainability and improving services, participation and quality of life – and to do so in commonsensical, pragmatic, neutral and apolitical ways.

Once again, it’s a delicate balancing act among a range of countervailing interests and constituencies.
 
The perils of Smart Cities should never overwhelm their promise in my view, but urban pioneers should always have them in mind (from planning through implementation) because some perils only manifest themselves over time. According to Kitchin, the seven dangers in pursuing these initiatives include:
 
–taking “a ‘one size fits all’ approach, treating cities as generic markets and solutions [that are] straightforwardly scalable and movable”;
 
–assuming that initiatives are “objective and non-ideological, grounded in either science or commonsense.” You can aim for these ideals, but human and organizational preferences and biases will always be embedded within them;
 
–believing that the complex social problems in communities can be reduced to “neatly defined technical problems” that smart technology can also solve. The ways that citizens have always framed and resolved their community problems cannot be automated so easily. (This is also the thrust of Ben Green’s The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, which will be published by MIT Press in April. In it he argues for “smart enough” alternatives that are attainable with the help of technology but never reducible to technology solutions alone.);
 
–engaging with corporations that are using smart-city technologies “to capture government functions as new market opportunities.” One risk that a company like Google poses to a community like Toronto’s is that it might lock the city into its proprietary technologies and vendors over a long period of time, or use Toronto’s citizens’ data to gain business opportunities in other cities;
 
–becoming saddled with “buggy, brittle and hackable” systems that are ever more “complicated, interconnected and dependent on software” while becoming more resistant to manual fixes;
 
–becoming victimized by “pervasive dataveillance that erodes privacy” through practices like “algorithmic social sorting (whether people get a loan, a tenancy, a job, etc), dynamic pricing (whereby different people pay varying prices depending on their perceived customer value) and anticipatory governance using predictive profiling (wherein data precedes how a person is policed and governed).” Earlier this month, my post on popular on-line games like Fortnite highlighted the additional risk that invasive technologies can use the data they are gathering to change people’s behavior;
 
–and lastly, reinforcing existing power structures and inequalities instead of eroding or reconfiguring them.
 
While acknowledging the promise of Smart Cities at their best, Kitchin closes his article with this cautionary note:

the realities of implementation are messier and more complex than the marketing hype of corporations or city managers portray and there are a number of social, political, ethical and legal concerns with respect to the kind of society smart city initiatives seek to create.  As such, whilst networked urbanism has benefits, it also poses challenges and risks that are often little explored or legislated for ahead of implementation. Indeed, the pace of development and rollout of smart city technologies is proceeding well ahead of wider reflection, critique and regulation.

Putting the cart before a suitably-designed horse is a problem with all new and seductive technologies that get embraced before their harms are identified or can be addressed—a quandary that was also considered here in a post called “Looking Out for the Human Side of Technology.”

2.         The Value of Our Data

A few additional considerations about the Smart City are also worth bearing in mind as debate about these initiatives intensifies.

In a March 8, 2019 post, Kurtis McBride wrote about two different ways “to value” the data that these initiatives will produce, and his distinction is an important one. It’s a discussion that citizens, government officials and tech companies should be having, but unfortunately are not having as much as they need to.

When Smart City data is free to everyone, there is the risk that the multinationals generating it will merely use it to increase their power and profits in the growing market for Smart City technologies and services. From the residents’ perspective, McBride argues that it’s “reasonable for citizens to expect to see benefit” from their data, while noting that these same citizens will also be paying dearly for smart upgrades to their communities. His proposal on valuing citizen data depends on how it will be used by tech companies like Google or local service providers. For example, if citizen data is used:

to map the safest and fastest routes for cyclists across the city and offers that information free to all citizens, [the tech company] is providing citizen benefit and should be able to access the needed smart city data free of charge. 
 
But, if a courier company uses real-time traffic data to optimize their routes, improving their productivity and profit margins – there is no broad citizen benefit. In those cases, I think it’s fair to ask those organizations to pay to access the needed city data, providing a revenue stream cities can then use to improve city services for all. 

Applying McBride’s reasoning, an impartial body in a city like Toronto would need to decide whether Google has to pay for data generated in its Quayside community by consulting a benefit-to-citizens standard. Clearly, if Google wanted to use Quayside data in a Smart City initiative in, say, Colorado or California, it would need to pay Toronto for the use of its citizens’ information.
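Purely as an illustration of how such a benefit-to-citizens standard might be applied (the categories and the fee below are invented for this sketch, not McBride’s), the decision facing an impartial data trust could be reduced to something like this:

```python
# Illustrative sketch of a benefit-to-citizens standard as a decision rule a
# data trust might apply. The categories and the fee amount are invented here.
from dataclasses import dataclass

@dataclass
class DataAccessRequest:
    requester: str
    purpose: str
    broad_citizen_benefit: bool   # is the resulting service freely available to all citizens?
    used_outside_city: bool       # will the data also serve projects in other cities?

def access_fee(request, base_fee=10_000.0):
    """Return the fee (0 for free access) under a benefit-to-citizens standard."""
    if request.broad_citizen_benefit and not request.used_outside_city:
        return 0.0        # e.g., a free cycling-safety map offered to all residents
    return base_fee       # e.g., a courier firm optimizing its own routes for profit

print(access_fee(DataAccessRequest("open-data nonprofit", "cycling-safety map", True, False)))    # 0.0
print(access_fee(DataAccessRequest("courier company", "route optimization", False, False)))       # 10000.0
print(access_fee(DataAccessRequest("tech vendor", "smart-city bid in another city", True, True)))  # 10000.0
```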
 
Of course, addressing the imbalance between those (like us) who provide the data and the tech companies that use it to increase their profits and influence is not just a problem for Smart City initiatives, and changing the “value proposition” around our data is surely part of the solution. In her new book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Harvard Business School’s Shoshana Zuboff says that the familiar line “you’re the product if these companies aren’t paying you for your data” does not state the case powerfully enough. She argues that the big tech platforms are like elephant poachers and our personal data like those elephants’ ivory tusks. “You are not the product,” she writes. “You are the abandoned carcass.”
 
Smart City initiatives also provide a way to think about “the value of our data” in the context of our living and working, and not merely as the gateway to more convenient shopping, more addictive gaming experiences or “free” search engines like Google’s.

This post is adapted from my March 24, 2019 newsletter. Subscribe today and receive an email copy of future posts in your inbox each week.

