David Griesing | Work Life Reward Author | Philadelphia


Using AI to Help Produce Independent, Creative & Resilient Adults in the Classroom

September 10, 2025 By David Griesing

I learned something that made me smile this week. 

An innovator at the leading edge of American education and technology (or “ed-tech”) named Steve Hargadon picked up on a thesis I’d advanced some time ago in “The Amish Test & Tame New Technologies Before Adopting Them & We Can Learn How to Safeguard What’s Important to Us Too” and applied it to the use of AI in our classrooms today.

For both better and worse, we’ve let marketers like Google (in search), Facebook (in social media), and Apple (in smart phones) decide how we integrate their products into our lives—usually by dropping them on top of us and letting each new user “figure it out.”

But instead of being left with the hidden costs to our privacy, attention and mental health, we could decide how to maximize these products’ benefits and limit their likely harms before we get hooked on them: the types of assessments that groups like the Amish always undertake, as another innovator in this space (Wired co-founder Kevin Kelly) noted several years ago.

To further the discussion about our use of technology generally and ed-tech in particular, I’ll briefly review the conclusions in my Test & Tame post and summarize a more recent one (“Will AI Make Us Think Less or Think Better”), before considering Hargadon’s spot-on proposals in “Intentional Education with AI: The Amish Test and Generative Teaching.”

The traditional Pennsylvania Amish work their farms and small businesses at a surprisingly short distance from Philadelphia. When I venture out of town for an outing it’s often to Lancaster County, where the car I’m in is quickly cheek-to-jowl with a horse and buggy, and families hang their freshly washed clothes on lines extending from back doors instead of cramming them into drying machines. It’s hard to miss their strikingly different take on the “modern conveniences” that benefit as well as burden the rest of us. What Kelly and others pointed out was that the Amish manage their use of marvels like cars or dryers as a community, instead of as individuals.

I described the difference this way:

As consumers, we feel entitled to make decisions about tech adoption on our own, not wishing to be told by anybody that ‘we can buy this but can’t buy that,’ let alone by authorities in our communities who are supposedly keeping ‘what’s good for us’ in mind. Not only do we reject a gatekeeper between us and our ‘Buy’ buttons, there is also no Consumer Reports that assesses the potential harms of [new] technologies to our autonomy as decision-makers, our privacy as individuals, or our democratic way of life — no resource that warns us ‘to hold off’ until we can weigh the long-term risks against the short-term rewards. As a result, we defend our unfettered freedom until we start discovering just how terrible our freedom can be.

By contrast, the Amish hold elaborate “courtship rituals” with a new technology before deciding to embrace some or all of its features—for example, sharing use of the internet through a device that all can access when it’s needed INSTEAD OF owning your personal access via a smart phone you keep in your pocket. They reach a consensus like this through extensive testing of smart phone use & social media access within their community, appreciating over time the risks to “paying attention” generally, or “self-esteem among the young” more particularly, before a gatekeeper (like a bishop) decides what, if any, accommodation with these innovations seems “good” for all.

The community’s most important values are key to arriving at this “testing & taming” consensus. The Amish openly question whether the next innovation will strengthen family and community bonds or cause users to abandon them. They wonder about things as “small & local” as whether a new technology will enable them to continue to have every meal of the day with their families, which is important to them. And they ask whether a phone or social media platform will increase or decrease the quality of family time together, perhaps their highest priority. The Amish make tech use conform to their values, or they take a pass on its use altogether. As a result,  

the Amish are never going to wake up one day and discover that a generation of their teenagers has become addicted to video games; that smartphones have reduced everyone’s attention span to the next externally-generated prompt; or that surveillance capitalism has ‘suddenly’ reduced their ability to make decisions for themselves as citizens, shoppers, parents or young people.

When I considered Artificial Intelligence’s impacts on learning last month, I didn’t filter the pros & cons through any community’s moral lens, as in: what would most Americans say is good for their child to learn and how does AI advance or frustrate those priorities? Instead, I merely juxtaposed one of the primary concerns about AI-driven chatbots in the classroom with one of their most promising up-sides. On the one hand, when an AI tool like ChatGPT effectively replaces a kid’s thinking with its own, that kid’s ability to process information and think critically quickly begins to atrophy. On the other hand, when resource-rich AI tutors are tailored to students’ particular learning styles, we’re discovering that these students “learned more than twice as much as when they engaged with the same content during [a] lecture…with personalized pacing being a key driver of success.”

We’re also realizing that giving students greater control over their learning experience through “personalized on-demand design” has:

allowed them to ask as many questions as they wished and address their personal points of confusion in a short period of time. Self-pacing meant that students could spend more time on concepts they found challenging and move quickly through material they understood, leading to more efficient learning….

Early experience with AI tutors has also changed the teacher’s role in the classroom. While individualized tutoring by chatbots will liberate teachers to spend more time motivating students and being supportive when frustrations arise,

our educators will also need to supervise, tweak and even design new tutorials. Like the algorithms that adapt while absorbing new data, they will need to continuously modify their interventions to meet the needs of their students and maximize the educational benefits.

Admittedly, whether America’s teachers can evolve into supervisors and coaches of AI-driven learning in their classrooms—to in some ways, become “even smarter than [these] machines”— is a question “that will only be answered over time.”

Meanwhile, Steve Hargadon asks an even more fundamental question in his recent essay. Like the Amish, he wonders:

What is our most important priority for American students today, and how can these new AI capabilities help us to produce the adults that we want and that an evolving American community demands?

In what I call his “foggy window paintings,” photographer Jochen Muhlenbrink finds the clarity through the condensation (here and above). I borrowed his inspiration in a photo that I took of our back door one night a few years back (below).

Hargadon begins by acknowledging a lack of consensus in the American educational community, which startled me initially but which I (sadly) came to realize is all-too-true.

Unlike most Amish communities, American education is “a community of educators, students and stakeholders” in only the loosest sense. It’s also an old community, set in its ways, particularly when it comes to public education (or “the educating” that our tax dollars pay for). Writes Hargadon:

Here’s an uncomfortable truth: traditional schooling, despite promises of liberating young minds, has always excelled more at training compliance than fostering independent thinking. While we often claim otherwise, it’s largely designed to create standardized workers, not creative thinkers.

Unless we acknowledge this reality, we’ll miss what’s really at stake with AI adoption. Unexamined AI use in an unexamined education system will amplify these existing flaws, producing students who are even less self-directed and capable. The temptation for quick AI-generated answers, rather than wrestling with complex problems, threatens the very traits we want in our future adults: curiosity, agency, and resilience. (emphasis added)

If we examine the American education system and consider it as a kind of community, it quickly becomes apparent that it’s much more diverse and divided in its priorities than the Amish.

Moreover, because non-Amish Americans often seem to love their individual freedoms (including choosing “what’s good” for their children) more than the commitments they share (or what’s best for all), the American educational community has often seemed reluctant, if not resistant, to accepting the guidance or governance of a gatekeeper in their classrooms.

So while some of us prefer tech-tools that get students to the right answers (or at least the answers we’ll test for later), others prefer fostering a vigorous thinking process wherever it might lead. 

Hargadon, along with me and the AI-tutor developers I wrote about in July, clearly prefers what he calls “generative teaching,” or building the curious and resilient free-agents that we want our future adults to be. So let’s assume that we can gather the necessary consensus around this approach—if not for the flourishing of our children generally, then because an increasingly automated job market demands curiosity, resilience and agency for the jobs that will remain. Then “the Amish test” can be put into practice when evaluating AI tools in the classroom.

Instead of asking: Will this make teaching easier [for the teachers]?
Ask: Will this help students become more creative, self-directed, and capable of independent thought?

Instead of asking: Does this improve test scores?
Ask: Does this foster the character traits and thinking skills our students will need as adults?

With their priorities clear, parents and students (along with American education’s many other stakeholders) would now have a “test” or “standard” with which to judge AI-driven technologies. Do they further what we value most, or divert us from a goal that we finally share?

To this dynamic, Hargadon adds a critical insight. While I mentioned the evolving role of today’s teachers in the use of these tools, he proposes “teaching the framework” to students as well. 

Help students apply their own Amish Test to AI tools. This metacognitive skill—thinking about how they think and learn—may be more valuable than any specific technology…

[By doing so,] students learn to direct technology rather than be directed by it. They develop the discernment to ask: ‘How can I use this tool to become a better thinker, not just get faster answers?’

When this aptitude finally becomes ingrained in our nation’s classrooms, it may (at last) enable Americans to decide what is most important to us as a country—the commitments that bind us to one another, and not just the freedoms that we share—so we can finally start testing & taming our next transformational technology based on how it might unify the American people instead of dividing us.

For the past 4 months, I’ve been reporting on the state of American democracy’s checks & balances because nothing should be more important to our work as citizens than the continuing resilience of our democratic institutions. And while I assumed there might be some “wind-down” in executive orders and other actions by the Trump White House in the last few weeks of August, the onslaught in areas big & small continued to challenge our ability to respond to each of them in any kind of thoughtful way.

Other than mentioning this week’s bombing of an unidentified vessel in the Gulf of Mexico; the threat of troops to Chicago, Baltimore and New Orleans; turmoil at the Centers for Disease Control; an immigration raid on a massive EV plant in Georgia; more urging that Gaza should be turned into the next Riviera; the president’s design of a new White House ballroom; and Vladimir Putin’s repudiation of America’s most recent deadline on Ukraine, Trump’s leadership today faces two crossroads that may be even more worthy of your consideration.

At the Department of Labor in Washington D.C. this week

1.    What we’re seeing & hearing is either a fantasy or a preview.

In a subscriber newsletter from the New York Times this week, columnist Jamelle Bouie writes:

The administration-produced imagery in Washington is… a projection of sorts — a representation of what the president wants reality to be, drawn from its idea of what authoritarianism looks like. The banners and the troops — not to mention the strangely sycophantic cabinet meetings and news conferences — are a secondhand reproduction of the strongman aesthetic of other strongman states. It is as if the administration is building a simulacrum of authoritarianism, albeit one meant to bring the real thing into being. No, the United States is not a totalitarian state led by a sovereign Donald Trump — a continental Trump Organization backed by the world’s largest nuclear arsenal — but his favored imagery reflects his desire to live in this fantasy.

‘The spectacle that falsifies reality is nevertheless a real product of that reality, while lived reality is materially invaded by the contemplation of the spectacle and ends up absorbing it and aligning itself with it,’ the French social theorist Guy Debord wrote in his 1967 treatise ‘The Society of the Spectacle,’ a work that feels especially relevant in an age in which mass politics is as much a contest to construct meaning as it is to decide the distribution of material goods.

2.    Trump seems to be dealing with everything but “pocketbook issues”—or (in James Carville’s famous words during the 1992 presidential election), “It’s the economy, stupid.”

This week, the Wall Street Journal reported that after trending up in June and July, consumer sentiment dropped nearly 6% in August according to the University of Michigan’s “closely watched” economic sentiment survey. “More U.S. consumers now say they’re dialing down spending than when inflation spiked in 2022,” the article says. “Over 70% of people surveyed from May to July plan to tighten their budgets for items with large price increases in the year ahead….”

In a rejoinder, columnist Karl Rove mentioned a new WSJ/National Opinion Research Center poll that shows “voters are sour about their circumstances and pessimistic about the future.” As we head into the fall and towards the mid-terms next year, Rove opines: “It’ll take a lot more than happy talk” to counter these impressions. “People must see positive results when they shop, fuel up their cars, deposit paychecks and glance at their retirement accounts.”

As of this week, there is no sign that any plan for economic stability or growth is on the horizon, forecasting even more contentious, unsettling & expensive times ahead.

This post was adapted from my September 7, 2025 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning, and sometimes I post the content from one of them here, in lightly edited form. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: Ai, AI-tutor, Amish, Amish Test & Tame New Technologies, artificial intelligence, chatbots, ChatGPT, Kevin Kelly, Steve Hargadon

Great Design Invites Delight, Awe

June 4, 2025 By David Griesing

I replaced my iPhone this week. Along with my updated MacBook, it’s one of my two most essential devices. I realized that I interact with them more than with anyone else most days (except maybe Wally).

I’m startled to write this. 

The new phone brought renewed wonderment (even awe) at what personal devices like these can do, how they respond to my touch, and how indispensable they’ve become. It’s almost as if they should have names. Almost.

Steve Jobs and his lead designer at Apple, Jony Ive, wanted to create elegantly companionable objects that we’d want to touch, look at and listen to.

My current laptop still has that original, brushed aluminum skin that feels soft to my fingertips. One of the new suggested screen-savers on my phone features a low-earth-satellite image of my corner of the globe changing from the blues of daylight to the light specks of dusk as the day unfolds into night. Details like these add up.

So I was more than intrigued to learn this week that the same Jony Ive who’d imagined these devices into existence with Jobs has just joined up with the man who might be Jobs’ tech-world successor to bring us what they hope will be our Third Core Device (alongside the MacBook and iPhone).

In a video announcement that is pretty relatable in its own right, Sam Altman (the progenitor of the AI-driven ChatGPT) and Ive tell us the story of how, over the past few years, they decided to combine their resources to create and market a pocket-size, screen-free, non-wearable device that–through the mustering of its AI capabilities–could become our first “contextually aware” desktop device.

So while I’ll eventually get to my weekly update of Family Strongman developments (including a warning about the next phase of the current regime), it was a great relief this week to imagine with Altman and Ive how they (as opposed to Trump) might also be seizing part of my future before too many more months have passed.

So back to the Jony & Sam origin-story video that entered so many of our feeds in recent days. 

As is evident from the photo of Ive and Altman up-top, there seems to be some chemistry here: two “creative visualizers” sharing similar wavelengths, not unlike when Ive and Jobs were imagineering the iPhone and both of them could “just about see it.”

(I did a visual collaboration like this when I was just starting out. I had a play space for kids in mind and wanted to show others what it might look like because words could only take me and my listeners so far. Working with a collaborator “to see what I was seeing” was a powerful, deeply personal experience, and ultimately a highly practical one. Over several weeks, my visualizer made beautiful color drawings of what he’d heard me describe—and they became welcome accompaniments for my show & tells when I took my business plan on the road. They also pointed towards what my imagined space might have looked like if we had been even more in sync.)

During their recent announcement, Ive and Altman seemed to be sharing the kind of strangely intimate, “never-seen-before (except by the two of us)” mind space that I’d gotten only a glimpse of.

In a way, their story begins with Jobs’ passing and continues with Ive leaving his various roles at Apple six years ago to begin a design-work-experiment in the Jackson Square section of San Francisco. Together with Australian industrial designer Marc Newson, he hired architects, graphic designers, writers and a cinematic special-effects developer, all of whom were invited to work across three areas: work for the love of it (which they did without pay); work for some high-end clients (which paid the rent); and work for themselves (which included renovating a block’s worth of historical buildings into their new home).

Two years ago, Charlie (one of Ive’s 21-year-old twin sons) told him about ChatGPT. Following his son’s excitement over the chatbot to its source, Ive connected with Altman and so began Ive’s next partnership. In the video—which is at once promotional, anticipatory and destiny-drenched—Ive describes Altman, a longtime entrepreneur and founder of OpenAI, as “a rare visionary” who “shoulders incredible responsibility, but his curiosity, his humility remain utterly inspiring.” Altman describes Ive as “the deepest thinker of anyone I have ever met. What that leads him to be able to come up with is unmatched.” In a WSJ article about their new partnership, Ive is quoted as saying: “The way that we clicked, and the way that we’ve been able to work together, has been profound for me.”

Their brotherhood launched, and eighteen months ago Altman sent OpenAI’s chief product developer to work with Ive’s team of “subject matter experts” in hardware and software engineering, physics, and product manufacturing. Last fall, the two sides became excited about a specific device they were fabricating, and Altman started taking prototypes home to use. He told the Journal that their goal is to release a new AI-driven device (that is, tens of millions of units) to the public by late next year.

Of course, their even larger goal is to realize more of the promise of human-directed artificial intelligence than has been realized to date. As Altman says in the video:

“[With AI] I’m [already] two or three times more productive as a scientist than I was before. I’m two or three times faster to find a cure for cancer than I was before, because I have this incredible external brain.” [emphasis added]

And despite regular cautions about the future of AI—including here, in “Will We Domesticate AI in Time?”—Ive speaks, at the video’s conclusion, with almost childlike delight and wonder about what this new device might start bringing us.

“I think this will be one of these moments of just an absolute embarrassment of riches, of what people go create for collective society. I am absolutely certain that we are literally on the brink of a new generation of technology that can make us our better selves.”

It’s a beautifully expansive thought for a sadly claustrophobic time. 

Of course there’s more to this partnership and product announcement than the principals’ affection for each other and an eagerness to improve our lives with a new “family” of “AI companions” that can complement our phones, laptops and other screened devices.

But some of the story is also about their regrets and dissatisfactions with the tech-driven choices that we have today. For example, in the New York Times’ coverage of their new collaboration, Ive admits: “I shoulder a lot of the responsibility for what these things have brought us,” referring to the anxiety and distraction that come with being constantly connected to the computer-phone that he, almost as much as anyone, put in all of our pockets.

For his part, Altman focused on the information overload that technology without a gatekeeper assaults us with these days.

I don’t feel good about my relationship with technology right now. It feels a lot like being jostled on a crowded street in New York, or being bombarded with notifications and flashing lights in Las Vegas.

He told the Times that his goal was to leverage A.I. to help people make “better sense of the noise.”

Altman and Ive will also have to navigate a highly competitive environment if they’re to succeed. Facebook parent Meta, Google parent Alphabet, Snap and Apple all have their own approaches to AI devices in development; farthest along seem to be wearable glasses. An article in Barron’s at the time of the announcement mentions a now-defunct company (started by two ex-Apple employees, with Altman among its investors) that also tried to make a non-wearable device. Unable to overcome the technical challenges, its so-called Humane AI Pin was quickly undermined by performance glitches and scathing tech reviews.

Barron’s also brought a whiff of high-brow dabbling to Ive’s own track record since he left Apple in 2019:

All of [it]’s work has been in the ultra-luxury class. So far, its product designs are a jacket that starts at $2,000, a $60,000 limited edition turntable, some work for Ferrari…a logo and emblem for King Charles… and a partnership with Airbnb.

Nevertheless, if Altman and Ive’s revolutionary device succeeds, it could change the way that AI has been delivered to and experienced by early adopters of chatbots over the last two years. As the WSJ noted:

While Apple and Google have struggled to keep pace with AI innovations, many investors see the two companies—whose software runs nearly all the world’s smartphones—as the primary means through which billions of people will access AI tools and chatbots. Building a device is the only way OpenAI and other artificial-intelligence companies will be able to interact with consumers directly.

In other words, Altman and Ive could effectively bypass both Apple and Google with their new device, and all of the recent press reports provide tantalizing clues about how it will deliver—for both better and worse.

Barron’s reports that their new device will likely be cloud-based. That architecture has its advantages, but it also means the device will stop working when off-network and may be marred by “latency” (or delays in transmission) even when users are connected.

More groundbreakingly, both the Times and the Journal suggest that there will be something like sentience in the new device. 

The Times authors say Altman and Ives could spur what is known as “ambient computing” or a technology that makes digital interactions both natural and unobtrusive, as when a device in the background adapts to your needs without needing your commands. “Rather than typing and taking photographs on smartphones,” they note, such devices “could process the world in real time, fielding questions and analyzing images and sounds in seamless ways.”

Meanwhile, the WSJ teases (or frightens) us with the possibility that “[t]he product will be capable of being fully aware of a user’s surroundings and life, will be unobtrusive, able to rest in one’s pocket or on one’s desk”—in other words, more like a helpful companion or alter-ego than anything Silicon Valley has delivered to us before. 

At least for now, I’m choosing to see these possibilities as closer to magical than to anything else. 

A White House door-knob

This week’s Family Strongman links, images and comments include more on the president’s greed and self-dealing, which regularly feed the gilded image that he has of himself. 

They include moments that would be purely laughable if they did not also illustrate how Trump’s “art of the deal” is undermining America’s ability to negotiate in our country’s best interests with rivals like China and Russia. 

And I’ll leave you this week with some haunting observations by NYT columnist M. Gessen (who was born in the Soviet Union) on how difficult it can be to maintain the necessary level of public outrage and push-back against the cumulative actions of any authoritarian regime.

Here are this week’s Trump-related comments, links and images to help keep you on the frontlines:

1.    It’s good to recall that, right before the 2016 election, fellow New Yorker Fran Lebowitz called Mr. Trump in Vanity Fair “a poor person’s idea of a rich person.”

2.    “As the Trumps Monetize the Presidency, Profits Outstrip Protest” includes New York Times reporter Peter Baker’s take on at least two of the reasons why the president and his family have faced less resistance and outrage than they normally would have–and each is Trump-engineered. 

Mr. Trump, the first convicted felon elected president, has erased ethical boundaries and dismantled the instruments of accountability that constrained his predecessors. There will be no official investigations because Mr. Trump has made sure of it. He has fired government inspectors general and ethics watchdogs, installed partisan loyalists to run the Justice Department, F.B.I. and regulatory agencies and dominated a Republican-controlled Congress unwilling to hold hearings.

As a result, while Democrats and other critics of Mr. Trump are increasingly trying to focus attention on the president’s activities, they have had a hard time gaining any traction without the usual mechanisms of official review. And at a time when Mr. Trump provokes a major news story every day or even every hour — more tariffs on allies, more retribution against enemies, more defiance of court orders — rarely does a single action stay in the headlines long enough to shape the national conversation.

3.    After Trump repeatedly reversed course on his tariff threats, the TACO meme—Trump Always Chickens Out—went viral on social media this week. While it’s the kind of humor that punctures his ego nicely by calling his staying-power into question, it also reflects how he’s being viewed as a weakling in international disputes with adversaries like China and Russia, disputes that have significant consequences for Americans’ pocketbooks, at a minimum, and our national security, more broadly.

4.    A few days ago, in “Beware: We Are Entering a New Stage in the Trump Era,” M. Gessen wrote about (a) the cruelties that have taken place in America over the last 5 months, such as “gutting constitutional rights and civil protections,” chainsawing federal jobs without an efficiency plan, “brutal deportations,” “people snatched off the streets and disappeared in unmarked cars,” and attacks on universities and law firms; (b) how we’re wired as humans to eventually resign ourselves to harsh “realities” that we feel helpless to change; and (c) the consequences for our country when too many of us are “normalizing” Trump’s regime at a time when resistance to it is needed the most. In other words, Gessen fears that we’re becoming enervated in our opposition and alarm, and warns how dangerous that can be in a democracy that requires a certain level of citizen engagement.

For my part, I’m trying both to maintain “the threat level” and to focus on “the next turn of the regime” by occasionally stepping down from the barricades, like when I’m feeling wonder and even awe at the possibility of new, transformational, and maybe even more humane devices that could enrich my life and work in ways I can barely imagine.

This post was adapted from my June 1, 2025 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning, and sometimes I post the content from one of them here, in lightly edited form. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: Ai, artificial intelligence, creative visualizing, design, engineering, humane technology, industrial design, Io, Jony Ives, OpenAI, Sam Altman, screenless handheld AI powered device, tech optimism

Delivering the American Dream More Reliably

March 30, 2025 By David Griesing

We hear a lot about the dangers and “hallucinations” of AI as we test-drive our large language models. At the same time, we probably hear too little about how AI is helping us to advance our body of knowledge by processing huge volumes of data in previously unimagined ways. The benefits don’t always outweigh the risks, but sometimes they do—and in an unprecedented fashion.

I’m thinking today about how AI-driven assessments are starting to tell us whether the social policy “fixes” we implement today are actually achieving their intended results, instead of leaving us to speculate about their possible “payoffs” 10 or 20 years later. These new assessments can help us determine “the returns on our investments” when we attempt to improve our society by (say) providing paternity leave for fathers, multiplying our social connections, or enhancing the stock of affordable housing in vibrant communities.

Artificial intelligence is already enabling us to identify and refine the variables for public policy success beforehand and to keep track of the resulting benefits in something that approaches real time. 

I’ve written here several times about how too many of us are failing to achieve the American Dream. Put simply, that’s whether our economy is affording our nation’s children the opportunity to do better economically than their parents over succeeding generations. For Nobel Laureate Edmund Phelps, attaining the American Dream provides a “flourishing” that brings both us and our children psychic benefits (like pride and enhanced self-esteem) as well as greater prosperity and forward momentum. We calculate the likelihood of its returns in measures of opportunity—from having many opportunities to improve ourselves to having almost none available at all.

Over the past 90 years, for millions in the U.S. (or nearly everyone whose family is not in the top 20 percent income-wise) the quest to attain the American Dream has been disappointing at best, soul-crushing at worst. Our inability to reliably improve either our fortunes or those of the generations that succeed us unleashes a cascade of unfortunate consequences, such as widespread pessimism about the future, cynicism about our politics, an ever-widening gulf between economic “winners” and “losers,” a rise in “deaths of despair,” and a willingness to gamble on a leader who promises “a new golden age” but never reveals how anyone “who hasn’t gotten there already” will be able to reach it. 

This chart, from a presentation at the Milken Institute by economist and Harvard professor Raj Chetty, includes income measures from tax returns for both parents and their children (both at age 30). Using AI tools, it tracks “the percent of children earning more than their parents” through the mid-1980s. (The overall percentage has not improved between then and now.) Today, as in 1985, it is “essentially a 50/50 coin flip as to whether you are going to achieve the American Dream.” (A link that will enable a closer view of this chart, along with others included here, is provided below.)

During the New Deal of the 1930s and Great Society of the 1960s, a raft of social programs was launched to give Americans “who worked hard and were willing to sacrifice for the sake of better tomorrows” greater opportunities to improve their circumstances and live to enjoy “the even greater success” of their children. Unfortunately, our prior attempts to engineer the “economic playing field” so that it delivers the American Dream more reliably have often been little better than “shots in the dark.”

For instance, many New Deal initiatives didn’t succeed until the economic engines of the Second World War kicked in. The “anti-poverty” programs of the Great Society bore fruit in some areas (such as voting rights) while causing unexpected consequences in others, like the weakening of low-income families when welfare checks effectively “replaced” fathers’ traditional roles as breadwinners. In those days, policymakers meant well but lacked the assessment tools to know whether their fixes were working until 10 or 20 years out, when they’d sometimes discover that the original problem persisted, or the collateral damage from the policy itself became evident.

Today, new policy-making tools are eliminating much of this guesswork. AI-driven data gathering, experimentation within different communities, and almost “real-time” assessments of progress have begun to transform the ways that new economic policies are developed and implemented. Raj Chetty, the teacher and economist pictured here, is at the forefront of this sea change.

I’m profiling his work today because of the results that he, his team and his fellow-travelers in this big-data-driven space are beginning to achieve. But this work also injects a note of optimism into an increasingly pessimistic time. Policy delivery like Chetty’s points towards a future with greater economic promise than the majority of us can see today–when inflation persists, tariffs threaten even higher prices, and government safety nets are dismantled without apparent gains in efficiency. What Chetty calls his “Recipes for Social Mobility” (including his starting point in the chart above) provides a methodical, evidence-based way to craft, implement and assess the durability of economic policies that could help to deliver the American Dream to millions of anxious families today.

In recent months, Chetty has been doing a kind of “road show” that profiles the early progress of his AI-driven approach. I heard a lecture of his on-line from New Haven three weeks ago, which led me to another talk that he gave during a 2024 conference held at the Milken Center for [yes] Advancing the American Dream in California. The slides and quotations today are from Chetty’s Milken Center presentation and can be given either a listen or a closer look via this link to it on YouTube. 

After his first chart about “the fading American Dream,” Chetty presented an interactive U.S. map built upon meticulously assembled data that shows areas in the country where the children of low income parents have “greater” or “lesser” chances at upward social and economic mobility. Essentially, his team gathered income data on 20 million children born in the 1980s to households earning $27k per year in order to determine how many of those children went on to earn more than their parents—adjusted for inflation—at age 35, localized to the parts of the country where they were living at the time.
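The core arithmetic behind a chart like this is easy to sketch, even if assembling 20 million linked tax records is not. Here’s a toy illustration in Python (my own sketch, not Chetty’s actual code; all names and figures are hypothetical) of computing “the percent of children earning more than their parents” by area:

```python
from collections import defaultdict

def upward_mobility_by_area(records, inflation_factor):
    """records: (area, parent_income, child_income) tuples.
    Child incomes are deflated back to parent-era dollars before comparing."""
    counts = defaultdict(lambda: [0, 0])  # area -> [children who out-earn, total]
    for area, parent_income, child_income in records:
        real_child_income = child_income / inflation_factor
        counts[area][1] += 1
        if real_child_income > parent_income:
            counts[area][0] += 1
    return {area: ahead / total for area, (ahead, total) in counts.items()}

# Hypothetical records: parents earning $27k, children's later incomes.
sample = [
    ("Dubuque, IA", 27_000, 90_000),
    ("Dubuque, IA", 27_000, 60_000),
    ("Charlotte, NC", 27_000, 40_000),
    ("Charlotte, NC", 27_000, 80_000),
]
rates = upward_mobility_by_area(sample, inflation_factor=2.0)
# rates["Dubuque, IA"] is 1.0; rates["Charlotte, NC"] is 0.5
```

The hard part, of course, isn’t this comparison; it’s linking millions of parent-child records, adjusting for inflation, and localizing the results, which is where AI-scale tooling earns its keep.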

Chetty’s Geography of Social Mobility chart.

You’ll notice—somewhat surprisingly—that in this snapshot, kids of low-income parents enjoyed the greatest upward mobility in Dubuque, Iowa while actually losing the most ground compared with their parents in Charlotte, North Carolina over this time frame. 

I had some additional reactions (beyond my amazement at the richness of the data painted here). For one thing, if I were on Chetty’s team, I would use colors other than “red” and “blue” to illustrate differences in upward mobility across the U.S. This color palette falls too easily (and unnecessarily) into our current Red and Blue state narratives, or exactly the kinds of prejudices that grounding communities in actual data is meant to dispel.

While I watched Chetty talk about this slide, I also noticed that you can scan a code that lets you examine places you might be curious about in closer detail (such as where you live) by entering your zip code when prompted. When I did so, I already suspected that a child’s shot at upward mobility would be relatively low in my Philadelphia neighborhood, but was surprised to learn that it is far higher in many of the central Pennsylvania counties that have long been characterized as “a gun-loving, God-fearing slice of Alabama” between here and Pittsburgh.

While he spoke, Chetty highlighted “the microscopic views and comparisons” that a mapping tool like this allows, particularly when it confounds expectations. He describes, for example, how appalled Charlotte’s civic leaders were when learning about their “worst place finish” in this assessment and how it catalyzed new, similarly data-driven efforts to improve the prospects for that City’s children.

Chetty goes on to juxtapose this chart with an even more interesting one. At first glance one sees its similarities, but its differences are far more intriguing. 

Contrasting places in the U.S. where there is Economic Opportunity (or Upward Mobility) with places where there are greater or lesser amounts of Economic Connectedness and the kind of Social Capital that it produces.

The social capital that Chetty illustrates here is the same “commodity” that Bowling Alone’s Bob Putnam has been trying to build throughout his career, as described in my post a couple of weeks ago, “History Suggests that Better Days Could be Coming”. Putnam’s thesis goes like this: if you want to improve your community, state or nation, that drive begins by strengthening your in-person social connections, thereby increasing “the social capital” that’s available for spending when connected individuals wish to solve a problem or better their community’s circumstances. 

At its simplest, Chetty’s comparison chart shows those places in America where people from different socio-economic backgrounds are more or less connected to one another, and where there are greater or lesser accumulations of social capital as a result.

Chetty once again reminds us that localizing massive data sets in this manner allows those using these tools to dive even deeper into neighborhood, or even into street-by-street variations in both upward mobility and social capital. 

In his “economic connectedness” map, social capital accrues from the amount of “cross-class interaction” that occurs between high- and low-income people in each county, town and neighborhood in the U.S. This relationship is key because Chetty’s team had already established that “the single strongest predictor of your chances of rising up is how connected you [or those most in need of “upward mobility”] are to higher income folks,” as opposed to living in a place where nearly everyone is on the same rung of the economic ladder.

To compile this chart, Chetty collaborated with Mark Zuckerberg and Facebook’s “core data science team” to access the voluminous data the social network has gathered on the 72 million Americans who use the platform. He wanted to identify low-income users and determine how many “above median income friends” each one of them has, before breaking that aggregate snapshot down with his powerful mapping tool. 
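As I understand the measure, it can be sketched like this: score each low-income person by the share of their friends whose income sits above the national median, then average those shares by area. The Python below is a hypothetical toy version (my own names and data, not Facebook’s actual schema or Chetty’s code):

```python
from statistics import median

def economic_connectedness(incomes, friendships):
    """incomes: {person: (area, income)}.
    friendships: set of frozenset({person_a, person_b}) pairs.
    Returns {area: average share of above-median friends among low-income residents}."""
    med = median(income for _, income in incomes.values())
    shares_by_area = {}
    for person, (area, income) in incomes.items():
        if income >= med:
            continue  # only low-income residents are scored
        friends = [q for pair in friendships if person in pair
                   for q in pair if q != person]
        if not friends:
            continue
        above = sum(1 for f in friends if incomes[f][1] > med)
        shares_by_area.setdefault(area, []).append(above / len(friends))
    return {area: sum(s) / len(s) for area, s in shares_by_area.items()}

# Hypothetical people: "a" and "d" are low-income; "b" and "c" are not.
toy_incomes = {"a": ("Philly", 20_000), "b": ("Philly", 90_000),
               "c": ("Philly", 95_000), "d": ("Altoona", 22_000)}
toy_friends = {frozenset(p) for p in [("a", "b"), ("a", "c"), ("a", "d")]}
ec = economic_connectedness(toy_incomes, toy_friends)
# ec["Philly"] == 2/3 (person "a" has 2 above-median friends out of 3)
# ec["Altoona"] == 0.0 (person "d"'s only friend is also low-income)
```

At Facebook’s scale the friendship set is enormous, but the metric itself stays this simple; the mapping tool’s power comes from localizing it county by county.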

Connections across income classes produce opportunities “like getting a job referral, or an internship.” But Chetty also identified an “aspirational” component when members of different economic classes interact with one another on a regular basis.

If you’ve never met somebody who went to college, you don’t think about that as a possibility for you. If you’re in a community where you’ve seen more people succeed in certain career pathways, that can change kids’ lives…

Once again, a few of my reactions to the comparisons these big-data snapshots invite. 

A detailed view of the mid-Atlantic in general, and Philadelphia in particular, on Chetty’s mapping of Economic Connectedness.

Despite Philadelphia’s “relatively weak” score on upward mobility, I was also not surprised that my part of the state ranks as “relatively strong” (or a medium shade of blue) when it comes to the social capital that’s produced by our economic connectedness. Among many other things, that means those of us in Southeastern Pennsylvania already have a relatively-strong foundation for driving greater upward mobility, along with more helpful data about our localized advantages and challenges as we dig deeper into our particular blocks on this map.  

On the other hand, I found the social policy solution that Chetty profiled in his talk somewhat disappointing, although it seemed to me that the experimental template that gave rise to it would be a serviceable-enough incubator for additional policies going forward. 

He describes at length a test study his team initiated in Seattle involving low-income households with subsidized (formerly Section 8) housing vouchers. Their first discovery was that most voucher holders try to use them in their own communities, with little or no gain in economic connectedness. They then realized that while “real-estate brokers” are commonly used for finding places to live in higher income communities, their equivalent is non-existent for those who want to get “the most bang for the buck” out of the $2500 credit in one of these housing vouchers.

Chetty’s team concluded that if a sponsor (e.g. a local government, for-profit or non-profit) wanted to build social capital for low-income households, it could spend what amounted to 2% of the value of each voucher to hire “brokers” to help low-income residents find housing in communities with greater economic connectedness than the uniformly impoverished neighborhoods where most of them lived. 

This solution was affordable and it quickly built social capital for low-income individuals, but even under the best of circumstances it is unlikely to impact enough households because of the limited amount of affordable housing in most higher income communities, a fact that Chetty readily admits:

I don’t want to give the impression that I think the desegregation approach, moving people to different areas, is the only thing we should do. Obviously, that’s not going to be a scalable approach in and of itself.

But this demonstration of how to engineer a social policy illustrates the potential for modeling and testing reforms that can attract “smarter, evidence-driven investments” as mapping tools like these are refined and used by more policy makers. 

Chetty’s Seattle experiment also puts a spotlight on social programs that increase economic connectedness. While the parents who were able to move from low income communities to mixed income neighborhoods surely had an opportunity to realize gains in social capital, it’s their children who stood to benefit the most from more diverse schools, better playgrounds and exposure to career options they might never have considered before.

What motivates Chetty, his team and his hosts at the Milken Institute the most are the opportunities that these AI-driven, data-rich tools will be presenting in the very near future to the millions who are pursuing the American Dream but failing to achieve it.

Twenty years ago, a civil rights organization that sought to open pathways towards college and upward mobility had as its memorable motto: “A mind is a terrible thing to waste.”

With a conclusion as obvious as that in mind, I’ll give Raj Chetty’s final presentation slide some of the last words here about assets that we’ve been wasting for far too long.

The box reads: “If women, minorities, and children from low income families invent at the same rate as high-income white men, the innovation rate in America would quadruple.“

I guess I would prefer to make this slide more powerful still.

It’s true that we’re wasting many of our most valuable people-assets in the U.S. today, but “delivering the American Dream more reliably” is not the legacy of “high-income white men.” First off, many of our most successful innovators today aren’t “white” but are people of color, immigrants and their descendants (like Chetty himself). Moreover, this is an 80%-of-America-size problem (or everyone who’s NOT in the top 20% income-wise), not a burden that’s only carried by previously marginalized communities. I believe that Chetty’s ground-breaking work will attract the base of support that it deserves if slides like this are modified to reflect the true magnitude of our Lost Einsteins. So I don’t know how Chetty’s team quantified the “lost opportunities” highlighted here as “quadruple” the number of our current innovators, but I’d wager that’s an undercount.

+ + +

For those who are interested, I’ve written about our frustrated pursuit of the American Dream several times before. These posts include: 

  • “The Great Resignation is an Exercise in Frustration and Futility” (citing data that government management of the economy has caused our middle and lower classes to realize essentially the same income due to government transfer payments, arguing that perverse incentives such as “these redistributions of wealth also stifle upward mobility”);
  • “Let’s Revitalize the American Dream” (citing a 2015 study that found the U.S. ranks “among the lowest of all developed countries in terms of the potential for upward mobility despite clinging to the mythology of Horatio Alger”); and
  • “America Needs a Rebranding Campaign” (If “equality of opportunity” is really our touchstone as a nation, then it “needs to infuse every brand touchpoint” of ours, including our “packaging, public relations, advertising, services, partnerships, social responsibility, HR & recruitment, loyalty programs, events & activations, user experience, sourcing & standards, and product portfolio.” In other words, America needs “to start walking the equality-of-opportunity walk,” instead of just talking about it.)

This post was adapted from my March 9, 2025 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning, and sometimes I post the content from one of them here in lightly edited form. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: Ai, American dream, artificial intelligence, economic connectedness, economic opportunity, Lost Einsteins, Milken Center for Advancing the American Dream, powerful mapping tools, Raj Chetty, social capital, upward mobility

Will We Domesticate AI in Time?

February 9, 2024 By David Griesing Leave a Comment

As you know from holiday post-cards, I spent time recently with Emily and Joe in Salt Lake City. They were full, rich, activating days and I’m still sorting through the many gifts I received.

The immensity of the mountains in their corner of Utah was impossible to ignore, though I wasn’t there to ski them as much as to gaze up in wonder at their snow-spattered crowns for the first time. Endless, low-level suburbs also extend to their foothills in every direction, and while everyone says “It’s beautiful here” they surely mean “the looking up” and not “the looking down.”

Throughout my visit, SLC’s sprawl was a reminder of how far our built environments fall short of our natural ones in a freedom-loving America that disdains anyone’s guidance on what to build and where. Those who settled here have filled this impossibly majestic valley with an aimless, caramel-and-rosy low-rise jumble that from any elevation has a fog-topping of exhaust in the winter and (most likely) a mirage-y shimmer of vaporous particulates the rest of the year.

So much for Luke 12:48 (“To whom much has been given….”)

Why not extrapolations of the original frontier towns, I wondered, instead of this undulating wave of discount centers, strip malls, and low-slung industrial and residential parking lots that have taken them over in every direction?

I suppose they’re the tangible manifestation of resistance to governmental guidance (or really any kind of collective deliberation) on what we can or can’t, should or shouldn’t be doing when we create our homelands—leaving it to “the quick buck” instead of any consciously-developed, long-term vision to determine what surrounds us. 

Unfortunately I fear that this same deference to freedom (or perhaps more aptly, to its “free-market forces”) may be just as inevitable when it comes to artificial intelligence (or AI). So I have to ask: Instead of making far, far less than we could from AI’s similarly awesome possibilities, why not commit to harnessing (and then nurturing) this breathtaking technology so we can achieve the fullest measure of its human-serving potential?

Unfortunately, my days beneath a dazzle of mountain ranges showed me how impoverished this end game could also become.

Boris Eldagsen submitted this image, called “Pseudomnesia: The Electrician” to a recent, Sony world photography competition. When he won in the contest’s “creative open” category, he revealed that his image was AI-generated, going on to donate his prize money to charity. As Eldagsen said at the time: “Is the umbrella of photography large enough to invite AI images to enter—or would that be a mistake? With my refusal of the award, I hope to speed up this debate” about what is “real” and “acceptable” in the art world and what is not.

The debate over that and similar questions should probably begin with a summary appreciation of AI’s nearly-miraculous as well as fearsomely-catastrophic possibilities. Both were given a preview in a short interview with the so-called “Godfather of AI,” Geoffrey Hinton, on the 60 Minutes TV-newsmagazine a couple of months ago. 

Looking a bit like Dobby from the Harry Potter films, Hinton brought a sense of calm and perspective to as compelling a story as I’ve heard about the potential up- and down-sides of “the artificial neural networks” that he first helped to assemble five decades ago. Here are a few of its highlights:

  • after the inevitable rise of AI, humans will become the second most intelligent beings on Earth. For instance, in 5 years Hinton expects “that ChatGPT might well be able to reason better than us”;
  • AI systems can already understand. For example, even with the autocomplete features that interrupt us whenever we’re texting and emailing, the artificial intelligence that drives them has to “understand” what we’ve already typed as well as what we’re likely to add in order to offer up its suggestions;
  • Through trial and error, AI systems learn as they go (i.e. machine learning) so that the system’s “next” guess or recommendation is likely to be more accurate (or closer to what the user is looking for) than its “last” response. That means AI systems can improve their functioning without additional human intervention. Among other things, this capacity gives rise to fears that AI systems could gain certain advantages over or even come to dominate their human creators more generally as they continue to get smarter.
  • Hinton is proud of his contributions to AI-system development, especially the opportunities it opens in health care and in developing new drug-treatment protocols. But in addition to AI’s dominating its creators, he also fears for the millions of workers “who will no longer be valued” when AI systems take over their jobs, the even broader dissemination of “fake news” that will be turbo-charged by AI, as well as the use of AI-enabled warriors on tomorrow’s battlefields. Because of the speed of system advancements, he urges global leaders to face these challenges sooner rather than later. 

Finally, Hinton argues for broader experimentation and regulation of AI outside of the tech giants (like Microsoft, Meta and Google). Why? Because these companies’ primary interest is in monetizing a world-changing technology instead of maximizing its potential benefits for the sake of humanity. As you undoubtedly know, over the past two years many in the scientific-research, public-policy and governance communities have echoed Hinton’s concerns in widely-publicized “open letters” raising alarm over AI’s commercialization today.

Hinton’s to-do list is daunting, particularly at a time when many societies (including ours) are becoming more polarized over what constitutes “our common goods.” Maybe identifying a lodestar we could all aim for eagerly–like capitalizing on the known and (as yet unknown) promises of AI that can benefit us most–might help us to find some agreement as the bounty begins to materialize and we begin to wonder how to “spend” it. Seeing a bold and vivid future ahead of us (instead of merely the slog that comes from risk mitigation) might give us the momentum we lack today to start making more out of AI’s spectacular frontier instead of less. 

So what are the most thoughtful among us recommending in these regards? Because, once again, it will be easier to limit some of our freedoms around a new technology with tools like government regulation and oversight if we can also envision something that truly dazzles us at the end of the long, domesticating road.

Over the past several months, I’ve been following the conversation—alarm bells, recommended next steps, more alarm bells—pretty closely and it’s easy to get lost in the emotional appeals and conflicting agendas.  So I was drawn this week to the call-to-action in a short essay entitled “Why the U.S. Needs a Moonshot Mentality for AI—Led by the Public Sector.”  Its engaging appeal, co-authored by Fei-Fei Li and John Etchemendy at the Stanford Institute for Human-Centered Artificial Intelligence, is the most succinct and persuasive one I’ve encountered on what we should be doing now (and encouraging others with influence to be doing) if we want to shower ourselves with the full range of AI’s benefits while minimizing its risks.

Their essay begins with a review of the nascent legislative efforts that are currently underway in Congress to place reasonable guardrails around the most apparent of AI’s misguided uses.  A democratic government’s most essential function is to protect its citizens from those things (like foreign enemies during wartime) that only it can protect us from. AI poses that category of individual and national threat in terms of spreading disinformation, and the authors urge quick action on some combination of the pending legislative proposals.

Li and Etchemendy then talk about the parties that are largely missing from the research labs where AI is currently being developed.

As we’ve done this work, we have seen firsthand the growing gap in the capabilities of, and investment in, the public compared with private sectors when it comes to AI. As it stands now, academia and the public sector lack the computing power and resources necessary to achieve cutting edge breakthroughs in the application of AI.

This leaves the frontiers of AI solely in the hands of the most resourced players—industry and, in particular, Big Tech—and risks a brain drain from academia. Last year alone, less than 40% of new Ph.D.s in AI went into academia and only 1% went into government jobs.

The authors are also justifiably concerned by the fact that policy makers in Washington have been listening, almost exclusively, to commercial AI developers like Sam Altman and Elon Musk and not enough to leaders from the academy and civil society. They are, if anything, even more outraged by the fact that “America’s longstanding history of creating public goods through science and technology” (think of innovations like the internet, GPS, MRIs) will be drowned out by the “increasingly hyperbolic rhetoric” that’s been coming out of the mouths of some “celebrity Silicon Valley CEOs” in recent memory.

They readily admit that “there’s nothing wrong with” corporations seeking profits from AI. The central problem is that those who might approach the technology “from a different [non-commercial] angle [simply] don’t have the [massive] computing power and resources to pursue their visions” that the profit-driven have today. It’s almost as if Li and Etchemendy want to level the playing field and introduce some competition between Big Tech and those who are interested (but currently at a disadvantage) in the academy and the public sector over who will be the first to produce the most significant “public goods” from AI.

Toward that end:

We also encourage an investment in human capital to bring more talent to the U.S. to work in the field of AI within academia and the government.

[W]hy does this matter? Because this technology isn’t just good for optimizing ad revenue for technology companies, but can fuel the next generation of scientific discovery, ranging from nuclear fusion to curing cancer.

Furthermore, to truly understand this technology, including its sometimes unpredictable emergent capabilities and behaviors, public-sector researchers urgently need to replicate and examine the under-the-hood architecture of these models. That’s why government research labs need to take a larger role in AI.

And last (but not least), government agencies (such as the National Institute of Standards and Technology) and academic institutions should play a leading role in providing trustworthy assessments and benchmarking of these advanced technologies, so the American public has a trusted source to learn what they can and can’t do. Big tech companies can’t be left to govern themselves, and it’s critical there is an outside body checking their progress.

Only the federal government can “galvanize the broad investment in AI” that produces a level playing field where researchers within our academies and governmental bodies can compete with the brain trusts within our tech companies to produce the full harvest of public goods from a field like AI. In their eyes it will take competitive juices (like those unleashed by Sputnik, which propelled America to the moon a little more than a decade later) to achieve AI’s true promise.

If their argument piques your interest like it did mine, there is a great deal of additional information on the Stanford Institute site where the authors profile their work and that of their colleagues. It includes a three-week, on-line program called AI4ALL where those who are eager to learn more can immerse themselves in lectures, hands-on research projects and mentoring activities; a description of the “Congressional bootcamp,” offered to representatives and their staffs last August and likely to be offered again; and the Institute’s white paper on building “a national AI resource” that will provide academic and non-profit researchers with the computing power and government datasets needed for both education and research.

To similar effect, I also recommend this June 12, 2023 essay in Foreign Policy. It covers some of the same territory as these Stanford researchers and similarly urges legislators to begin to “reframe the AI debate from one about public regulation to one about public development.”

It doesn’t take much to create a viral sensation, but when they were published these AI-generated images certainly created one. Here’s the short story behind “Alligator-Pow” and “-Pizza.” At some point in the future, we could look back to the olden days when AI’s primary contributions were to make us laugh or to help us to finish our text messages.

Because we’ll (hopefully) be reminiscing in a future when AI’s bounty has already changed us in far more profound and life-affirming ways.

If the waves of settlers in Salt Lake City had believed that they could build something that aspired to the grandeur of the mountains around them—like the cathedrals of the Middle Ages or even some continuation of the lovely and livable villages that many of them had left behind in Northern Europe—they might not have “paved Paradise and put up a parking lot” (as one of their California neighbors once sang).

In similar ways, having a worthy vision today, one that’s realized by the right gathering of competitors, could make the necessary difference when it comes to artificial intelligence.

So will we domesticate AI in time? 

Only if we can gain enough vision to take us over the “risk” and “opportunity” hurdles that are inhibiting us today. 

This post was adapted from my January 7, 2024 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning, and sometimes I post the content from one of them here. You can subscribe (and not miss any) by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: Ai, artificial intelligence, Fei-Fei Li, Geoffrey Hinton, John Etchemendy, making the most out of an opportunity, Stanford Institute for Human-Centered Artificial Intelligence

Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing Leave a Comment

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled, personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the right of privacy that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything that they need to know about us?

Overwhelmed by the draw of  “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.”  Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to relinquish the advantages of these platforms when they are unavailable elsewhere. 
 
So is there really “no way out”?  
 
A rising crescendo of voices is gradually finding a way, and they are coming at it from several different directions.
 
In places like Toronto, London, Helsinki, Chicago and Barcelona, policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1. Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past six months, I’ve talked repeatedly about smart-city technologies, and one of you reached out to me this week wondering: “What (exactly) are these new technologies?” (Thanks for your question, George!)  
 
As a general matter, smart-city technologies gather and analyze information about how a city functions, while improving urban decision-making around that new information. Throughout, these data-gathering,  analyzing, and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
 
Of course, the potential benefits of greater or more equitable access to city services, as well as their optimized delivery, are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that are currently being wasted, permitting their reinvestment in areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve been extolling the optimism that drove Toronto to launch its smart-cities initiative called Quayside, and noting how its debate has entered a stormy patch more recently. Amidst the finger pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-undisclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a City-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart cities technology vendors to announce its intentions up front about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective– why is the AI needed and what outcomes is it intended to enable?
  • Use– in what processes and circumstances is the AI appropriate to be used?
  • Impacts– what impacts, good and bad, could the use of AI have on people?
  • Assumptions– what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data– what data is/was the AI trained on, and what are its limitations and potential biases?
  • Inputs– what new data does the AI use when making decisions?
  • Mitigation– what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics– what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight– what human judgment is needed before acting on the AI’s output and who is responsible for ensuring its proper use?
  • Evaluation– how and by what criteria will the effectiveness of the AI in this smart-city system be assessed and by whom?
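To make the framework concrete, here is a minimal sketch that treats Smart London’s ten questions as a gating checklist, where a proposed AI system advances to public debate only once every question has a recorded, substantive answer. (Smart London publishes no such code or schema; the field names and logic below are purely illustrative.)

```python
# Hypothetical sketch: Smart London's ten questions as a gating checklist.
# A proposal is ready for public debate only when every question has a
# substantive (non-empty) recorded answer.

QUESTIONS = [
    "objective", "use", "impacts", "assumptions", "data",
    "inputs", "mitigation", "ethics", "oversight", "evaluation",
]

def unanswered(answers: dict) -> list:
    """Return the questions that still lack a substantive answer."""
    return [q for q in QUESTIONS if not answers.get(q, "").strip()]

def ready_for_debate(answers: dict) -> bool:
    """A proposal may advance only once every question is answered."""
    return not unanswered(answers)

# Example: a traffic-signal proposal with only three questions answered.
proposal = {
    "objective": "Reduce congestion at twelve intersections.",
    "use": "Signal timing only; no enforcement uses.",
    "impacts": "Shorter commutes; risk of rerouted traffic on side streets.",
}
```

Under this sketch, `unanswered(proposal)` would flag the seven remaining questions (assumptions, data, inputs, mitigation, ethics, oversight, evaluation), and `ready_for_debate(proposal)` would return `False` until they are addressed.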

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, this initial ventilation process takes a long time and hard work. Moreover, it is difficult (and maybe impossible) to conduct while negotiations with the technology vendor are on-going or that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational, tech-driven opportunities are presented to the public.


2. A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process, where people who never formed a full-blown preference on an issue like personal data privacy (or were simply reluctant to express one because the trade-offs for “free” platforms seemed acceptable to everybody else) become more comfortable giving voice to their original qualms. In support, he cites a remarkable study of how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about these issues and to champion what is important to us in dialogue with the people who live and work alongside us. New grounds for protecting our personal information are coming out of the smart-cities debate, and we are already deciding where new privacy lines should be drawn around us. 

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: Ai, artificial intelligence, Cass Sunstein, dataveillance, democracy, how change happens, norms, personal data brokers, personal privacy, privacy, Quayside, Sidewalk Labs, smart cities, Smart City, surveillance capitalism, Toronto, values

