David Griesing | Work Life Reward Author | Philadelphia


Great Design Invites Delight, Awe

June 4, 2025 By David Griesing

I replaced my iPhone this week. With my updated MacBook, they’re my two most essential devices.  I realized that I interact with them more than with anyone else most days (except maybe Wally). 

I’m startled to write this. 

The new phone brought renewed wonderment (even awe) at what a personal device like this can do, how they respond to my touch, and how indispensable they’ve become. It’s almost as if they should have names. Almost.

Steve Jobs and his lead designer at Apple, Jony Ive, wanted to create elegantly companionable objects that we’d want to touch, look at and listen to.

My current laptop still has that original, brushed-aluminum skin that feels soft to my fingertips. One of the new, suggested screen-savers on my new phone features a low-earth-satellite image of my corner of the globe changing from the blues of daylight to the light specks of dusk as the day unfolds into night. Details like these add up.

So I was more than intrigued to learn this week that the same Jony Ive who’d imagined these devices into existence with Jobs has just joined up with the man who might be Jobs’ tech-world successor to bring us what they hope will be our Third Core Device (alongside the MacBook and iPhone).

In a video announcement that is pretty relatable in its own right, Sam Altman (the progenitor of the AI-driven ChatGPT) and Ive tell us the story of how, over the past few years, they decided to combine their resources to create and market a pocket-size, screen-free, non-wearable device that—through the mustering of its AI capabilities—could become our first “contextually aware” desktop device.

So while I’ll eventually get to my weekly update of Family Strongman developments (including a warning about the next phase of the current regime)—it was a great relief this week to imagine with Altman and Ives how they (as opposed to Trump) might also be seizing part of my future before too many more months have passed. 

So back to the Jony & Sam origin-story video that entered so many of our feeds in recent days. 

As is evident from the photo of Ive and Altman up top, there seems to be some chemistry here: two “creative visualizers” sharing similar wavelengths, not unlike when Ive and Jobs were imagineering the iPhone and both of them could “just about see it.”

(I did a visualizing collaboration like this when I was just starting out. I had a play space for kids in mind and wanted to show others what it might look like, because words could only take me and my listeners so far. Working with a collaborator “to see what I was seeing” was a powerful, deeply personal experience, and ultimately a highly practical one. Over several weeks, my visualizer made beautiful color drawings of what he’d heard me describe—and they became welcome accompaniments for my show & tells when I took my business plan on the road. They also pointed towards what my imagined space might have looked like if we had been even more in sync.)

During their recent announcement, Ive and Altman seemed to be sharing the kind of strangely intimate, “never-seen-before (except by the two of us)” mind space that I’d gotten only a glimpse of.

In a way, their story begins with Jobs’ passing and continues with Ive leaving his diverse roles at Apple six years ago to begin a design-work-experiment in the Jackson Square section of San Francisco. Together with Australian industrial designer Marc Newson, he hired architects, graphic designers, writers and a cinematic special-effects developer, all of whom were invited to work across three areas: work for the love of it (which they did without pay); work for some high-end clients (which paid the rent); and work for themselves (which included renovating a block’s worth of historical buildings into their new home).

Two years ago, Charlie (one of Ive’s 21-year-old twin sons) told him about ChatGPT. Following his son’s excitement over the chatbot to its source, Ive connected with Altman, and so began Ive’s next partnership. In the video—which is at once promotional, anticipatory and destiny-drenched—Ive describes Altman, a longtime entrepreneur and founder of OpenAI, as “a rare visionary” who “shoulders incredible responsibility, but his curiosity, his humility remain utterly inspiring.” Altman describes Ive as “the deepest thinker of anyone I have ever met. What that leads him to be able to come up with is unmatched.” In a WSJ article about their new partnership, Ive is quoted as saying: “The way that we clicked, and the way that we’ve been able to work together, has been profound for me.”

Their brotherhood launched, and eighteen months ago Altman sent OpenAI’s chief product developer to work with Ive’s team of “subject matter experts” in hardware and software engineering, physics, and product manufacturing. Last fall, the two sides became excited about a specific device they were fabricating, and Altman started taking prototypes home to use. He told the Journal that their goal is to release the new AI-driven device (that is, tens of millions of units) to the public by late next year.

Of course, their even larger goal is to realize more of the promise of human-directed artificial intelligence than has been realized to date. As Altman says in the video:

“[With AI] I’m [already] two or three times more productive as a scientist than I was before. I’m two or three times faster to find a cure for cancer than I was before, because I have this incredible external brain.” [emphasis added]

And despite regular cautions about the future of AI—including here, in “Will We Domesticate AI in Time?”—Ive speaks at the video’s conclusion with almost childlike delight and wonder about what this new device might start bringing us.

“I think this will be one of these moments of just an absolute embarrassment of riches, of what people go create for collective society. I am absolutely certain that we are literally on the brink of a new generation of technology that can make us our better selves.”

It’s a beautifully expansive thought for a sadly claustrophobic time. 

Of course there’s more to this partnership and product announcement than the principals’ affection for each other and an eagerness to improve our lives with a new “family” of “AI companions” that can complement our phones, laptops and other screened devices.

But some of the story is also about their regrets and dissatisfactions with the tech-driven choices that we have today. For example, in the New York Times’ coverage of their new collaboration, Ive admits: “I shoulder a lot of the responsibility for what these things have brought us,” referring to the anxiety and distraction that come with being constantly connected to the computer-phone that he, almost as much as anyone, put in all of our pockets.

For his part, Altman focused on the information overload that technology without a gate-keeper assaults us with these days. 

I don’t feel good about my relationship with technology right now. It feels a lot like being jostled on a crowded street in New York, or being bombarded with notifications and flashing lights in Las Vegas.

He told the Times that his goal was to leverage A.I. to help people make “better sense of the noise.”

Altman and Ive will also have to navigate a highly competitive environment if they’re to succeed. Facebook parent Meta, Google parent Alphabet, Snap and Apple all have their own approaches to AI devices in development; farthest along seem to be wearable glasses. An article in Barron’s at the time of the announcement mentions a now-defunct company that was started by two ex-Apple employees, and where Altman had also been an investor, to make a non-wearable device. Its so-called Humane AI Pin never overcame its technical challenges, and it was quickly undermined by performance glitches and scathing tech reviews.

Barron’s also brought a whiff of high-brow dabbling to Ive’s own track record since he left Apple in 2019:

All of [it]’s work has been in the ultra-luxury class. So far, its product designs are a jacket that starts at $2,000, a $60,000 limited edition turntable, some work for Ferrari…a logo and emblem for King Charles… and a partnership with Airbnb.

Nevertheless, if Altman and Ive’s revolutionary device succeeds, it could change the way that AI has been delivered to and experienced by early adopters of chatbots over the last two years. As the WSJ noted:

While Apple and Google have struggled to keep pace with AI innovations, many investors see the two companies—whose software runs nearly all the world’s smartphones—as the primary means through which billions of people will access AI tools and chatbots. Building a device is the only way OpenAI and other artificial-intelligence companies will be able to interact with consumers directly.

In other words, Altman and Ive could effectively bypass both Apple and Google with their new device, and all of the recent press reports provide tantalizing clues about how it will deliver—for both better and worse.

Barron’s reports that their new device will likely be cloud-based. While that gives it the power of server-side AI, it also means the device will stop working when off-network, and it may be marred by “latency” (or delays in transmission) even when users are connected.

More groundbreakingly, both the Times and the Journal suggest that there will be something like sentience in the new device. 

The Times authors say Altman and Ives could spur what is known as “ambient computing” or a technology that makes digital interactions both natural and unobtrusive, as when a device in the background adapts to your needs without needing your commands. “Rather than typing and taking photographs on smartphones,” they note, such devices “could process the world in real time, fielding questions and analyzing images and sounds in seamless ways.”

Meanwhile, the WSJ teases (or frightens) us with the possibility that “[t]he product will be capable of being fully aware of a user’s surroundings and life, will be unobtrusive, able to rest in one’s pocket or on one’s desk”—in other words, more like a helpful companion or alter-ego than anything Silicon Valley has delivered to us before. 

At least for now, I’m choosing to see these possibilities as closer to magical than to anything else. 

A White House door-knob

This week’s Family Strongman links, images and comments include more on the president’s greed and self-dealing, which regularly feed the gilded image that he has of himself. 

They include moments that would be purely laughable if they did not also illustrate how Trump’s “art of the deal” is undermining America’s ability to negotiate in our country’s best interests with rivals like China and Russia. 

And I’ll leave you this week with some haunting observations by NYT columnist M. Gessen (who was born in the Soviet Union) on how difficult it can be to maintain the necessary level of public outrage and push-back against the cumulative actions of any authoritarian regime.

Here are this week’s Trump-related comments, links and images to help keep you on the frontlines:

1.    It’s good to recall that, in Vanity Fair right before the 2016 election, fellow New Yorker Fran Lebowitz called Mr. Trump “a poor person’s idea of a rich person.”

2.    “As the Trumps Monetize the Presidency, Profits Outstrip Protest” includes New York Times reporter Peter Baker’s take on at least two of the reasons why the president and his family have faced less resistance and outrage than they normally would have–and each is Trump-engineered. 

Mr. Trump, the first convicted felon elected president, has erased ethical boundaries and dismantled the instruments of accountability that constrained his predecessors. There will be no official investigations because Mr. Trump has made sure of it. He has fired government inspectors general and ethics watchdogs, installed partisan loyalists to run the Justice Department, F.B.I. and regulatory agencies and dominated a Republican-controlled Congress unwilling to hold hearings.

As a result, while Democrats and other critics of Mr. Trump are increasingly trying to focus attention on the president’s activities, they have had a hard time gaining any traction without the usual mechanisms of official review. And at a time when Mr. Trump provokes a major news story every day or even every hour — more tariffs on allies, more retribution against enemies, more defiance of court orders — rarely does a single action stay in the headlines long enough to shape the national conversation.

3.    After Trump repeatedly reversed course on tariff threats, the TACO meme—Trump Always Chickens Out—went viral on social media this week. While it’s the kind of humor that punctures his ego nicely by calling his staying-power into question, it also reflects how he’s being viewed as a weakling in international disputes with adversaries like China and Russia, disputes that have significant consequences for Americans’ pocketbooks, at a minimum, and our national security, more broadly.

4.    In “Beware: We are Entering a New Stage in the Trump Era,” published a few days ago, M. Gessen wrote about (a) the cruelties that have taken place in America over the last 5 months, such as the “gutting constitutional rights and civil protections,” chainsawing federal jobs without an efficiency plan, “brutal deportations,” “people snatched off the streets and disappeared in unmarked cars,” and attacks on universities and law firms; (b) how we’re wired as humans to eventually resign ourselves to harsh “realities” that we feel helpless to change; and (c) the consequences for our country when too many of us are “normalizing” Trump’s regime at a time when resistance to it is needed the most. In other words, Gessen fears that we’re almost becoming enervated by our own opposition and alarm, and warns how dangerous that can be in a democracy that requires a certain level of citizen engagement.

For my part, I’m trying to both maintain “the threat level” and focus on “the next turn of the regime” by occasionally stepping down from the barricades, like when I’m feeling wonder and even awe at the possibility of new, transformational and maybe even more humane devices that could enrich my life and work in ways I can barely imagine.

This post was adapted from my June 1, 2025 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning, and sometimes I post the content from one of them here, in lightly edited form. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: Ai, artificial intelligence, creative visualizing, design, engineering, humane technology, industrial design, Io, Jony Ives, OpenAI, Sam Altman, screenless handheld AI powered device, tech optimism

Delivering the American Dream More Reliably

March 30, 2025 By David Griesing

We hear a lot about the dangers and “hallucinations” of AI as we test-drive our large language models. At the same time, we probably hear too little about how AI is helping us to advance our body of knowledge by processing huge volumes of data in previously unimagined ways. The benefits don’t always outweigh the risks, but sometimes they do—and in an unprecedented fashion.

I’m thinking today about how AI-driven assessments are starting to tell us whether social policy “fixes” that we implement today are actually achieving their intended results instead of speculating about their possible “pay offs” 10 or 20 years later. These new assessments can help us to determine “the returns on our investments” when we attempt to improve our society by (say) providing paternity leave for fathers, multiplying our social connections, or enhancing the stock of affordable housing in vibrant communities.

Artificial intelligence is already enabling us to identify and refine the variables for public policy success beforehand and to keep track of the resulting benefits in something that approaches real time. 

I’ve written here several times about how too many of us are failing to achieve the American Dream. Straightforwardly, that’s a question of whether our economy affords our nation’s children the opportunity to do better economically than their parents over succeeding generations. For Nobel Laureate Edmund Phelps, attaining that American Dream provides a “flourishing” that brings both us and our children psychic benefits (like pride and enhanced self-esteem) as well as greater prosperity and forward momentum. We calculate the likelihood of its returns in measures of opportunity, from having many opportunities to improve ourselves to having almost none available at all.

Over the past 90 years, for millions in the U.S. (or nearly everyone whose family is not in the top 20 percent income-wise), the quest to attain the American Dream has been disappointing at best, soul-crushing at worst. Our inability to reliably improve either our fortunes or those of the generations that succeed us unleashes a cascade of unfortunate consequences, such as widespread pessimism about the future, cynicism about our politics, an ever-widening gulf between economic “winners” and “losers,” a rise in “deaths of despair,” and a willingness to gamble on a leader who promises “a new golden age” but never reveals how anyone “who hasn’t gotten there already” will be able to reach it.

This chart, from a presentation at the Milken Institute by economist and Harvard professor Raj Chetty, includes income measures from tax returns for both parents and their children (both at age 30). Using AI tools, it tracks “the percent of children earning more than their parents” through the mid-1980s. (The overall percentage has not improved between then and now.) Today, as in 1985, it is “essentially a 50/50 coin flip as to whether you are going to achieve the American Dream.” (A link that will enable a closer view of this chart, along with others included here, is provided below.)

During the New Deal of the 1930s and Great Society of the 1960s, a raft of social programs was launched to give Americans “who worked hard and were willing to sacrifice for the sake of better tomorrows” greater opportunities to improve their circumstances and live to enjoy “the even greater success” of their children. Unfortunately, our prior attempts to engineer the “economic playing field” so that it delivers the American Dream more reliably have often been little better than “shots in the dark.”

For instance, many New Deal initiatives didn’t succeed until the economic engines of the Second World War kicked in. The “anti-poverty” programs of the Great Society bore fruit in some areas (such as voters’ rights) while causing unexpected consequences in others, like the weakening of low-income families when welfare checks effectively “replaced” fathers’ traditional roles as breadwinners. In those days, policymakers meant well but lacked the assessment tools to know whether their fixes were working until 10 or 20 years out, when they’d sometimes discover that the original problem persisted, or the collateral damage from the policy itself became evident.

Today, new policy-making tools are eliminating much of this guess-work. AI-driven data gathering, experimentation within different communities, and almost “real-time” assessments of progress have begun to transform the ways that new economic policies are developed and implemented.  Raj Chetty, the teacher and economist pictured here, is at the forefront of this sea change.

I’m profiling his work today because of the results he, his team and his fellow-travelers in this big-data-driven space are beginning to achieve. But this work also injects a note of optimism into an increasingly pessimistic time. Policy delivery like Chetty’s points towards a future with greater economic promise than the majority of us can see today, when inflation persists, tariffs threaten even higher prices, and government safety nets are dismantled without apparent gains in efficiency. What Chetty calls his “Recipes for Social Mobility” (including his starting point for the chart above) provides a methodical, evidence-based way to craft, implement and assess the durability of economic policies that could help to deliver the American Dream to millions of anxious families today.

In recent months, Chetty has been doing a kind of “road show” that profiles the early progress of his AI-driven approach. I heard a lecture of his online from New Haven three weeks ago, which led me to another talk that he gave during a 2024 conference held at the Milken Center for [yes] Advancing the American Dream in California. The slides and quotations today are from Chetty’s Milken Center presentation and can be given either a listen or a closer look via this link to it on YouTube.

After his first chart about “the fading American Dream,” Chetty presented an interactive U.S. map built upon meticulously assembled data that shows areas of the country where the children of low-income parents have “greater” or “lesser” chances at upward social and economic mobility. Essentially, his team gathered income data on 20 million children born in the 1980s to households earning $27k per year in order to determine how many of those children went on to earn more than their parents—adjusted for inflation—at age 35, localized to the parts of the country where they were living at the time.

Chetty’s Geography of Social Mobility chart.

You’ll notice—somewhat surprisingly—that in this snapshot, kids of low-income parents enjoyed the greatest upward mobility in Dubuque, Iowa while actually losing the most ground compared with their parents in Charlotte, North Carolina over this time frame. 
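The core of the mobility measure behind this map can be sketched in a few lines. This is only a hedged illustration of the calculation described above; the inflation factor, sample incomes and place names are invented for demonstration, not drawn from Chetty’s actual data or code.

```python
# Illustrative sketch of an upward-mobility metric: the share of children
# in each area who, at age 35, out-earned their inflation-adjusted parents.
from collections import defaultdict

CPI_ADJUSTMENT = 2.1  # hypothetical inflation factor, parents' era -> children's era

def upward_mobility_by_area(records):
    """records: iterable of (area, parent_income, child_income_at_35) tuples.
    Returns {area: share of children out-earning their inflation-adjusted parents}."""
    counts = defaultdict(lambda: [0, 0])  # area -> [children who out-earned, total children]
    for area, parent_income, child_income in records:
        beat_parents = child_income > parent_income * CPI_ADJUSTMENT
        counts[area][0] += beat_parents
        counts[area][1] += 1
    return {area: up / total for area, (up, total) in counts.items()}

# Invented sample data for two of the places named above.
sample = [
    ("Dubuque, IA", 27_000, 70_000),
    ("Dubuque, IA", 27_000, 62_000),
    ("Charlotte, NC", 27_000, 40_000),
    ("Charlotte, NC", 27_000, 75_000),
]
print(upward_mobility_by_area(sample))
# → {'Dubuque, IA': 1.0, 'Charlotte, NC': 0.5}
```

The per-area aggregation is what makes the interactive map possible: the same tallies can be rolled up by county, zip code or block.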

I had some additional reactions (beyond my amazement at the richness of the data painted here). For one thing, if I were on Chetty’s team, I would use colors other than “red” and “blue” to illustrate differences in upward mobility across the U.S. Using this color palette falls too easily (and unnecessarily) into our current Red and Blue state narratives, or exactly the kinds of prejudices that tying communities to actual data is meant to dispel.

While I watched Chetty talk about this slide, I also noticed that you can scan a bar code that allows you to examine places you might be curious about in closer detail (such as where you live) by entering your zip code when prompted. When I did so, I already suspected that a child’s shot at upward mobility would be relatively low in my Philadelphia neighborhood, but I was surprised to learn that it is far higher in many of the central Pennsylvania counties that have long been characterized as “a gun-loving, God-fearing slice of Alabama” between here and Pittsburgh.

While he spoke, Chetty highlighted “the microscopic views and comparisons” that a mapping tool like this allows, particularly when it confounds expectations. He described, for example, how appalled Charlotte’s civic leaders were to learn about their “worst place finish” in this assessment, and how it catalyzed new, similarly data-driven efforts to improve the prospects for that city’s children.

Chetty goes on to juxtapose this chart with an even more interesting one. At first glance one sees its similarities, but its differences are far more intriguing. 

Contrasting places in the U.S. where there is Economic Opportunity (or Upward Mobility) with places where there are greater or lesser amounts of Economic Connectedness and the kind of Social Capital that it produces.

The social capital that Chetty illustrates here is the same “commodity” that Bowling Alone’s Bob Putnam has been trying to build throughout his career, as described in my post a couple of weeks ago, “History Suggests that Better Days Could be Coming”. Putnam’s thesis goes like this: if you want to improve your community, state or nation, that drive begins by strengthening your in-person social connections, thereby increasing “the social capital” that’s available for spending when connected individuals wish to solve a problem or better their community’s circumstances. 

At its simplest, Chetty’s comparison chart shows those places in America where people from different socio-economic backgrounds are more or less connected to one another, and where there are greater or lesser accumulations of social capital as a result.

Chetty once again reminds us that localizing massive data sets in this manner allows those using these tools to dive even deeper into neighborhood, or even into street-by-street variations in both upward mobility and social capital. 

In his “economic connectedness” map, social capital accrues from the amount of “cross-class interaction” that occurs between high- and low-income people in each county, town and neighborhood in the U.S. This relationship is key because Chetty’s team had already established that “the single strongest predictor of your chances of rising up is how connected you [or those most in need of ‘upward mobility’] are to higher income folks,” as opposed to living in a place where nearly everyone is on the same rung of the economic ladder.

To compile this chart, Chetty collaborated with Mark Zuckerberg and Facebook’s “core data science team” to access the voluminous data the social network has gathered on the 72 million Americans who use the platform. He wanted to identify low-income users and determine how many “above median income friends” each one of them has, before breaking that aggregate snapshot down with his powerful mapping tool. 
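The “share of above-median-income friends” idea can be sketched as follows. This is a toy illustration of the measure described above, not the team’s methodology: the incomes, friendships and neighborhood names are all invented, and the real measure was built from Facebook’s friendship data at vastly larger scale.

```python
# Illustrative sketch of "economic connectedness": for each area, the average
# share of a below-median-income person's friends whose income is above the
# national median. All data below is invented for demonstration.
from statistics import median

def economic_connectedness(incomes, friendships, areas):
    """incomes: {person: income}; friendships: {person: [friends]};
    areas: {person: area}. Returns {area: average share of above-median
    friends among that area's below-median residents}."""
    cutoff = median(incomes.values())
    shares = {}  # area -> list of per-person friend shares
    for person, friends in friendships.items():
        if incomes[person] >= cutoff or not friends:
            continue  # the measure targets low-income people who have friends
        share = sum(incomes[f] > cutoff for f in friends) / len(friends)
        shares.setdefault(areas[person], []).append(share)
    return {area: sum(v) / len(v) for area, v in shares.items()}

incomes = {"a": 20_000, "b": 25_000, "c": 90_000, "d": 100_000}
friendships = {"a": ["c", "d"], "b": ["a"], "c": ["a"], "d": []}
areas = {"a": "West Philly", "b": "Kensington", "c": "West Philly", "d": "Center City"}
print(economic_connectedness(incomes, friendships, areas))
# → {'West Philly': 1.0, 'Kensington': 0.0}
```

Keyed by area, the same aggregation can be pushed down to the neighborhood or street-by-street views that Chetty describes.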

Connections across income classes produce opportunities “like getting a job referral, or an internship.” But Chetty also identified an “aspirational” component when members of different economic classes interact with one another on a regular basis.

If you’ve never met somebody who went to college, you don’t think about that as a possibility for you. If you’re in a community where you’ve seen more people succeed in certain career pathways, that can change kids’ lives…

Once again, a few of my reactions to the comparisons these big-data snapshots invite. 

A detailed view of the mid-Atlantic in general, and Philadelphia in particular, on Chetty’s mapping of Economic Connectedness.

Despite Philadelphia’s “relatively weak” score on upward mobility, I was also not surprised that my part of the state ranks as “relatively strong” (or a medium shade of blue) when it comes to the social capital that’s produced by our economic connectedness. Among many other things, that means those of us in Southeastern Pennsylvania already have a relatively-strong foundation for driving greater upward mobility, along with more helpful data about our localized advantages and challenges as we dig deeper into our particular blocks on this map.  

On the other hand, I found the social policy solution that Chetty profiled in his talk somewhat disappointing, although it seemed to me that the experimental template that gave rise to it would be a serviceable-enough incubator for additional policies going forward. 

He describes at length a test study his team initiated in Seattle involving low-income households with subsidized (Section 8) housing vouchers. Their first discovery was that most voucher holders try to use them in their own communities, with little or no gain in economic connectedness. They then realized that while real-estate brokers are commonly used for finding places to live in higher-income communities, their equivalent is non-existent for those who want to get “the most bang for the buck” out of the $2,500 credit in one of these housing vouchers.

Chetty’s team concluded that if a sponsor (e.g. a local government, for-profit or non-profit) wanted to build social capital for low-income households, it could spend what amounted to 2% of the value of each voucher to hire “brokers” to help low-income residents find housing in communities with greater economic connectedness than the uniformly impoverished neighborhoods where most of them lived. 

This solution was affordable and it quickly built social capital for low income individuals, but even under the best of circumstances it is unlikely to impact enough households because of the limited amounts of affordable housing in most higher income communities, a fact that Chetty readily admits:

I don’t want to give the impression that I think the desegregation approach, moving people to different areas, is the only thing we should do. Obviously, that’s not going to be a scalable approach in and of itself.

But this demonstration of how to engineer a social policy illustrates the potential for modeling and testing reforms that can attract “smarter, evidence-driven investments” as mapping tools like these are refined and used by more policy makers. 

Chetty’s Seattle experiment also puts a spotlight on social programs that increase economic connectedness. While the parents who were able to move from low income communities to mixed income neighborhoods surely had an opportunity to realize gains in social capital, it’s their children who stood to benefit the most from more diverse schools, better playgrounds and exposure to career options they might never have considered before.

What motivates Chetty, his team and his hosts at the Milken Institute the most are the opportunities that these AI-driven, data-rich tools will be presenting in the very near future to the millions who are pursuing the American Dream but failing to achieve it.

Decades ago, a civil rights organization that sought to open pathways towards college and upward mobility adopted a memorable motto: “A mind is a terrible thing to waste.”

With a conclusion as obvious as that in mind, I’ll give Raj Chetty’s final presentation slide some of the last words here about assets that we’ve been wasting for far too long.

The box reads: “If women, minorities, and children from low-income families invent at the same rate as high-income white men, the innovation rate in America would quadruple.”

I guess I would prefer to make this slide more powerful still.

It’s true that we’re wasting many of our most valuable people-assets in the U.S. today, but “delivering the American Dream more reliably” is not the legacy of “high-income white men.” First off, many of our most successful innovators today aren’t “white” but are people of color, immigrants and their descendants (like Chetty himself). Moreover, this is an 80%-of-America-size problem (or everyone who’s NOT in the top 20% income-wise), not a burden that’s only carried by previously marginalized communities. I believe that Chetty’s ground-breaking work will attract the base of support that it deserves if slides like this are modified to reflect the true magnitude of our Lost Einsteins. So I don’t know how Chetty’s team quantified the “lost opportunities” highlighted here as “quadruple” the number of our current innovators, but I’d wager that’s an undercount.

+ + +

For those who are interested, I’ve written about our frustrated pursuit of the American Dream several times before. These posts include: 

  • “The Great Resignation is an Exercise in Frustration and Futility” (citing data that government management of the economy has caused our middle and lower classes to realize essentially the same income due to government transfer payments, arguing that perverse incentives such as “these redistributions of wealth also stifle upward mobility”);
  • “Let’s Revitalize the American Dream” (citing a 2015 study that found the U.S. ranks “among the lowest of all developed countries in terms of the potential for upward mobility despite clinging to the mythology of Horatio Alger”); and
  • “America Needs a Rebranding Campaign” (If “equality of opportunity” is really our touchstone as a nation, then it “needs to infuse every brand touchpoint” of ours, including our “packaging, public relations, advertising, services, partnerships, social responsibility, HR & recruitment, loyalty programs, events & activations, user experience, sourcing & standards, and product portfolio.” In other words, America needs “to start walking the equality-of-opportunity walk,” instead of just talking about it.)

This post was adapted from my March 9, 2025 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning, and sometimes I post the content from one of them here in lightly edited form. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: Ai, American dream, artificial intelligence, economic connectedness, economic opportunity, Lost Einsteins, Milken Center for Advancing the American Dream, powerful mapping tools, Raj Chetty, social capital, upward mobility

Will We Domesticate AI in Time?

February 9, 2024 By David Griesing Leave a Comment

As you know from holiday post-cards, I spent time recently with Emily and Joe in Salt Lake City. They were full, rich, activating days and I’m still sorting through the many gifts I received.

The immensity of the mountains in their corner of Utah was impossible to ignore, though I wasn’t there to ski them so much as to gaze up in wonder at their snow-spattered crowns for the first time. Endless, low-level suburbs also extend to their foothills in every direction, and while everyone says “It’s beautiful here” they surely mean “the looking up” and not “the looking down.” 

Throughout my visit, SLC’s sprawl was a reminder of how far our built-environments fall short of our natural ones in a freedom-loving America that disdains anyone’s guidance on what to build and where. Those who settled here have filled this impossibly majestic valley with an aimless, caramel-and-rosy low-rise jumble that from any elevation has a fog-topping of exhaust in the winter and (most likely) a mirage-y shimmer of vaporous particulates the rest of the year. 

So much for Luke 12:48 (“To whom much has been given….”)

Why not extrapolations of the original frontier towns, I wondered, instead of this undulating wave of discount centers, strip malls, and low-slung industrial and residential parking lots that have taken them over in every direction? 

I suppose they’re the tangible manifestation of resistance to governmental guidance (or really any kind of collective deliberation) on what we can or can’t, should or shouldn’t be doing when we create our homelands—leaving it to “the quick buck” instead of any consciously-developed, long-term vision to determine what surrounds us. 

Unfortunately, I fear that this same deference to freedom (or perhaps more aptly, to its “free-market forces”) may be just as inevitable when it comes to artificial intelligence (or AI). So I have to ask: Instead of making far, far less than we could from AI’s similarly awesome possibilities, why not commit to harnessing (and then nurturing) this breathtaking technology so we can achieve the fullest measure of its human-serving potential?  

Unfortunately, my winter days beneath a dazzle of mountain ranges showed me how impoverished this end game could also become.

Boris Eldagsen submitted this image, called “Pseudomnesia: The Electrician” to a recent, Sony world photography competition. When he won in the contest’s “creative open” category, he revealed that his image was AI-generated, going on to donate his prize money to charity. As Eldagsen said at the time: “Is the umbrella of photography large enough to invite AI images to enter—or would that be a mistake? With my refusal of the award, I hope to speed up this debate” about what is “real” and “acceptable” in the art world and what is not.

The debate over that and similar questions should probably begin with a summary appreciation of AI’s nearly-miraculous as well as fearsomely-catastrophic possibilities. Both were given a preview in a short interview with the so-called “Godfather of AI,” Geoffrey Hinton, on the 60 Minutes TV-newsmagazine a couple of months ago. 

Looking a bit like Dobby from the Harry Potter films, Hinton’s sense of calm and perspective offered as compelling a story as I’ve heard about the potential up- and down-sides of “the artificial neural networks” that he first helped to assemble five decades ago. Here are a few of its highlights:

  • after the inevitable rise of AI, humans will become the second most intelligent beings on Earth. For instance, in 5 years Hinton expects “that ChatGPT might well be able to reason better than us”;
  • AI systems can already understand. For example, even with the autocomplete features that interrupt us whenever we’re texting and emailing, the artificial intelligence that drives them has to “understand” what we’ve already typed as well as what we’re likely to add in order to offer up its suggestions;
  • Through trial and error, AI systems learn as they go (i.e. machine learning) so that the system’s “next” guess or recommendation is likely to be more accurate (or closer to what the user is looking for) than its “last” response. That means AI systems can improve their functioning without additional human intervention. Among other things, this capacity gives rise to fears that AI systems could gain certain advantages over or even come to dominate their human creators more generally as they continue to get smarter.
  • Hinton is proud of his contributions to AI-system development, especially the opportunities it opens in health care and in developing new drug-treatment protocols. But in addition to AI’s dominating its creators, he also fears for the millions of workers “who will no longer be valued” when AI systems take over their jobs, the even broader dissemination of “fake news” that will be turbo-charged by AI, as well as the use of AI-enabled warriors on tomorrow’s battlefields. Because of the speed of system advancements, he urges global leaders to face these challenges sooner rather than later. 
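Hinton’s point about systems that “learn as they go” can be made concrete with a toy example. What follows is only an illustrative sketch of my own (nothing Hinton describes, and not how production autocomplete actually works): a model that counts which word has followed each word so far, so that its next-word suggestion can improve as it observes more text, with no additional human intervention.

```python
from collections import defaultdict

class ToyAutocomplete:
    """A toy 'learn as you go' model: it counts which word has followed
    each word so far, so its next-word suggestion can improve as it
    observes more text -- with no extra human tuning."""

    def __init__(self):
        # counts[prev][nxt] = how often `nxt` has followed `prev` so far
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def suggest(self, prev):
        followers = self.counts[prev.lower()]
        if not followers:
            return None  # nothing learned about this word yet
        # the most frequently seen continuation so far
        return max(followers, key=followers.get)

model = ToyAutocomplete()
model.observe("the cat sat on the mat")
model.observe("the cat purred and the cat slept")
print(model.suggest("the"))  # prints "cat" -- the most-seen continuation
```

Each new sentence the model “observes” updates its counts, so later suggestions reflect more evidence than earlier ones; real systems replace the word counts with neural networks, but the feedback loop is the same in spirit.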

Finally, Hinton argues for broader experimentation and regulation of AI outside of the tech giants (like Microsoft, Meta and Google). Why? Because these companies’ primary interest is in monetizing a world-changing technology instead of maximizing its potential benefits for the sake of humanity. As you undoubtedly know, over the past two years many in the scientific-research, public-policy and governance communities have echoed Hinton’s concerns in widely-publicized “open letters” raising alarm over AI’s commercialization today.

Hinton’s to-do list is daunting, particularly at a time when many societies (including ours) are becoming more polarized over what constitutes “our common goods.” Maybe identifying a lodestar we could all aim for eagerly–like capitalizing on the known and (as yet unknown) promises of AI that can benefit us most–might help us to find some agreement as the bounty begins to materialize and we begin to wonder how to “spend” it. Seeing a bold and vivid future ahead of us (instead of merely the slog that comes from risk mitigation) might give us the momentum we lack today to start making more out of AI’s spectacular frontier instead of less. 

So what are the most thoughtful among us recommending in these regards? Because, once again, it will be easier to limit some of our freedoms around a new technology with tools like government regulation and oversight if we can also envision something that truly dazzles us at the end of the long, domesticating road.

Over the past several months, I’ve been following the conversation—alarm bells, recommended next steps, more alarm bells—pretty closely and it’s easy to get lost in the emotional appeals and conflicting agendas.  So I was drawn this week to the call-to-action in a short essay entitled “Why the U.S. Needs a Moonshot Mentality for AI—Led by the Public Sector.”  Its engaging appeal, co-authored by Fei-Fei Li and John Etchemendy at the Stanford Institute for Human-Centered Artificial Intelligence, is the most succinct and persuasive one I’ve encountered on what we should be doing now (and encouraging others with influence to be doing) if we want to shower ourselves with the full range of AI’s benefits while minimizing its risks.

Their essay begins with a review of the nascent legislative efforts that are currently underway in Congress to place reasonable guardrails around the most apparent of AI’s misguided uses.  A democratic government’s most essential function is to protect its citizens from those things (like foreign enemies during wartime) that only it can protect us from. AI poses that category of individual and national threat in terms of spreading disinformation, and the authors urge quick action on some combination of the pending legislative proposals.

Li and Etchemendy then talk about the parties that are largely missing from the research labs where AI is currently being developed.

As we’ve done this work, we have seen firsthand the growing gap in the capabilities of, and investment in, the public compared with private sectors when it comes to AI. As it stands now, academia and the public sector lack the computing power and resources necessary to achieve cutting edge breakthroughs in the application of AI.

This leaves the frontiers of AI solely in the hands of the most resourced players—industry and, in particular, Big Tech—and risks a brain drain from academia. Last year alone, less than 40% of new Ph.D.s in AI went into academia and only 1% went into government jobs.

The authors are also justifiably concerned by the fact that policy makers in Washington have been listening, almost exclusively, to commercial AI developers like Sam Altman and Elon Musk and not enough to leaders from the academy and civil society. They are, if anything, even more outraged by the fact that “America’s longstanding history of creating public goods through science and technology” (think of innovations like the internet, GPS, MRIs) will be drowned out by the “increasingly hyperbolic rhetoric” that’s been coming out of the mouths of some “celebrity Silicon Valley CEOs” in recent memory.

They readily admit that “there’s nothing wrong with” corporations seeking profits from AI. The central problem is that those who might approach the technology “from a different [non-commercial] angle [simply] don’t have the [massive] computing power and resources to pursue their visions” that the profit-driven have today. It’s almost as if Li and Etchemendy want to level the playing field and introduce some competition between Big Tech and those who are interested (but currently at a disadvantage) in the academy and the public sector over who will be the first to produce the most significant “public goods” from AI.

Toward that end:

We also encourage an investment in human capital to bring more talent to the U.S. to work in the field of AI within academia and the government.

[W]hy does this matter? Because this technology isn’t just good for optimizing ad revenue for technology companies, but can fuel the next generation of scientific discovery, ranging from nuclear fusion to curing cancer.

Furthermore, to truly understand this technology, including its sometimes unpredictable emergent capabilities and behaviors, public-sector researchers urgently need to replicate and examine the under-the-hood architecture of these models. That’s why government research labs need to take a larger role in AI.

And last (but not least), government agencies (such as the National Institute of Standards and Technology) and academic institutions should play a leading role in providing trustworthy assessments and benchmarking of these advanced technologies, so the American public has a trusted source to learn what they can and can’t do. Big tech companies can’t be left to govern themselves, and it’s critical there is an outside body checking their progress.

Only the federal government can “galvanize the broad investment in AI” that produces a level-playing field where researchers within our academies and governmental bodies can compete with the brain trusts within our tech companies to produce the full harvest of public goods from a field like AI. In their eyes it will take competitive juices (like those unleashed by Sputnik which took America to the moon a little more than a decade later) to achieve AI’s true promise.  

If their argument piques your interest like it did mine, there is a great deal of additional information on the Stanford Institute site where the authors profile their work and that of their colleagues. It includes a three-week, on-line program called AI4ALL where those who are eager to learn more can immerse themselves in lectures, hands-on research projects and mentoring activities; a description of the “Congressional bootcamp,” offered to representatives and their staffs last August and likely to be offered again; and the Institute’s white paper on building “a national AI resource” that will provide academic and non-profit researchers with the computing power and government datasets needed for both education and research.

To similar effect, I also recommend this June 12, 2023 essay in Foreign Policy. It covers some of the same territory as these Stanford researchers and similarly urges legislators to begin to “reframe the AI debate from one about public regulation to one about public development.”

It doesn’t take much to create a viral sensation, but when they were published these AI-generated images certainly created one. Here’s the short story behind “Alligator-Pow” and “-Pizza.” At some point in the future, we could look back to the olden days when AI’s primary contributions were to make us laugh or to help us to finish our text messages.

Because we’ll (hopefully) be reminiscing in a future when AI’s bounty has already changed us in far more profound and life-affirming ways.

If the waves of settlers in Salt Lake City had believed that they could build something that aspired to the grandeur of the mountains around them—like the cathedrals of the Middle Ages or even some continuation of the lovely and livable villages that many of them had left behind in Northern Europe—they might not have “paved Paradise and put up a parking lot” (as one of their California neighbors once sang).

In similar ways, having a worthy vision today, and one that’s realized by the right gathering of competitors, could make the necessary difference when it comes to artificial intelligence.

So will we domesticate AI in time? 

Only if we can gain enough vision to take us over the “risk” and “opportunity” hurdles that are inhibiting us today. 

This post was adapted from my January 7, 2024 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning, and sometimes I post the content from one of them here. You can subscribe (and not miss any) by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: Ai, artificial intelligence, Fei-Fei Li, Geoffrey Hinton, John Etchemendy, making the most out of an opportunity, Stanford Institute for Human-Centered Artificial Intelligence

Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing Leave a Comment

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled, personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the right of privacy that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on to these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything that they need to know about us?

Overwhelmed by the draw of  “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.”  Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to relinquish the advantages of these platforms when they are unavailable elsewhere. 
 
So is there really “no way out”?  
 
A rising crescendo of voices is gradually finding a way, and they are coming at it from several different directions.
 
In places like Toronto (London, Helsinki, Chicago and Barcelona) policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1.         Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past six months, I’ve talked repeatedly about smart-city technologies, and one of you reached out to me this week wondering: “What (exactly) are these new ‘technologies’?” (Thanks for your question, George!)  
 
As a general matter, smart-city technologies gather and analyze information about how a city functions, while improving urban decision-making around that new information. Throughout, these data-gathering,  analyzing, and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
 
Of course, the potential benefits of greater or more equitable access to city services as well as their optimized delivery are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that’s currently being wasted, permitting their reinvestment into areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve been extolling the optimism that drove Toronto to launch its smart-cities initiative called Quayside and how its debate has entered a stormy patch more recently. Amidst the finger pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-undisclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a City-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart cities technology vendors to announce its intentions up front about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective– why is the AI needed and what outcomes is it intended to enable?
  • Use– in what processes and circumstances is the AI appropriate to be used?
  • Impacts– what impacts, good and bad, could the use of AI have on people?
  • Assumptions– what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data– what data is/was the AI trained on and what are its limitations and potential biases?
  • Inputs– what new data does the AI use when making decisions?
  • Mitigation– what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics– what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight– what human judgment is needed before acting on the AI’s output and who is responsible for ensuring its proper use?
  • Evaluation– how and by what criteria will the effectiveness of the AI in this smart-city system be assessed and by whom?
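Purely as an illustration of my own (not anything Smart London publishes, and with field names that are my own shorthand rather than an official schema), the ten questions can be treated as a simple checklist data structure, so a review body could track which answers a proposed smart-city system still lacks:

```python
# Hypothetical encoding of the ten review questions above;
# the keys are my own shorthand, not an official Smart London schema.
REVIEW_QUESTIONS = [
    "objective", "use", "impacts", "assumptions", "data",
    "inputs", "mitigation", "ethics", "oversight", "evaluation",
]

def missing_answers(review: dict) -> list:
    """Return the review questions a proposal has not yet answered."""
    return [q for q in REVIEW_QUESTIONS if not review.get(q)]

# A (hypothetical) proposal that has answered only two of the ten:
proposal = {
    "objective": "optimize traffic-light timing to cut congestion",
    "data": "anonymized loop-sensor counts from 2018-2019",
}
print(missing_answers(proposal))  # the eight questions still open
```

A checklist like this obviously can’t judge the quality of the answers; the point is only that an explicit, shared structure makes it visible when a proposal has skipped a question entirely.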

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, the initial ventilation process takes a long time and hard work. Moreover, it is difficult (and maybe impossible) to conduct if negotiations with the technology vendor are on-going or that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational tech-driven opportunities are presented to the public.

(AP Photo/David Goldman)

2.         A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process where people who never formed a full-blown preference on an issue like “personal data privacy” (or were simply reluctant to express it because the trade-offs for “free” platforms seemed acceptable to everybody else), now become more comfortable giving voice to their original qualms. In support, he cites a remarkable study about how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work-lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about the issues and to champion what is important to us in dialogue with the people who live and work alongside us. More grounds for protecting our personal information are coming out of the smart-cities debate and we are already deciding where new privacy lines should be drawn around us. 

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: Ai, artificial intelligence, Cass Sunstein, dataveillance, democracy, how change happens, norms, personal data brokers, personal privacy, privacy, Quayside, Sidewalk Labs, smart cities, Smart City, surveillance capitalism, Toronto, values

New Starting Blocks for the Future of Work

March 10, 2019 By David Griesing Leave a Comment

(picture by Edson Chagas)

As a challenging future rushes towards us, I often wonder whether our democratic values will continue to provide a sound enough foundation for our lives and work.
 
In many ways, this “white-water world” is already here. As framed by John Seely Brown in a post last summer, it confronts us with knowledge that’s simply “too big to know” and a globe-spanning web of interconnections that seems to constantly alter what’s in front of us, like the shifting views of a kaleidoscope.
 
It’s a brave new world that:

– makes a fool out of the concept of mastery in all areas except our ability–or inability–to navigate [its] turbulent waters successfully;
 
– requires that we work in more playful and less pre-determined ways in an effort to keep up with the pace of change and harness it for a good purpose;
 
– demands workplaces where the process of learning allows the tinkerer in all of us “to feel safe” from getting it wrong until we begin to get it right;
 
– calls on us to treat technology as a toolbox for serving human needs as opposed to the needs of states and corporations alone;  and finally,
 
– requires us to set aside time for reflection “outside of the flux” so that we can consider the right and wrong of where we’re headed, commit to what we value, and return to declare those values in the rough and tumble of our work tomorrow.

In the face of these demands, the most straightforward question is whether we will be able to safeguard our personal wellbeing and continue to enjoy a prosperous way of life. Unfortunately, neither of these objectives seems as readily attainable as it once did.
 
When our democratic values (such as freedom and championing individual rights) no longer ensure our wellbeing and prosperity, those values get questioned and eventually challenged in our politics.
 
Last week, I wrote here about the dangerous risks—like addiction and behavioral modification—that our kids and others confront by spending too much screen time playing on-line games like Fortnite. Despite a crescendo of anecdotal evidence about the harms to boys in particular, the freedom-loving (and endlessly distracted) West seems stymied when it comes to deciding what to do about it. China, on the other hand, easily moved from identifying the harm to its collective wellbeing to implementing restrictions on the amount of on-line play. It was the Great Firewall's ability to intervene quickly that prompted one observer to wonder how those of us in the so-called "first world" will respond to "the spectacle of a civilisation founded [like China's] on a very different package of values — but one that can legitimately claim to promote human flourishing more vigorously than their own"?
 
Meanwhile, in a Wall Street Journal essay last weekend, its authors documented the ability of authoritarian countries with capitalist economies to raise the level of prosperity enjoyed by their citizens in recent years. Not so long ago, the allure of the West to the "second" and "third" worlds was that prosperity seemed to go hand-in-hand with democratic values and institutions. That conclusion is far less clear today. With rising prosperity in authoritarian nations like China and Vietnam—and the likelihood that there will soon be far more prosperous citizens in these countries than outside of them—the authors fretted that:

It isn’t clear how well democracy, without every material advantage on its side, will fare in the competition [between our very different value systems.]

With growing uncertainty about whether Western values and institutions can produce sufficient benefits for its citizens, and with “the white-water world” where we live and work challenging our navigational skills, it seems a good time to return to some questions that we’ve chewed on here before about “how we can best get ready for the challenges ahead of us.” 
 
Can the ways that we educate our kids (and retrain ourselves) enable us to proclaim our humanity, secure our self-worth, and continue to find a valued place for ourselves in the increasingly complex world of work? 
 
Can championing new teaching methods strengthen democratic values and deliver more of their promise to us in terms of wellbeing and prosperity than it seems we can count on today?
 
Are new and different classrooms the keys to our futures?

1.         You Treasure What You Measure

Until this week, I never considered that widely administered education tests would provide any of these answers—but I probably should have—because in a very real way, we treasure the aptitudes and skills, indeed everything that we take the time to measure. Gross national product, budget and trade deficits, unemployment rates, the 1% versus everyone else: what is most important to us is endlessly calculated, publicized and analyzed. We also value these measures because they help us decide what to do next, like stimulating the economy, cutting government programs, or implementing trade restrictions. Measures influence actions.
 
It’s much the same with the measures we obtain from the educational tests that we administer, and in this regard, no test today is more influential than the Programme for International Student Assessment or PISA. PISA was first given in 2000 in 32 countries—the first time that national education systems were evaluated and could be compared with one another. The test measured 15-year-olds’ scholastic performance in mathematics, science and reading. No doubt you’ve heard some of the results, including the United States’ disappointing placement in the middle of the international pack. The test is given every three years, and in 2018, 79 countries and economies participated in the testing and data collection.
 
According to an article this week in the on-line business journal Quartz, “the results…are studied by educators the way soccer fans obsess over the World Cup draw.” 
 
No one thinks more about the power of the PISA test, the information that it generates, and what additional feats it might accomplish than Andreas Schleicher, a German data scientist who heads the education division of the Organisation for Economic Cooperation and Development (OECD) which administers PISA worldwide.

Andreas Schleicher

Schleicher downplays the role that the PISA has played in shaming low performing countries, preferring the test’s role in mobilizing national leaders to care as much about teaching and learning as they do about economic measures like unemployment rates and workplace productivity. At the most basic level, PISA data has supported a range of conclusions, including that class size seems largely irrelevant to the learning experience and that what matters most in the classroom is “the quality of teachers, who need to be intellectually challenged, trusted, and have room for professional growth.”

Schleicher also views the PISA as a tool for liberating the world’s educational systems from their single-minded focus on subjects like science, reading and math and towards the kinds of “complex, interdisciplinary skills and mindsets” that are necessary for success in the future of work. We are afraid that human jobs will be automated but we are still teaching people to think like machines. “What we know is that the kinds of things that are easy to teach, and maybe easy to test, are precisely the kinds of things that are easy to digitize and to automate,” Schleicher says.

To help steer global education out of this rut, he has pushed for the design and administration of new, optional tests that complement the PISA. Change the parameters of the test, change the skills that are measured, and maybe the world’s education-based priorities will change too. Says Schleicher: “[t]he advent of AI [or artificial intelligence] should push us to think harder [about] what makes us human” and lead us to teach to those qualities, adding that if we are not careful, the world’s nations will continue to educate “second-class robots and not first-class humans.”

Schleicher had this future-oriented focus years before the PISA was initially administered.

In 1997, Schleicher convened a group of representatives from OECD countries, not to discuss what could be tested, but what should be tested. The idea was to move beyond thinking about education as the driver of purely economic outcomes. In addition to wanting a country’s education system to provide a ready workforce, they also wondered whether they could nurture young people to help make their societies more cohesive and democratic while reducing unfairness and inequality. According to Quartz:

The group identified three areas to explore: relational, or how we get along with others; self, how we regulate our emotions and motivate ourselves, and content, what schools need to teach.

Instead of simply enabling students to respond to the demands of a challenging world, Schleicher and others in his group wanted national testing to encourage the kinds of skill building that would enable young people to change the world they’d be entering for the better.   

Towards this end, Schleicher’s team began to develop assessments for independent thinking and the kinds of personal skills that contribute to it. The technology around test administration enabled the testers to see how students solve problems in real time, not simply whether they get them right or wrong. They gathered and shared data that enabled national education systems to “help students learn better and teachers teach better and schools to become more effective.”  Assessments of the skill sets around independent thinking encouraged countries to begin to see new possibilities and want to change how students learn in their classrooms. “If you don’t have a north star [like this], perhaps you limit your vision,” he says.

For the past twenty years, Schleicher’s north stars have also included students’ quest to find meaning in what they are doing and to exercise their agency in determining what and how they learn. He is convinced that people have the “capacity to imagine and build things of intrinsic positive worth.”  We have skills that robots cannot replace, like managing between black and white, integrating knowledge, and applying knowledge in unique situations. All of those skills can be tested (and encouraged), along with the skill that is most unique about human beings, namely:

our capacity to take responsibility, to mobilize our cognitive and social and emotional resources to do something that is of benefit to society. 

What Schleicher and his testing visionaries began to imagine in 1997 has been gradually introduced as optional tests that focus on problem-solving, collaborative problem-solving, and most recently, so-called “global competencies” like open-mindedness and the desire to improve the world. In 2021, another optional test will assess flexibility in thinking and habits of creativity, like being inquisitive and persistent.

One knowledgeable observer of these initiatives, Douglas Archibald, credits Schleicher with “dramatically elevating” the discussion about the future of education. “There is no one else bringing together people in charge of these educational systems to seriously think about how their systems [can be] future proofed,” says Archibald. But he and others also see a hard road ahead for Schleicher, with plenty of resistance from within the global education community.   

Some claim that he is asking more from a test than he should. Others claim that his emphasis fosters an over-reliance on testing at the expense of other priorities. Regarding the “global competencies” assessment, for example, 40 of the 79 participating countries opted not to administer it. But Schleicher, much like visionaries in other fields, remains undaunted. Nearly half of the countries are exercising their option to assess “global competencies,” and even more are administering the other optional tests that Schleicher has helped develop. Maybe educators are slowly becoming convinced that the threat to human work in a white-water world is too serious to be ignored any longer.

A view from Kenneth Robinson’s presentation: “Changing Education Paradigms”

While Schleicher and his allies are in the vanguard of those who are using a test to prompt a revolution in education, they are hardly the only ones to challenge a teaching model that, for far too long, has only sought to produce a dependable, efficient and easily replaceable workforce. The slide above is from Sir Kenneth Robinson’s much-heralded (and well worth a look) 2010 video called “Changing Education Paradigms.” In it, he also champions teaching that enables uniquely human contributions that no machine can ever replace.
 
Schleicher, Robinson and others envision education systems that prepare young people (or re-engineer older ones) for a complex and ever-shifting world where no one has to be overwhelmed by the glut of information or the dynamics of shifting networks but can learn how to navigate today’s challenges productively. They highlight and, by doing so, champion teaching methods that help to prepare all of us for jobs that provide meaning and a sense of wellbeing while amplifying and valuing our uniquely human contributions.

Schleicher is also helping to modify our behavior by championing skills like curiosity about others and empathy that can make us more engaged members of our communities and commit us to improving them. Assessing these skills in national education tests says both loudly and clearly that these skills are important for human flourishing too. Indeed, this may be Schleicher’s and OECD’s most significant contribution. Their international testing is encouraging the skills and changes in behavior that can build better societies, whether they are based on the democratic values of the West or the more collective and less individual ones of the East. 

That is no small thing. No small thing at all.

This post is adapted from my March 10, 2019 newsletter.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: Ai, Andreas Schleicher, artificial intelligence, automation, democratic values, education, education testing, human flourishing, human work, OECD, PISA, Programme for International Student Assessment, skills assessment, values, work, workforce preparation

