David Griesing | Work Life Reward Author | Philadelphia


Using AI to Help Produce Independent, Creative & Resilient Adults in the Classroom

September 10, 2025 By David Griesing Leave a Comment

I learned something that made me smile this week. 

An innovator at the leading edge of American education and technology (or “ed-tech”) named Steve Hargadon picked up on a thesis I’d advanced some time ago in “The Amish Test & Tame New Technologies Before Adopting Them & We Can Learn How to Safeguard What’s Important to Us Too” and applied it to the use of AI in our classrooms today.

For both better and worse, we’ve let marketers like Google (in search), Facebook (in social media), and Apple (in smart phones) decide how we integrate their products into our lives—usually by dropping them on top of us and letting each new user “figure it out.”

But instead of being left with the hidden costs to our privacy, attention and mental health, we could decide how to maximize these products’ benefits and limit their likely harms before we get hooked on them—the kind of assessment that groups like the Amish always undertake, as another innovator in this space (Wired co-founder Kevin Kelly) noted several years ago.

To further the discussion about our use of technology generally and ed-tech in particular, I’ll briefly review the conclusions in my Test & Tame post and summarize a more recent one (“Will AI Make Us Think Less or Think Better”), before considering Hargadon’s spot-on proposals in “Intentional Education with AI: The Amish Test and Generative Teaching.”

The traditional Pennsylvania Amish work their farms and small businesses at a surprisingly short distance from Philadelphia. When I venture out of town for an outing it’s often to Lancaster County, where the car I’m in is quickly cheek by jowl with a horse and buggy, and families hang their freshly washed clothes on lines extending from back doors instead of cramming them into clothes dryers. It’s hard to miss their strikingly different take on the “modern conveniences” that benefit as well as burden the rest of us. What Kelly and others pointed out was that the Amish manage their use of marvels like cars or dryers as a community, instead of as individuals.

I described the difference this way:

As consumers, we feel entitled to make decisions about tech adoption on our own, not wishing to be told by anybody that ‘we can buy this but can’t buy that,’ let alone by authorities in our communities who are supposedly keeping ‘what’s good for us’ in mind. Not only do we reject a gatekeeper between us and our ‘Buy’ buttons, there is also no Consumer Reports that assesses the potential harms of [new] technologies to our autonomy as decision-makers, our privacy as individuals, or our democratic way of life — no resource that warns us ‘to hold off’ until we can weigh the long-term risks against the short-term rewards. As a result, we defend our unfettered freedom until we start discovering just how terrible our freedom can be.

By contrast, the Amish hold elaborate “courtship rituals” with a new technology before deciding to embrace some or all of its features—for example, sharing use of the internet through a device that all can access when it’s needed INSTEAD OF owning personal access via a smartphone you keep in your pocket. They reach a consensus like this through extensive testing of smartphone use & social media access within their community, appreciating over time its risks in terms of “paying attention” generally, or “self-esteem among the young” more particularly, before a gatekeeper (like a bishop) decides what, if any, accommodation with these innovations seems “good” for all.

The community’s most important values are key to arriving at this “testing & taming” consensus. The Amish openly question whether the next innovation will strengthen family and community bonds or cause users to abandon them. They wonder about things as “small & local” as whether a new technology will enable them to continue to have every meal of the day with their families, which is important to them. And they ask whether a phone or social media platform will increase or decrease the quality of family time together, perhaps their highest priority. The Amish make tech use conform to their values, or they take a pass on its use altogether. As a result,  

the Amish are never going to wake up one day and discover that a generation of their teenagers has become addicted to video games; that smartphones have reduced everyone’s attention span to the next externally-generated prompt; or that surveillance capitalism has ‘suddenly’ reduced their ability to make decisions for themselves as citizens, shoppers, parents or young people.

When I considered Artificial Intelligence’s impacts on learning last month, I didn’t filter the pros & cons through any community’s moral lens, as in: what would most Americans say is good for their child to learn and how does AI advance or frustrate those priorities? Instead, I merely juxtaposed one of the primary concerns about AI-driven chatbots in the classroom with one of their most promising up-sides. On the one hand, when an AI tool like ChatGPT effectively replaces a kid’s thinking with its own, that kid’s ability to process information and think critically quickly begins to atrophy. On the other hand, when resource-rich AI tutors are tailored to students’ particular learning styles, we’re discovering that these students “learned more than twice as much as when they engaged with the same content during [a] lecture…with personalized pacing being a key driver of success.”

We’re also realizing that giving students greater control over their learning experience through “personalized on-demand design” has:

allowed them to ask as many questions as they wished and address their personal points of confusion in a short period of time. Self-pacing meant that students could spend more time on concepts they found challenging and move quickly through material they understood, leading to more efficient learning….

Early experience with AI tutors has also changed the teacher’s role in the classroom. While individualized tutoring by chatbots will liberate teachers to spend more time motivating students and being supportive when frustrations arise,

our educators will also need to supervise, tweak and even design new tutorials. Like the algorithms that adapt while absorbing new data, they will need to continuously modify their interventions to meet the needs of their students and maximize the educational benefits.

Admittedly, whether America’s teachers can evolve into supervisors and coaches of AI-driven learning in their classrooms—to in some ways, become “even smarter than [these] machines”— is a question “that will only be answered over time.”

Meanwhile, Steve Hargadon asks an even more fundamental question in his recent essay. Like the Amish, he wonders:

What is our most important priority for American students today, and how can these new AI capabilities help us to produce the adults that we want and that an evolving American community demands?

In what I call his “foggy window paintings,” photographer Jochen Muhlenbrink
finds the clarity through the condensation (here and above). I borrowed his inspiration
in a photo that I took of our back door one night a few years back (below).

Hargadon begins by acknowledging a lack of consensus in the American educational community, which startled me initially but which I (sadly) came to realize is all-too-true.

Unlike most Amish communities, American education is “a community of educators, students and stakeholders” in only the loosest sense. It’s also an old community, set in its ways, particularly when it comes to public education (or “the educating” that our tax dollars pay for). Writes Hargadon:

Here’s an uncomfortable truth: traditional schooling, despite promises of liberating young minds, has always excelled more at training compliance than fostering independent thinking. While we often claim otherwise, it’s largely designed to create standardized workers, not creative thinkers.

Unless we acknowledge this reality, we’ll miss what’s really at stake with AI adoption. Unexamined AI use in an unexamined education system will amplify these existing flaws, producing students who are even less self-directed and capable. The temptation for quick AI-generated answers, rather than wrestling with complex problems, threatens the very traits we want in our future adults: curiosity, agency, and resilience. (emphasis added)

If we examine the American education system and consider it as a kind of community, it quickly becomes apparent that it’s much more diverse and divided in its priorities than the Amish.

Moreover, because non-Amish Americans often seem to love their individual freedoms (including choosing “what’s good” for their children) more than the commitments they share (or what’s best for all), the American educational community has often seemed reluctant, if not resistant, to accept the guidance or governance of a gatekeeper in their classrooms.

So while some of us prefer tech-tools that get students to the right answers (or at least the answers we’ll test for later), others prefer fostering a vigorous thinking process wherever it might lead. 

Hargadon, along with me and the AI-tutor developers I wrote about in July, clearly prefers what he calls “generative teaching,” or building the curious and resilient free agents that we want our future adults to be. So let’s assume that we can gather the necessary consensus around this approach—if not for the flourishing of our children generally, then because an increasingly automated job market demands curiosity, resilience and agency for the jobs that will remain. Then “the Amish test” can be put into practice when evaluating AI tools in the classroom.

Instead of asking: Will this make teaching easier [for the teachers]?
Ask: Will this help students become more creative, self-directed, and capable of independent thought?

Instead of asking: Does this improve test scores?
Ask: Does this foster the character traits and thinking skills our students will need as adults?

With their priorities clear, parents and students (along with American education’s many other stakeholders) would now have a “test” or “standard” with which to judge AI-driven technologies. Do they further what we value most, or divert us from a goal that we finally share?

To this dynamic, Hargadon adds a critical insight. While I mentioned the evolving role of today’s teachers in the use of these tools, he proposes “teaching the framework” to students as well. 

Help students apply their own Amish Test to AI tools. This metacognitive skill—thinking about how they think and learn—may be more valuable than any specific technology…

[By doing so,] students learn to direct technology rather than be directed by it. They develop the discernment to ask: ‘How can I use this tool to become a better thinker, not just get faster answers?’

When this aptitude finally becomes ingrained in our nation’s classrooms, it may (at last) enable Americans to decide what is most important to us as a country—the commitments that bind us to one another, and not just the freedoms that we share—so we can finally start testing & taming our next transformational technology based on how it might unify the American people instead of dividing us.

For the past four months, I’ve been reporting on the state of American democracy’s checks & balances because nothing should be more important to our work as citizens than the continuing resilience of our democratic institutions. And while I assumed there might be some “wind-down” in executive orders and other actions by the Trump White House in the last few weeks of August, the onslaught in areas big & small continued to challenge our ability to respond to each of them in any kind of thoughtful way.

Other than mentioning this week’s bombing of an unidentified vessel in the Gulf of Mexico; threat of troops to Chicago, Baltimore and New Orleans; turmoil at the Centers for Disease Control; immigration raid on a massive EV plant in Georgia; more urging that Gaza should be turned into the next Riviera; the president’s design of a new White House ballroom; and Vladimir Putin’s repudiation of America’s most recent deadline on Ukraine, Trump’s leadership today faces two crossroads that may be even more worthy of your consideration.

At the Department of Labor in Washington D.C. this week

1.    What we’re seeing & hearing is either a fantasy or a preview.

In a subscriber newsletter from the New York Times this week, columnist Jamelle Bouie writes:

The administration-produced imagery in Washington is… a projection of sorts — a representation of what the president wants reality to be, drawn from its idea of what authoritarianism looks like. The banners and the troops — not to mention the strangely sycophantic cabinet meetings and news conferences — are a secondhand reproduction of the strongman aesthetic of other strongman states. It is as if the administration is building a simulacrum of authoritarianism, albeit one meant to bring the real thing into being. No, the United States is not a totalitarian state led by a sovereign Donald Trump — a continental Trump Organization backed by the world’s largest nuclear arsenal — but his favored imagery reflects his desire to live in this fantasy.

‘The spectacle that falsifies reality is nevertheless a real product of that reality, while lived reality is materially invaded by the contemplation of the spectacle and ends up absorbing it and aligning itself with it,’ the French social theorist Guy Debord wrote in his 1967 treatise ‘The Society of the Spectacle,’ a work that feels especially relevant in an age in which mass politics is as much a contest to construct meaning as it is to decide the distribution of material goods.

2.    Trump seems to be dealing with everything but “pocketbook issues”—or (in James Carville’s famous words during the 1992 presidential election), “It’s the economy, stupid.”

This week, the Wall Street Journal reported that after trending up in June and July, consumer sentiment dropped nearly 6% in August according to the University of Michigan’s “closely watched” economic sentiment survey. “More U.S. consumers now say they’re dialing down spending than when inflation spiked in 2022,” the article says. “Over 70% of people surveyed from May to July plan to tighten their budgets for items with large price increases in the year ahead….”

In a rejoinder, columnist Karl Rove mentioned a new WSJ/National Opinion Research Center poll that shows “voters are sour about their circumstances and pessimistic about the future.” As we head into the fall and towards the mid-terms next year, Rove opines: “It’ll take a lot more than happy talk” to counter these impressions. “People must see positive results when they shop, fuel up their cars, deposit paychecks and glance at their retirement accounts.”

As of this week, there is no sign that any plan for economic stability or growth is on the horizon, forecasting even more contentious, unsettling & expensive times ahead.

This post was adapted from my September 7, 2025 newsletter. Newsletters are delivered to subscribers’ inboxes every Sunday morning, and sometimes I post the content from one of them here, in lightly edited form. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: AI, AI-tutor, Amish, Amish Test & Tame New Technologies, artificial intelligence, chatbots, ChatGPT, Kevin Kelly, Steve Hargadon

Will AI Make Us Think Less or Think Better?

July 26, 2025 By David Griesing 2 Comments

To hold two opposing thoughts in your mind at the same time is to experience “cognitive dissonance.”

Being of two minds about your beliefs, ideas or values can be stressful, and some find it difficult to live with the uncertainty. However, others have argued that remaining curious and wanting to learn more about what’s behind a dissonance of thoughts is a positive sign—if you’re to believe these much-quoted words from F. Scott Fitzgerald.

The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.

While I was stewing under the heat dome last week, I was struck by the extent of my cognitive dissonance about my soon-to-come, AI-driven world. I’m stressed about the imminence of devices that will put as many external brains as I can accommodate into the palm of my hand in about a year and a half. 

As you know, I’ve expressed my awe as well as trepidation about this development several times before. 

It’s cognitive dissonance for me (Luddite vs. Brave Pioneer) because of the consequences involved, and because I fully agree with the observer who noted this week: “we’re not just building new tech, we’re rethinking the role of humans in systems.” Let me repeat that.

We’re not just building new tech, we’re rethinking the role of humans in systems.

One of the uncertain frontiers for AI-driven tools like ChatGPT is in our schools, those learning environments where student brains are still developing. An essay this week and a recent study make a strong (early) case for the disaster we might expect. When students use a tool like ChatGPT to write their papers and respond to class assignments, their critical-thinking and argument-assembly skills either never develop at all or quickly begin to atrophy.

At some point in the arc of my education and yours, pocket calculators became ubiquitous. I already knew how to add, subtract, multiply and divide, but no longer needed to do so manually. By the time that happened, my basic calculation skills were so brain-embedded that I could still do all of those things without my short-cut device. But what was it like for those who never embedded those aptitudes in the first place?

Given the sudden availability of AI-driven personal assistants, are today’s students at risk of never embedding or retaining how to think through, express and defend their ideas? How to construct arguments and anticipate rebuttals? How to find their commitments and form an opinion? Such devices could change their (and our) experience of being human.

A Wall Street Journal op-ed on Tuesday by Allysia Finley brings a fine point to these questions. She argues that in the brave new world where “smart computers” demand even smarter humans, tools like ChatGPT are effectively “dumbing us down” by enabling “cognitive offloading”—or allowing a readily available device to do our thinking for us. The risk (of course) is that we’ll end up with too many humans who can’t keep up with—let alone control—the increasingly intelligent computers that are just over the horizon. 

The real danger is that excessive reliance on AI could spawn a generation of brainless young people unequipped for the jobs of the future because they have never learned to think creatively or critically…[However] workers will need to be able to use AI and, more important, they will need to come up with novel ideas about how to deploy it to solve problems. They will need to develop AI models, then probe and understand their limitations.

(I don’t know which dystopia fills Finley’s imagination, but in mine I’m seeing the helpless/mindless lounge-potato humans in the Pixar classic Wall-E instead of Arnold struggling to confront Skynet in The Terminator.)

A student’s brain continues to develop until his or her mid-20s, “but like a muscle it needs to be exercised, stimulated and challenged to grow stronger.” Chatbots “can stunt this development by doing the mental work that builds the brain’s version of a computer cloud….”

Why commit information to memory when ChatGPT can provide answers at your fingertips? For one thing, the brain can’t draw connections between ideas that aren’t there. Nothing comes from nothing. Creativity also doesn’t happen unless the brain is engaged. Scientists have found that ‘Aha!’ moments occur spontaneously with a sudden burst of high-frequency electrical activity when the brain connects seemingly unrelated concepts.

With AI-driven devices in the palms of our hands, Finley worries that humanity will have fewer of those experiences going forward.

This week, Time Magazine reported on a new study from MIT’s Media Lab whose results so alarmed its lead investigator that she published them despite the study’s relatively small sample size and lack of peer review.

The study divided 54 subjects—18 to 39 year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’ Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study. [emphasis mine]

The researchers also suggested that the use of AI-driven tools which rely on LLMs (or large language models) can harm learning, especially for users whose brains are still developing—because “your brain does need to develop in a more analog way.”

Both the Op-ed and MIT study examined the use of ChatGPT without either supervision or guidance from those who know how to use these tools to enhance instead of merely “off-load” the learning experience. Both assume that this AI assistant was merely asked to respond to a particular assignment without further exchange between the human and the device. So while their alarm deserves our attention, more interactive and better supervised teaching tools are attempting to harness AI’s awesome power to enhance (as opposed to degrade) cognitive abilities.  

For example, some other articles that I read this week describe how AI-driven tutors can not only increase highly valuable one-on-one learning experiences in the classroom but also enable students to learn far more than previously when interaction with such a “resource-full” device is tailored to their particular needs and learning styles.

The first encouraging story about AI tutors came from the World Economic Forum, writing about a Chinese program that aimed to find more qualified teachers, particularly in the countryside. As reported, some of the solution was provided by a company incongruously named Squirrel AI Learning.

This educational technology company tested students with a large adaptive model (LAM) learning system that “combines adaptive AI—which learns and adapts to new data—with education-specific multimodal models, which can process a wide range of inputs, including text, images and video.”  With new student profile information in hand, Squirrel created lesson plans that comprised “the most suitable learning materials for each student” with the aid of those external inputs, including: 

data from more than 24 million students and 10 billion learning behaviours, as well as ‘wisdom from the very best teachers from all over the world,’ according to founder Derek Haoyang Li….

With the enthusiasm of a pioneer, he told the Forum a year ago that he believes its AI tutor “could make humans 10 times smarter.”

Meanwhile a story in Forbes about a Harvard study was nearly as enthusiastic. 

The researchers concluded that new AI models “may usher in a wave of adaptive [tutor] bots catering to [a] student’s individualized pace and preferred style of learning.”  These tutoring models are engineered to include the best teaching practices and tactics, including: 

  • proactively engaging the student in the learning process;
  • managing information overload;
  • supporting and promoting a growth mindset;
  • moving from basic to complex concepts, while preparing for future units;
  • giving the student timely, specific and accurate feedback and information;
  • enabling the learner to set their own pace.

The study’s findings indicated that AI-tutored students “learned more than twice as much as when they engaged with the same content during [a] lecture…[with] “personalized pacing being a key driver of success.”

Moreover, giving students greater control over their learning experience through “personalized on-demand design”:

allowed them to ask as many questions as they wished and address their personal points of confusion in a short period of time. Self-pacing meant that students could spend more time on concepts they found challenging and move quickly through material they understood, leading to more efficient learning….
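The “personalized pacing” that both studies credit can be sketched in a few lines of code. This is only a toy illustration of mastery-based self-pacing—the function names, the five-answer window, and the 80% threshold are my own assumptions, not the actual design of Squirrel AI’s system, the Harvard study’s tutor bots, or any other product mentioned here:

```python
# Toy sketch of mastery-based self-pacing: the learner advances to the
# next concept only once a rolling accuracy threshold is met, so hard
# concepts get more turns and easy ones fewer. All names and numbers
# are illustrative assumptions, not any real tutoring product's design.

from collections import deque

MASTERY_THRESHOLD = 0.8   # fraction of recent answers that must be correct
WINDOW = 5                # how many recent answers to consider

def next_concept(history, concepts, current):
    """Return (concept, history): advance only when rolling accuracy
    over the last WINDOW answers reaches MASTERY_THRESHOLD."""
    recent = list(history)[-WINDOW:]
    if len(recent) == WINDOW and sum(recent) / WINDOW >= MASTERY_THRESHOLD:
        index = concepts.index(current)
        if index + 1 < len(concepts):
            # Mastered: move on and start a fresh answer history.
            return concepts[index + 1], deque(maxlen=WINDOW)
    return current, history

# A student struggling with fractions stays put (3 of 5 correct)...
concepts = ["fractions", "decimals", "percentages"]
history = deque([1, 0, 1, 0, 1], maxlen=WINDOW)
concept, history = next_concept(history, concepts, "fractions")
print(concept)  # fractions

# ...while five straight correct answers unlock the next unit.
history = deque([1, 1, 1, 1, 1], maxlen=WINDOW)
concept, history = next_concept(history, concepts, "fractions")
print(concept)  # decimals
```

The design choice worth noticing is that time-on-concept is an output of the student’s performance rather than an input fixed by a syllabus—which is what lets strong students move quickly while struggling students get more repetitions.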

As reported by Fox News in March, a Texas private school’s use of AI tutors has rocketed its students’ test scores into the top 2% in the country. With bots furthering academic learning, teachers can spend their hands-on time with students providing “motivational and emotional support.” The school’s co-founder said: “That is really the magic in our model.”

While reading these AI-tutor stories, I realized that the new role for teachers in decades to come is not merely to motivate students and be supportive; our educators will also need to supervise, tweak and even design new tutorials. Like the algorithms that adapt while absorbing new data, they will need to continuously modify their interventions to meet the needs of their students and maximize the educational benefits.

In other words, they will need to be even smarter than the machines. 

Whether American teachers can surmount that tech-intensive hurdle is a question that will only be answered over time, but advances like the coming ubiquity of AI-tutors and the student performance gains that are likely to follow might encourage us to pay for greater tech proficiency on the part of teachers, to enable them to actually be “mechanics” and “inventors” whenever adaptive learning models like these are deployed.

As for my dissonance between the risks of over-reliance on large language models like ChatGPT and the promise of integrating adaptive learning models like AI-tutors in our classrooms, I guess I ended the week with enough optimism to believe that while some of our brainpower will be dissipated as the lazy among us forget how to think, far more in the generations that follow will become smarter than we ever imagined we could be.

+  +  +

The photos in this week’s post were taken of Lomanstraat, a street in Amsterdam, during the spring, summer and fall. These trees weren’t pruned to grow at an angle; instead, they grew naturally toward the limited band of light.

Here are this week’s comment(s), link(s) and image(s) regarding the state of our governance in light of new developments over the past few days.

1.    With the bombing of Iran’s nuclear facilities, our president’s penchant for overstatement (“obliteration”) and vanity (the NATO chief’s feeling he needed to call him “Daddy”) makes our country vulnerable to being strung along (when our leader acts like a two-year-old with no patience) as well as manipulated (by whichever foreign leader is the best “Trump whisperer”). Do Russia or China (or Canada, for that matter) seem to you to be cowed into submission—or even cooperation—by these antics and proclivities? The risk is that little will be gained, and much will be lost, in this kindergarten of foreign policy when Trump’s dust finally settles.

2.    Besides his order to drop several bunker-busting bombs from American planes that had flown halfway around the world, another development of note this week came from the Supreme Court before it withdraws into its cone of silence for the next couple of months. It marked, of course, the high Court’s preventing any federal court in the future from entering an injunction (or stop order) regarding Trump’s executive actions that has nationwide effect.

Americans can still appeal to their local federal district court for (or against) an injunction in that jurisdiction, but another district court a few counties over can make its own (and sometimes different) ruling about the same executive action. Commentators are in a lather, mostly because Trump’s next harebrained executive order can’t be stopped nationwide by some plaintiff who finds a cooperative district court judge. For what it’s worth, I am less concerned than many of the bedwetters about this.

The SCOTUS ruling in CASA Inc. won’t materially advance Trump’s agenda so much as invite a chaos of conflicting lower court actions, which will make the fate of his various proclamations as unclear as most of them are already. Months or years from now, each instance of conflicting lower court rulings will make its way to the Supreme Court—along the same path that nationwide injunctions travel now—for a final ruling. In the meantime, CASA Inc. means more of the same uncertainty and confusion instead of giving a material boost to the Strongman’s power.

Here’s a link to CBS News coverage of the ruling for additional reactions.

3.     This from the NYT editorial board yesterday about Trump’s big beautiful tax reduction bill and the explosion in new interest payments it will add to the national debt. (For the first time in American history, interest payments on the debt will be greater than any other national expenditure, except for Medicare, if this bill becomes law):

The expected increase in the debt is particularly absurd because the government would borrow much of the money from the same people who got the biggest tax cuts from the bill. Roughly half of the government’s debt typically is sold to American investors, and those investors are disproportionately affluent. When the government borrows from them rather than raising taxes, it is getting the same money from the same people on less favorable terms. Instead of taxing the rich, the government pays them interest.

4.     A “Dictator Approved” statue appeared on the Capitol Mall this week without identifying its donor. It’s not a sign of full-blown resistance, but it’s another sign of life from Trump’s opponents.

This post was adapted from my June 29, 2025 newsletter. Newsletters are delivered to subscribers’ inboxes every Sunday morning, and sometimes I post the content from one of them here, in lightly edited form. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts Tagged With: AI tutor, Allysia Finley, chatbot, chatbots dumbing us down, ChatGPT, cognitive off-loading, Derek Haoyang Li, LAM, large adaptive model, lower brain engagement, MIT Media Lab, nationwide injunctions, personalized learning, Squirrel AI Learning, World Economic Forum

About David

David Griesing (@worklifereward) writes from Philadelphia.
