
To hold two opposing thoughts in your mind at the same time is to experience “cognitive dissonance.”
Being of two minds about your beliefs, ideas or values can be stressful, and some find it difficult to live with the uncertainty. Others, however, have argued that remaining curious and wanting to learn more about what lies behind a dissonance of thoughts is a positive sign—if you believe these much-quoted words from F. Scott Fitzgerald:
The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.
While I was stewing under the heat dome last week, I was struck by the extent of my cognitive dissonance about my soon-to-come, AI-driven world. I’m stressed about the imminence of devices that will put as many external brains as I can accommodate into the palm of my hand in about a year and a half.
As you know, I’ve expressed my awe as well as trepidation about this development several times before.
It’s cognitive dissonance for me (Luddite vs. Brave Pioneer) because of the consequences involved, because I fully agree with the observer who noted this week: “we’re not just building new tech, we’re rethinking the role of humans in systems.” Let me repeat that.
We’re not just building new tech, we’re rethinking the role of humans in systems.
One of the uncertain frontiers for AI-driven tools like ChatGPT is in our schools, those learning environments where student brains are still developing. An essay this week and a recent study make a strong (early) case for the disaster we might expect. When students use a tool like ChatGPT to write their papers and respond to class assignments, their critical-thinking and argument-assembly skills either never develop at all or quickly begin to atrophy.
At some point in the arc of my education and yours, pocket calculators became ubiquitous. I already knew how to add, subtract, multiply and divide, but no longer needed to do so manually. By the time that happened, my basic calculation skills were so brain-embedded that I could still do all of those things without my short-cut device. But what was it like for those who never embedded those aptitudes in the first place?
Given the sudden availability of AI-driven personal assistants, are today’s students at risk of never embedding or retaining how to think through, express and defend their ideas? How to construct arguments and anticipate rebuttals? How to find their commitments and form an opinion? Such devices could change their (and our) experience of being human.
A Wall Street Journal op-ed on Tuesday by Allysia Finley brings a fine point to these questions. She argues that in the brave new world where “smart computers” demand even smarter humans, tools like ChatGPT are effectively “dumbing us down” by enabling “cognitive offloading”—or allowing a readily available device to do our thinking for us. The risk (of course) is that we’ll end up with too many humans who can’t keep up with—let alone control—the increasingly intelligent computers that are just over the horizon.
The real danger is that excessive reliance on AI could spawn a generation of brainless young people unequipped for the jobs of the future because they have never learned to think creatively or critically…[However] workers will need to be able to use AI and, more important, they will need to come up with novel ideas about how to deploy it to solve problems. They will need to develop AI models, then probe and understand their limitations.
(I don’t know which dystopia fills Finley’s imagination, but in mine I’m seeing the helpless/mindless lounge-potato humans in the Pixar classic Wall-E instead of Arnold struggling to confront Skynet in The Terminator.)
A student’s brain continues to develop until his or her mid-20s, “but like a muscle it needs to be exercised, stimulated and challenged to grow stronger.” Chatbots “can stunt this development by doing the mental work that builds the brain’s version of a computer cloud….”
Why commit information to memory when ChatGPT can provide answers at your fingertips? For one thing, the brain can’t draw connections between ideas that aren’t there. Nothing comes from nothing. Creativity also doesn’t happen unless the brain is engaged. Scientists have found that ‘Aha!’ moments occur spontaneously with a sudden burst of high-frequency electrical activity when the brain connects seemingly unrelated concepts.
With AI-driven devices in the palms of our hands, Finley worries that humanity will have fewer of those experiences going forward.
This week, Time Magazine reported on a new study from MIT’s Media Lab whose findings so alarmed its lead investigator that she published them despite her study’s relatively small sample size and lack of peer review.
The study divided 54 subjects—18-to-39-year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study. [emphasis mine]
The researchers also suggested that the use of AI-driven tools which rely on LLMs (or large language models) can harm learning, especially for users whose brains are still developing—because ”your brain does need to develop in a more analog way.”
Both the op-ed and the MIT study examined the use of ChatGPT without either supervision or guidance from those who know how to use these tools to enhance rather than merely “off-load” the learning experience. Both assume that this AI assistant was simply asked to respond to a particular assignment without further exchange between the human and the device. So while their alarm deserves our attention, more interactive and better supervised teaching tools are attempting to harness AI’s awesome power to enhance (as opposed to degrade) cognitive abilities.
For example, some other articles that I read this week describe how AI-driven tutors can not only expand highly valuable one-on-one learning experiences in the classroom but also enable students to learn far more than before when interaction with such a “resource-full” device is tailored to their particular needs and learning styles.

The first encouraging story about AI tutors came from the World Economic Forum, writing about a Chinese program that aimed to find more qualified teachers, particularly in the countryside. As reported, part of the solution came from a company incongruously named Squirrel AI Learning.
This educational technology company tested students with a large adaptive model (LAM) learning system that “combines adaptive AI—which learns and adapts to new data—with education-specific multimodal models, which can process a wide range of inputs, including text, images and video.” With new student profile information in hand, Squirrel created lesson plans that comprised “the most suitable learning materials for each student” with the aid of those external inputs, including:
data from more than 24 million students and 10 billion learning behaviours, as well as ‘wisdom from the very best teachers from all over the world,’ according to founder Derek Haoyang Li….
With the enthusiasm of a pioneer, he told the Forum a year ago that he believes its AI tutor “could make humans 10 times smarter.”
Meanwhile, a story in Forbes about a Harvard study was nearly as enthusiastic.
The researchers concluded that new AI models “may usher in a wave of adaptive [tutor] bots catering to [a] student’s individualized pace and preferred style of learning.” These tutoring models are engineered to include the best teaching practices and tactics, including:
- proactively engaging the student in the learning process;
- managing information overload;
- supporting and promoting a growth mindset;
- moving from basic to complex concepts, while preparing for future units;
- giving the student timely, specific and accurate feedback and information;
- enabling the learner to set their own pace.
The study’s findings indicated that AI-tutored students “learned more than twice as much as when they engaged with the same content during [a] lecture…[with] personalized pacing being a key driver of success.”
Moreover, giving students greater control over their learning experience through “personalized on-demand design”:
allowed them to ask as many questions as they wished and address their personal points of confusion in a short period of time. Self-pacing meant that students could spend more time on concepts they found challenging and move quickly through material they understood, leading to more efficient learning….
As reported by Fox News in March, a Texas private school’s use of AI tutors has rocketed its students’ test scores to the top 2% in the country. With bots furthering academic learning, teachers can spend their hands-on time with students providing “motivational and emotional support.” The school’s co-founder said: “That is really the magic in our model.”
While reading these AI-tutor stories, I realized that the new role for teachers in decades to come is not merely to motivate students and be supportive; our educators will also need to supervise, tweak and even design new tutorials. Like the algorithms that adapt while absorbing new data, they will need to continuously modify their interventions to meet the needs of their students and maximize the educational benefits.
In other words, they will need to be even smarter than the machines.
Whether American teachers can surmount that tech-intensive hurdle is a question only time will answer. But advances like the coming ubiquity of AI tutors, and the student performance gains that are likely to follow, might encourage us to pay for greater tech proficiency on the part of teachers, enabling them to actually be “mechanics” and “inventors” whenever adaptive learning models like these are deployed.
As for my dissonance between the risks of over-reliance on large language models like ChatGPT and the promise of integrating adaptive learning models like AI-tutors in our classrooms, I guess I ended the week with enough optimism to believe that while some of our brainpower will be dissipated as the lazy among us forget how to think, far more in the generations that follow will become smarter than we ever imagined we could be.
+ + +
The photos in this week’s post were taken of Lomanstraat, a street in Amsterdam, during the spring, summer and fall. These trees weren’t pruned to grow at an angle; instead, they grew naturally toward the limited band of light.

Here are this week’s comment(s), link(s) and image(s) regarding the state of our governance in light of new developments over the past few days.
1. With the bombing of Iran’s nuclear facilities, our president’s penchant for overstatement (“obliteration”) and vanity (the NATO chief’s feeling he needed to call him “Daddy”) makes our country vulnerable to being strung along (when our leader acts like a 2-year-old with no patience) as well as manipulated (by whichever foreign leader is the best “Trump whisperer”). Do Russia or China (or Canada, for that matter) seem to you to be cowed into submission—or even cooperation—by these antics and proclivities? The risk is that little will be gained, and much will be lost, in this kindergarten of foreign policy when Trump’s dust finally settles.
2. Besides his order to drop several bunker-busting bombs from American planes that had flown halfway around the world, another development of note this week came from the Supreme Court before it withdraws into its cone of silence for the next couple of months. The high Court, of course, barred any federal court in the future from entering an injunction (or stop order) with nationwide effect regarding Trump’s executive actions.
Americans can still appeal to their local federal district court for (or against) an injunction in that jurisdiction, but another district court a few counties over can make its own (and sometimes different) ruling about the same executive action. Commentators are in a lather, mostly because Trump’s next harebrained executive order can’t be stopped nationwide by some plaintiff who finds a cooperative district court judge. For what it’s worth, I am less concerned than many of the bedwetters about this.
The SCOTUS ruling in CASA Inc. won’t materially advance Trump’s agenda so much as invite a chaos of conflicting lower court actions, which will make the fate of his various proclamations as unclear as most of them are already. Months or years from now, each instance of conflicting lower court rulings will make its way to the Supreme Court—along the same path that nationwide injunctions take now—for a final ruling. In the meantime, CASA Inc. means more of the same uncertainty and confusion instead of a material boost to the Strongman’s power.
Here’s a link to CBS News coverage of the ruling for additional reactions.
3. This from the NYT editorial board yesterday about Trump’s big beautiful tax reduction bill and the explosion in new interest payments it will add to the national debt. (For the first time in American history, interest payments on the debt will be greater than any other national expenditure, except for Medicare, if this bill becomes law):
The expected increase in the debt is particularly absurd because the government would borrow much of the money from the same people who got the biggest tax cuts from the bill. Roughly half of the government’s debt typically is sold to American investors, and those investors are disproportionately affluent. When the government borrows from them rather than raising taxes, it is getting the same money from the same people on less favorable terms. Instead of taxing the rich, the government pays them interest.
4. The Dictator Approved statue appeared, without any identification of its donor, on the Capitol Mall this week. It’s not a sign of full-blown resistance, but it is another sign of life from Trump’s opponents.

This post was adapted from my June 29, 2025 newsletter. Newsletters are delivered to subscribers’ inboxes every Sunday morning, and sometimes I post the content from one of them here, in lightly edited form. You can subscribe by leaving your email address in the column to the right.