
The Amish Test & Tame New Technologies Before Adopting Them: We Can Learn How to Safeguard What’s Important to Us Too

October 13, 2020 By David Griesing

Given the speed of innovation and the loftiness of its promises to improve our comfort or convenience, we often embrace a new technology long before we experience its most worrisome consequences.  As consumers, we are pushed to adopt new tech (or tech-driven services) by advertising that “understands” our susceptibilities, by whatever the Joneses are doing next door, and by the speculation “that somehow it will make our lives better.” The sticker shock doesn’t come until we realize that our natural defenses have been overwhelmed or we’ve been herded by marketers like so many sheep.

By tech devices and services, I’m thinking about our personal embrace of everything from smartphones to camera-ready doorbells, from Google’s search engine to Amazon’s Prime memberships, from ride-hailing services like Uber to social networks like Facebook. Only after we’ve built our lives around these marvels do we start recognizing their downsides or struggling with the real costs that got buried in their promises and fine print.

As consumers, we feel entitled to make decisions about tech adoption on our own, not wishing to be told by anybody that “we can buy this but can’t buy that,” let alone by authorities in our communities who are supposedly keeping “what’s good for us” in mind. Not only do we reject a gatekeeper between us and our “Buy” buttons, there is also no Consumer Reports that assesses the potential harms of these technologies to our autonomy as decision-makers, our privacy as individuals, or our democratic way of life — no resource that warns us “to hold off” until we can weigh the long-term risks against the short-term rewards. As a result, we defend our unfettered freedom until we start discovering just how terrible our freedom can be.

If there were consumer gatekeepers or even reliable guidebooks, they could evaluate the suitability of new technologies not just for individuals but also for groups of consumers. Before community adoption, they’d consider whether a new innovation serves particular priorities in the community, asking questions like:

– Will smartphones make us more or less distracted?

– Will on-line video games like Fortnite strengthen or weaken our families?

– Does freedom from outside manipulation outweigh the value of, say, Facebook’s social network or Google’s search engine, given that both sell personal information about our use of their platforms to outsiders (from marketers to governments) who use what they learn about us to manipulate us further?

Gatekeepers that are worried about such things might even urge testing of new technologies before they’re marketed and sold so that: the initial hype doesn’t become the last word in buying decisions; the crowd-sourced wisdom of early users can be publicly gathered and assessed; and recommendations that consider the up- and down-sides become possible.
 
By welcoming testing data from across the community, this kind of gatekeeper authority would likely gain legitimacy from the strength of its feedback loop. Back-and-forth reactions would aim to discover “what is good (and not so good) for us” instead of merely relying upon tech company claims about convenience or cost-savings. Before endorsing a new device or tech-driven service, these testers would take the time to ensure that it serves the human purposes that are most important to the group while also recommending suitable safeguards (like age or use restrictions). Moderated time trials would be like previewing and rating new TV shows before their general release.
 
What I’m proposing is a community-driven, rigorously interactive and “take as much time as needed” approach to new tech adoption that — to our free-market ears — might sound impossibly utopian. But it’s already happening in places like Pennsylvania, Ohio and Indiana, and has been for generations. Amish gatekeepers and community members continuously test and tame new technologies, making them conform to their view of what is good for them, with startling and even inspiring results.

Startled, then inspired were certainly my reactions to a story about the Amish that Kevin Kelly told Tim Ferriss on his podcast a few years back. It led me to a Kelly essay about Amish Hackers, a post from a different storyteller about an Amish community’s “experimentation” with genetic technologies to fight inherited diseases, and other dispatches from this rarely consulted edge of American life. (Kevin Kelly is one of the founding editors of Wired magazine and a firm believer that wandering beyond the familiar is the most effective education you can get.) I’d argue there are broader lessons to be taken from Kelly’s and other sojourners’ perspectives on how Amish communities have been grappling with new technologies, particularly when you start (as they do) with a sense of awe that skews less towards “what’s in it for me right now” and more towards pursuit of the greater good over time.

As Kelly followed his curiosity, he noticed that the Amish seem to choose all of their gadgets or tech-driven services “collectively as a group.” Because it’s a collaborative endeavor throughout, they have to start with “the criteria” that they’ll use in their selection process.

When a new technology comes along they say, ‘Will this strengthen our local community or send us out [of it]?’ The second thing that they’re looking at is what’s good for their families. The goal of the typical Amish man or woman is to have every single meal with their children until they leave home.

So they also ask: will a tech-driven innovation increase the quality of our family time together, or somehow lessen it?

Since owning your own car will take you away from your community, they frown on automobiles, favoring more localized forms of transit like the horse and buggy. Similarly, because electricity ties you to a public energy grid and makes the community dependent on outsiders, they limit its use, preferring fuel, wind or sun-powered energy controlled from their homes and workshops. At the same time, while Amish beliefs are founded on the principle that their community should remain “in the world, but not of it,” their inward focus has never dampened their curiosity about new technologies or the practical advantages they might gain by utilizing them.

Strengthening family ties dictates the pace and manner of their tech adoption too. While the Amish engage in a broad spectrum of industries, their workplaces tend to be close to home so that workers can spend meal times with their families. And there are additional benefits to this proximity. Because the Amish are effectively living and working in the same place, the technology they rely upon to forge farm equipment, make furniture or process their produce tends to be friendly to the land and the people living there. In other words, instead of exporting the environmental and social costs of their economic activities, their means of production are also sustainable for the Amish families that live nearby.

While these criteria seem to imply a kind of primitive simplicity, the reality couldn’t be more different. One wrinkle is the way the Amish distinguish between owning technology and merely using it. For example, those who need the internet at work or school might share that access instead so it’s available for an intended purpose (like operating a business or learning) but not for getting lost in distraction whenever, say, a laptop owner feels like it.

Old iron adapted to run on propane

Their work-arounds for living and working off-the-grid are also ingenious. Sometimes instead of electricity, they’ll use gas- or propane-fueled appliances and equipment. The Amish also adapt a startling array of machines and other contraptions to use pneumatic or compressed-air power. Of the latter, Kelly writes:

At first pneumatics were devised for Amish workshops [where compressed air systems power nearly every machine], but it was seen as so useful that air-power migrated to Amish households. In fact there is an entire cottage industry in retrofitting tools and appliances to [so-called] Amish electricity. The retrofitters buy a heavy-duty blender, say, and yank out the electrical motor. They then substitute an air-powered motor of appropriate size, add pneumatic connectors, and bingo, your Amish mom now has a blender in her electrical-less kitchen. You can get a pneumatic sewing machine, and a pneumatic washer/dryer (with propane heat). In a display of pure steam-punk nerdiness, Amish hackers try to outdo each other in building pneumatic versions of electrified contraptions.

How some Amish communities began utilizing genetically modified seeds on their farms — after the customary period of trial and error — also illustrates how their priorities drive their decisions. Unlike the huge combines used in commercial agriculture, their old, but highly effective (and debt-free) farm equipment could not harvest the pest-weakened cornstalks that GMO seeds were designed to prevent. Amish farmers embraced this seed innovation because they could continue to use their harvesters in a cost-effective manner with little apparent downside. On the other hand, the Amish jury is still out on cellphones. But instead of banning them outright, they are still trying to figure out which uses are good for them and which are to be avoided. In his essay, Kelly celebrated their endless beta testing, both here and in many other areas:

This is how the Amish determine whether technology works for them. Rather than employ the precautionary principle, which says, unless you can prove there is no harm, don’t use new technology, the Amish rely on the enthusiasm of Amish early adopters to try stuff out until they prove harm.

When downsides become apparent, they find ways to minimize them (again, sharing phones instead of owning them) or to eliminate them altogether for community members (like young people) who are most prone to their harms. It’s a time-intensive process where an Amish bishop or gatekeeper can always step in to forbid a technology, but there is usually a dizzying array of experimentation before that happens.

These time trials may place the Amish as much as 50 years behind the rest of us in terms of tech adoption — “slow geeks” Kelly calls them — but he finds their manner of tech adoption “instructive” and so do I.

1) They are selective. They know how to say ‘no’ and are not afraid to refuse new things. They ban more than they adopt.

2) They evaluate new things by experience instead of by theory. They let the early adopters get their jollies by pioneering new stuff under watchful eyes.

3) They have criteria by which to select choices: technologies must enhance family and community and distance themselves from the outside world.

4) The choices are not individual, but communal. The community shapes and enforces technological direction.

As a result, the Amish are never going to wake up one day and discover that a generation of their teenagers has become addicted to video games; that smartphones have reduced everyone’s attention span to the next externally-generated prompt; or that surveillance capitalism has “suddenly” reduced their ability to make decisions for themselves as citizens, shoppers, parents or young people.

Given where most of us non-Amish find ourselves today, we’d likely be unwilling (at least at first) to step back from the edge of the technology curve for the sake of discovering what a new technology “is all about”—for worse as well as for better—before building our lives around it.

In Western cultures, individuals as consumers may have criteria for purchasing or adopting new technologies—like lower cost or greater convenience—but it seems almost impossible to believe that we’d ever be willing to bring others (beyond, say, a parent or life partner) into this highly personal decision-making process.

Indeed, our individualism as consumers seems so complete that it’s difficult to envision any community whose criteria we would willingly subject ourselves to for the common good. Or as Kelly puts it: we’d have to learn an entirely new skill, which is how “to relinquish” technologies and tech-driven services “as a group” until their efficacy, under the group’s standards, could be demonstrated.

So is it unlikely? “Yes.” But impossible? “No.” And what about desirable? I would argue that learning how to take-the-best-and-leave-the-rest when it comes to adopting new technologies is a consumer-wide competence that’s long overdue.

The Amish are clear that strengthening community and family are the primary goods for them. Like us, they’re drawn to “more convenient” and “less costly” too, but only if these lesser priorities can be made to serve their most important ones.  At the same time, they’ll work long and hard to find accommodations for the sake of convenience or low cost by crowd-sourcing their experiences and considering all of the necessary angles before deciding how to proceed. They’re also willing to be one step or even several behind the technology curve. And when they can’t get over the hurdle of likely or actual harms with a product or service, they’ll put it behind them and move on without it. 

At this point, it bears mentioning that Amish families and communities are not exemplary in terms of “goodness,” and they don’t claim to be. Indeed, their faith tends to make them more aware of their spiritual vulnerabilities than lesser believers are, so they’ll readily acknowledge their sinfulness and struggles with temptation. On the other hand, their awareness of sin also distinguishes them from most of the rest of us. Compared to the Amish, we are relatively thoughtless about what is more and less “good for us,” especially in the long run.

That means our next step would be a big one. The unfettered freedom that we “enjoy” around what we buy and end up adopting makes it difficult for us to band together with others and agree to be subject to any group’s veto power. Our ad-based, consumer-driven economies have hooked us on instant gratification to the point that most of us would be unwilling (at least initially) to wait until the other beta testers in our group have finished their work and a consensus for the greater good could be reached.  

On the other hand, given the deluge of new consumer technologies that keeps washing over us and the troubling consequences that come with many of them—like the community-weakening propensities of “smart” doorbells and the privacy-destroying nature of “smart” home assistants—we might be better off if we joined with others to learn more about what’s involved before embracing “the next shiny new thing” and discovering the downsides later.

We could learn the restraint of slowing down, the power of beta-testing new technologies, and the connectedness of considering what we discover with our fellow experimenters before jumping head-first into uncharted waters.
 
And perhaps most importantly, we could learn how to come to a collective agreement on the criteria for assessing whether a new technology is likely to be good for us, bad for us, or only acceptable with safeguards in place before adoption.  

– What priorities would we test against as we experiment with new products and services? 

– What assessment criteria would we apply in our consumer reporting about the next smart speakers, cell phone apps, facial recognition tools or geo-tracking devices? 

– How could an interactive gatekeeper group like this avoid becoming a 21st Century version of the Legion of Decency?

On this last point, any consumer protection group would certainly have to tone down the holier-than-thou attitude in its crowd-sourced application of first principles. As tech testers and reporters, the group would need to say: “we don’t know better than you, we’ve just thought about it from various, specific angles, and here’s how.”

Instead of authority residing in an Amish bishop, the wisdom of this group of early adopters and community members could be captured in an evolving body of experience that is informed both by the testers’ feedback (like Yelp’s) and by moderating influences on the direction of the debate (like the guidance of Wikipedia editors). Built this way, arguments about what is likely to be good or bad for everyone will always embrace a broader perspective than that of any single tech influencer or seller. In fact, the counter-weight that a consumer protection group provides to each of us being “on our own” with consequential technology choices would be one of this group’s two greatest strengths.

The other would be pushing a leading edge of tech consumers to decide what is important to them and worth protecting with the strength of their numbers in the free market.

A consumer protection group like this would begin by deciding on the zones it would be committed to safeguarding. They might be our zones of personal privacy (from those who wish to exploit our data for their gain as opposed to ours) and autonomous decision-making (from those who aim to use our behavioral information to manipulate our choices). Group criteria could also include protecting socially or economically vulnerable populations (like the susceptible young or old, or even the self-employed doing ride-hailing, delivery or other gig-economy work) from exploitation or harm by new tech products and services. The group’s overall aim would be to offer a persuasive new perspective to a critical mass of the tech consuming public before we decide to consume a new technology.

Their invitation might sound something like this:

Given our stated priorities, we urge you to slow down your purchases and hold off on your adoption of this new technology until — because it will always take time — its likely impacts can be assessed. We, in turn, will provide you with regular updates on our assessment of the risks and benefits as our experience with this new technology evolves.

Group creation of a public interface that provides criteria-driven, crowd-sourced information about new technology would almost certainly have an additional benefit in the marketplace. As the group’s standing and credibility are established, its assessments would likely influence tech companies to be more forthcoming about the potential downsides of their products and services before we’re introduced to them, and even to reconsider whether they keep fraught technologies on a path to market.

Instead of individual consumers (on the one hand) or government regulators (on the other) trying to figure out how to put the ketchup back in the bottle or toothpaste back in the tube once they’ve made a mess of things, the wisdom of a consumer protection group with “greater good” priorities could serve as a counterweight before a new technology’s stains become permanent.

The group could function like a crowd-sourced Consumer Reports, publishing its assessments on a quality-controlled Wikipedia-type page that every consumer can see, with the aim of laying out the risks (as well as rewards) of new technologies before they’re widely adopted.

The Amish have found a way to test and to tame new technologies so that their priorities of family and community are continuously served.

Aren’t there enough of the rest of us — united in our concern about privacy, surveillance and on-line manipulation — to test and then tame these same technologies?

This post was adapted from my October 11, 2020 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning. You can subscribe too by leaving your email address in the column to the right.


Technology is Changing Us

February 4, 2020 By David Griesing

When we change our routines in a fundamental way—either because we need that change or the interruption is foisted upon us—we sometimes experience our world differently when we return.
 
Glimpses of those differences are possible after vacations, but they usually need to be long enough and far enough away. These differences in perspective also need to become realizations: our conscious efforts “to capture” what our time away “was really about” and consider its impacts “on what we do next.” At this point, the contrast between before and after might be bold enough to change our outlook going forward—like eat more pasta or dance every day—but these realizations seldom change the basics about our living, our working or how we think about them. They’re more like souvenirs.
 
Clearer and longer breaks between departure and return generally have a greater impact because there’s more time to ponder the differences between this new place and the one we left behind. When we return to where we started, we are able to compare how it seemed before with how it seems to us now in light of the new perspectives that we’ve gained. As a result of these realizations, we sometimes do change our basic routines or broaden our rationales for doing them.
 
Insights about what-came-before, what-came-next and now-that-you’re-back can be even more profound if your physical or mental abilities changed during this interval. For example, you needed a new environment because you were injured in some way or found yourself facing an unfamiliar limitation. Only after time-away were you healed enough to return to the world you had left behind. Your judgments can be more nuanced when the changes to your body or spirit have also sharpened your awareness of where you’ve been and where you find yourself now.
 
Insights about before, next and now might be sharper still if changes in your perceptual abilities were behind your initial departure. If, say, you’d been partially blinded and had to rely on the heightened senses that remained to “map” the new environment where you retreated and the old one that you returned to “with new eyes.”
 
Finally, your insights might be at their sharpest and most valuable if the world you left had also changed in some fundamental way in the months or years before your return. The heightened awareness that you gained while away would be encountering this new topography for the first time. It is this final vantage point that Howard Axelrod brings to his new book, The Stars in Our Pockets: Getting Lost and Sometimes Found in the Digital Age.
 
The short version of Axelrod’s story is that he was accidentally blinded in one eye during a college basketball game, took the next five years to graduate and recover physically, and spent the two years that followed living off the grid, reorienting himself to his natural environment in the woods of northeastern Vermont. After his time away, he began a teaching career at two urban universities.

Between his partial loss of sight and his return to civilization, smartphones had not only become ubiquitous, but in startling contrast to his back-woods life, these “stars in our pockets” seemed to be changing “how we navigated the world” right in front of him. It was an insight that might not have been possible if the contrasts between the world he’d departed, the one he retreated to and the one he re-entered had been less stark, or the realizations that he took from his experiences had been less acute.
 
In the “cognitive environment” of northern Vermont, Axelrod deepened his sense perceptions, made lucky discoveries as he wandered in the outdoors, and cultivated a sense of curiosity and patience that had been commonplace for much of human history. He learned to pay attention to the weather, the seasonal changes, the time of day, the life of the forest around him, and realized that doing so reinforced a particular kind of “mental map” that enabled his understanding of the world and how he could find his way through it. When Axelrod returned to urban life, he realized that the smartphones people were now holding as they walked down the street or sat across from one another at lunch were changing how almost everyone—including him—understood and experienced the world. In other words, the mental map that a smartphone enables is fundamentally different from the mental map he’d been using to navigate during his time off the grid.
 
The message in Axelrod’s book is not that one map is better than the other. His writing is more “meditation” (as he calls it) than argument or indictment. Instead, he wants to highlight some of the complications that can arise when you alternate between how humans have always navigated their lives and work and the new ways of doing so that mediating devices like our smartphones have enabled. In a recent interview and in postings on his website, Axelrod describes what happens when adapting to a new environment means “losing traits that you valued” in your first one.

Just as we’re losing diversity of plant and animal species due to the environmental crisis, so too are we losing the diversity and range of our minds due to changes in our cognitive environment.

Several of these losses are worth noticing with him. For example:

–Tech tools may replace natural aptitudes and weaken the memories that they depend upon. Axelrod suggests that relying on GPS to navigate undermines not only the serendipity that often comes “when you’re finding your own way,” but also the innate navigational memory that keeps you from getting lost. He says:

Our memory is tied inextricably to place. In our brains, the memory center, the hippocampus, is the same center for cognitive mapping — figuring out the route you’re going to take. If we’re no longer using our brains to navigate [and] coming up with these cognitive maps, studies show that we start to have problems with other kinds of memory.

–External prompts change our attention spans.  As we grow more accustomed to on-line suggestions before taking the next step, autonomous actions—including immersing yourself in an activity and entering into what psychologists call productive “flow states”—become more difficult. 

What [American philosopher William] James [once] said is that an attention span is made of curiosity. It’s the ability to ask subtly different questions. Whether you’re talking about intellectual attention, or sensorial attention, if you’re looking at a tree or watching a bird. Are you asking subtly different questions? Can you ask a question about one facet and then another? It feels like you’re paying attention steadily, but you’re really paying attention to a lot of different things, driven by your curiosity.

Online, there’s always something prompting your attention. It’s like a pseudo-curiosity. It comes in and will give you the next thing to purchase, the next article to read, the next video clip to watch. You don’t have to ask the next question — it’s provided for you. Your attention span will shorten because you don’t need to ask those questions, you don’t need to drive your own attention.

–Rapid-fire “likes” on-line also require much less involvement from us than empathy requires off-line. Axelrod notes how disorienting it can be as we shuttle between our tech-enabled environments and the rest of our lives and work, where we often need to come to what he describes as “slower” understandings of one another:

[W]hen you’re on social media, part of what’s being called for is attention that can shift really rapidly from one post to another post. And also what’s being called for is a kind of judgment: Do you like this? Do you love this? Do you retweet this? Whereas in real life, what’s called for is a slower attention, where you’re able to listen, be patient while the person is pausing, thinking, not quite sure what they’re saying. And also what’s called for is to defer judgment, or not judge at all. To have empathy. Those are very different traits, depending on which environment you’re in.

–It is hard to reconcile or internalize the different, competing ways in which we navigate our on-line and off-line realities. Moreover, the world we experience behind the screen can become a substitute for (or even replace) the frameworks that come from navigating in the off-line world. What we risk losing, says Axelrod, are:

our connection to something larger than ourselves, our sense of perspective, our sense of what came before us or what will come after, our sense of being a part of the natural world — that doesn’t really show up anywhere on the maps on our phones.

As we adapt to a virtual world, we’re often disoriented because its cognitive maps are so different and  “we’re effectively living in two places at once.” But our adaptations change how our minds work too. In what Axelrod calls “neural Darwinism,” a kind of “natural selection” also happens “on both sides of your eyes” as we adapt to living and working through our screens. “[C]ertain populations of neurons get selected and their connections grow stronger, while others go the way of the dodo bird.” In other words, the faculties that we exercise on-line grow stronger, while those from the off-line world that we rely upon less frequently weaken from disuse.
 
These losses are tangible: Remembering how to navigate the world without on-line short-cuts. The longer attention spans that we need for concentration. The slower attention spans we need for empathy. Perspectives that extend from the past and into the future. Feelings that we are a part of the natural world.

Our smartphones and other virtual companions are changing our capabilities in each of these ways, but like that frog in water coming to a slow boil, too many of us may be lulled into complacency by the warmth of their star-power. 
 
Axelrod returned from the Vermont woods when the rest of us were already caught up in their magic. With the heightened sense of being human that came from his own particular odyssey, he could see more clearly not only what we’d been gaining while he was away but also what we might be losing as we gradually moved off our old navigational maps and started our pell-mell quest to adapt to very different ones.
 
The map at the top of this post illustrates how navigation, weather, visibility, air pollution—a dozen different variables—might change in light of the fires that have recently burned through much of the western US. A poor attempt at metaphor (perhaps), but many of the fires on this map also burned in northern California, home to many of the technologies behind our smartphones.
 
These “stars in our pockets” with their shortcuts, search engines and diversions are causing us to adapt to the navigational demands of an entirely new environment, where the potential costs include losing deep-seated memory, ceding the ability to make our own choices, and growing uncomfortable with the “slow art” of interacting with others. Because we don’t exercise these aptitudes when navigating our new mental maps, we risk losing them as we attempt to navigate the old maps of our parallel, off-line worlds.
 
In a December post, I shared Tristan Harris’s theory that our brains may simply not be able to handle the challenges posed by these tech-driven interfaces. Harris went on to argue that the overwhelming information they provide also produces a kind of learned helplessness in us that’s not so different from where the frog, coming to a slow boil, finds herself.
 
The trick, I think, is making a deliberate effort to exercise the human capabilities that enabled us to navigate the world before these awesome devices came along—not letting them atrophy—even if we have to spend some equivalent of Howard Axelrod’s time in the northern Vermont woods to come to that realization.
 
We may need the sharpness, the clarity, of something like his departure and return to notice that much seems to be going awry before we resolve to do something about it.

This post was adapted from my February 2, 2020 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning and the contents of some of them are later posted here. If you’d like to receive a weekly newsletter, you can subscribe by leaving your email address in the column to the right. 


Finding the Will to Protect Our Humanity

December 16, 2019 By David Griesing

I want to share with you a short, jarring essay I read in the New York Times this week, but first a little background. 
 
For some time now, I’ve been worrying about how tech companies (and the technologies they employ) harm us when they exploit the very qualities that make us human, like our curiosity and pleasure-seeking. Of course, one of the outrages here is that companies like Google and Facebook are also monetizing our data while they’re addicting us to their platforms. But it’s the addiction-end of this unfortunate deal (and not the property we’re giving away) that bothers me most, because it cuts so close to the bone. When they exploit us, these companies are reducing our autonomy–or the freedom to act that each of us embodies. 
 
Today, it’s advertising dollars from our clicking on their ads, but tomorrow, it’s mind-control or distraction addiction: the alternate (and equally terrible) futures that George Orwell and Aldous Huxley were worried about 80 years ago, as depicted in the cartoon essay I shared with you a couple of weeks ago.
 
In “These Tech Platforms Threaten Our Freedom,” a post from exactly a year ago, I tried to argue that the price for exchanging our personal data for “free” search engines, social networks and home deliveries is giving up more and more control over our thoughts and willpower. Instead of responding “mindlessly” to tech company come-ons, we could pause, close our eyes, and re-think our knee-jerk reactions before clicking, scrolling, buying and losing track of what we should really want. 
 
But is this mind-check even close to enough?
 
After considering the addictive properties of on-line games (particularly for adolescent boys) in a post last March, my reply was a pretty emphatic “No!” Games like Fortnite are using the behavioral information they siphon from young players to reduce their ability to exit the game and start eating, sleeping, doing homework, going outside or interacting (live and in person) with friends and family.
 
But until this week, I never thought that maybe our human brains aren’t wired to resist the distracting, addicting and autonomy-sapping power of these technologies. 
 
Maybe we’re at the tipping point where our “fight or flight” instincts are finally over-matched.
 
Maybe we are already inhabiting Orwell’s and Huxley’s science fiction. 
 
(Like with global warming, I guess I still believed that there was time for us to avoid technology’s harshest consequences.)
 
When I read Tristan Harris’s essay “Our Brains Are No Match for Our Technology” this week, I wanted to know the science, instead of the science fiction, behind its title. But Harris begins with more of a conclusion than a proof, quoting one of the late 20th Century’s most creative minds, Edward O. Wilson. When asked a decade ago whether the human race would be able to overcome the crises that will confront us over the next hundred years, Wilson said:

Yes, if we are honest and smart. [But] the real problem of humanity is [that] we have Paleolithic emotions, medieval institutions and godlike technology.

Somehow, we have to find a way to reduce this three-part dissonance, Harris argues. But in the meantime, we need to acknowledge that “the natural capacities of our brains are being overwhelmed” by technologies like smartphones and social networks.

Even if we could solve the data privacy problem, these technologies would still reduce us to distraction by encouraging our self-centered pleasures and stoking our fears. Echoing Huxley in Brave New World, Harris argues that “[o]ur addiction to social validation and bursts of ‘likes’ would continue to destroy our attention spans.” Echoing Orwell in 1984, Harris is equally convinced that “[c]ontent algorithms would continue to drive us down rabbit holes toward extremism and conspiracy theories.”

While technology’s distractions reduce our ability to act as autonomous beings, its impact on our primitive brains also “compromises our ability to take collective action” with others.

[O]ur Paleolithic brains aren’t built for omniscient awareness of the world’s suffering. Our online news feeds aggregate all the world’s pain and cruelty, dragging our brains into a kind of learned helplessness. Technology that provides us with near complete knowledge without a commensurate level of agency isn’t humane….Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges….The attention [or distraction] economy has turned us into a civilization maladapted for its own survival.

Harris argues that we’re overwhelmed by 24/7 genocide, oppression, environmental catastrophe and political chaos; we feel “helpless” in the face of the over-load; and our technology leaves us high-and-dry instead of providing us with the means (or the “agency”) to feel that we could ever make a difference. 
 
Harris’s essay describes technology’s assault on our autonomy—on our free will to act—but he never describes or provides scientific support for why our brain wiring is unable to resist that assault in the first place. It left me wondering: are all humans susceptible to distraction and manipulation from online technologies or just some of us, to some extent, some of the time? 
 
Harris heads an organization called the Center for Humane Technology, but its website (“Our mission is to reverse human downgrading by realigning technology with our humanity”) only scratches the surface of that question.
 
For example, it links to a University of Chicago study involving the distraction that’s caused by smartphones we carry with us, even when they’re turned off. These particular researchers theorized that having these devices nearby “can reduce cognitive capacity by taxing the attentional resources that reside at the core of both working memory and fluid intelligence.”  In other words, we’re so preoccupied when our smartphones are around that our brain’s ability to process information is reduced. 
 
I couldn’t find additional research on the site, but I’m certain there was a broad body of knowledge fueling Edward O. Wilson’s concern, ten years ago, about the misalignment of our emotions, institutions and technology. It’s the state of today’s knowledge that could justify Harris’s alarm about what is happening when “our Paleolithic brains” confront “our godlike technologies,” and I’m sure he’s familiar with these findings. But that research needs to be mustered and conclusions drawn from it so we can understand, as an impacted community, the risks that “our brains” actually face, and then determine together how to protect ourselves from them.

To enable us to reach this capable place, science needs to rally (as it did in an open letter about artificial intelligence and has been doing on a daily basis to confront global warming) and make its best case about technology’s assault on human autonomy. 
 
If our civilization is truly “maladapted to its own survival,” we need to find our “agency” now before any more of it is lost. But we can only move beyond resignation when our sense of urgency arises from a well-understood (and much chewed-upon) base of knowledge. 

This post was adapted from my December 15, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.


Democracy Collides With Technology in Smart Cities

July 1, 2019 By David Griesing

There is a difference between new technology we’ve already adopted without thinking it through and new technology that we still have the chance to tame before its harms start overwhelming its benefits.
 
Think about Google, Facebook, Apple and Amazon with their now essential products and services. We fell in love with their whiz-bang conveniences so quickly that their innovations became a part of our lives before we recognized their downsides. Unfortunately, now that they’ve gotten us hooked, it’s also become our problem (or our struggling regulators’ problem) to manage the harms caused by their products and services.
 
-For Facebook and Google, those disruptions include surveillance-dominated business models that compromise our privacy (and maybe our autonomy) when it comes to our consumer, political and social choices.
 
-For Apple, it’s the impact of constant smartphone distraction on young people whose brain power and ability to focus are still developing, and on the rest of us who look at our phones more than our partners, children or dogs.
 
-For these companies (along with Amazon), it’s also been the elimination of competitors, jobs and job-related community benefits without their upholding the other leg of the social contract, which is to give back to the economy they are profiting from by creating new jobs and benefits that can help us sustain flourishing communities.
 
Since we’ll never relinquish the conveniences these tech companies have brought, we’ll be struggling to limit their associated damages for a very long time. But a distinction is important here. 
 
The problem is not with these innovations but in how we adopted them. Their amazing advantages overwhelmed our ability as consumers to step back and see everything that we were getting into before we got hooked. Put another way, the capitalist imperative to profit quickly from transformative products and services overwhelmed the small number of visionaries who were trying to imagine for the rest of us where all of the alligators were lurking.
 
That is not the case with the new smart city initiatives that cities around the world have begun to explore. 
 
Burned and chastened, Toronto brought a critical mass of caution (as well as outrage) to the smart-city initiative that Google affiliate Sidewalk Labs proposed there. Active and informed guardians of the social contract are negotiating with a profit-driven company like Sidewalk Labs to ensure that its innovations will also serve their city’s long- and short-term needs while minimizing the foreseeable harms.
 
Technology is only as good as the people who are managing it.

For the smart cities of the future, that means engaging everybody who could be benefitted as well as everybody who could be harmed long before these innovations “go live.” A fundamentally different value proposition becomes possible when democracy has enough time to collide with the prospects of powerful, life-changing technologies.

Irene Williams used remnants from football jerseys and shoulder pads to portray her local environs in Strip Quilt, 1960-69

1. Smart Cities are Rational, Efficient and Human

I took a couple of hours off from work this week to visit a small exhibition of new arrivals at the Philadelphia Museum of Art. 
 
To the extent that I’ve collected anything over the years, it has been African art and textiles. Locals had been collecting these artifacts for years, interesting and affordable items would come up for sale from time to time, and as I learned about the traditions behind the wood carvings or bark cloth I was drawn to, I gradually got hooked on their radically different ways of seeing the world.
 
Some of those perspectives—particularly regarding reduction of familiar, natural forms to abstracted ones—extended into the homespun arts of the American South, particularly in the Mississippi Delta. 
 
A dozen or so years ago, quilts from rural Alabama communities like Gee’s Bend captured the art world’s attention, and my local museum just acquired some of these quilts along with other representational arts that came out of the former slave traditions in the American South. The picture at the top (of Loretta Pettway’s Roman Stripes Variation Quilt) and the other pictures here are from that new collection.
 
One echo in these quilts to smart cities is how they represent “maps” of their Delta communities, including rooflines, pathways and garden plots as a bird that was flying over, or even God, might see them. There is rationality—often a grid—but also local variation, points of human origination that are integral to their composition. As a uniquely American art form, these works can be read to combine the essential elements of a small community in boldly stylized ways. 
 
In their economy and in how they incorporate their creators’ lived experiences, I don’t think that it’s too much of a stretch to say that they capture the essence of community that’s also coming into focus in smart city planning.
 
Earlier this year, I wrote about Toronto’s smart city initiative in two posts. The first was Whose Values Will Drive Our Future?–the citizens who will be most affected by smart city technologies or the tech companies that provide them. The second was The Human Purpose Behind Smart Cities. Each applauded Toronto for using cutting edge approaches to reclaim its Quayside neighborhood while also identifying some of the concerns that city leaders and residents will have to bear in mind for a community supported roll-out. 
 
For example, Robert Kitchin flagged seven “dangers” that haunt smart city plans as they’re drawn up and implemented. They are the dangers of taking a one-size-fits-all-cities approach; assuming the initiative is objective and “scientific” instead of biased; believing that complex social problems can be reduced to technology hurdles; having smart city technologies replace key government functions as “cost savings” or otherwise; creating brittle and hackable tech systems that become impossible to maintain; being victimized as citizens by pervasive “dataveillance”; and reinforcing existing power structures and inequalities instead of improving social conditions.
 
Google’s Sidewalk Labs (“Sidewalk”) came out with its Master Innovation and Development Plan (“Plan”) for Toronto’s Quayside neighborhood this week. Unfortunately, against a rising crescendo of outrage over tech company surveillance and data privacy during the past 9 months, Sidewalk did a poor job of staying ahead of the public relations curve by failing to consult the community regularly about its intentions. The result has been rising skepticism among Toronto’s leaders and citizens about whether Sidewalk can be trusted to deliver what it promised.
 
Toronto’s smart cities initiative is managed by an umbrella entity called Waterfront Toronto that was created by the city’s municipal, provincial and national governments. Sidewalk also has a stake in that entity, which has a high-powered board and several advisory boards with community representatives.

Last October one of those board members, Ann Cavoukian, who had recently been Ontario’s information and privacy commissioner, resigned in protest because she came to believe that Sidewalk was reneging on its promise to render all personal data anonymous immediately after it was collected. She worried that Sidewalk’s data collection technologies might identify people’s faces or license plates and potentially be used for corporate profit, despite Sidewalk’s public assurance that it would never market citizen-specific data. Cavoukian felt that leaving anonymity enforcement to a new and vaguely described “data trust” that Sidewalk intended to propose was unacceptable, and that other “[c]itizens in the area don’t feel that they’ve been consulted appropriately” about how their privacy would be protected either.
 
This April, a civil liberties coalition sued the three Canadian governments that created Waterfront Toronto over privacy concerns, a suit that appeared premature because Sidewalk’s actual Plan had yet to be submitted. When Sidewalk finally submitted it this week, the governments’ senior representative at Waterfront Toronto publicly argued that the Plan goes “beyond the scope of the project initially proposed” by, among other things, including significantly more City property than was originally intended and “demanding” that the City’s existing transit network be extended to Quayside.
 
Data privacy and surveillance concerns also persisted. A story this week about the Plan announcement and government push-back also included criticism that Sidewalk “is coloring outside the lines” by proposing a governance structure like “the data trust” to moderate privacy issues instead of leaving that issue to Waterfront Toronto’s government stakeholders. While Sidewalk said it welcomed this kind of back and forth, there is no denying that Toronto’s smart city dreams have lost a great deal of luster since they were first floated.
 
How might things have been different?
 
While it’s a longer story for another day, some years ago I was project lead on importing liquefied natural gas into Philadelphia’s port, an initiative that promised to bring over $1 billion in new revenues to the city. Unfortunately, while we were finalizing our plans with builders and suppliers, concerns that the Liberty Bell would be taken out by gas explosions (and other community reactions) were inadequately “ventilated,” depriving the project of key political sponsorship and weakening its chances for success. Other factors ultimately doomed this LNG project, but the failure to consistently build support for a project that worried the community certainly contributed. Despite Sidewalk’s having a vaunted community consensus builder in Dan Doctoroff at its helm, Sidewalk (and Google) appear to be fumbling this same ball in Toronto today.
 
My experience, along with Doctoroff’s and others’, goes some distance towards proving why profit-oriented companies are singularly ill-suited to take the lead on transformative, community-impacting projects. Why? Because it’s so difficult to justify financially the years of discussions and consensus building that are necessary before an implementation plan can even be drafted. Capitalism is efficient and “economical” but democracy, well, it’s far less so.
 
Argued another way, if I’d had the time and funding to build a city-wide consensus around how significant new LNG revenues would benefit Philadelphia’s residents before the financial deals for supply, construction and distribution were being struck, there could have been powerful civic support built for the project and the problems that ultimately ended it might never have materialized. 
 
This anecdotal evidence from Toronto and Philadelphia raises some serious questions:
 
-Should any technology that promises to transform people’s lives in fundamental ways (like smart cities or smart phones) be “held in abeyance” from the marketplace until its impacts can be debated and necessary safeguards put in place?
 
-Might a mandated “quiet period” (like that imposed by regulators in the months before public stock offerings) be better than leaving tech companies to bombard us with seductive products that make them richer but many of us poorer because we never had a chance to consider the fall-out from these products beforehand?
 
-Should the economic model that brings technological innovations with these kinds of impacts to market be fundamentally changed to accommodate advance opportunities for the rest of us to learn what the necessary questions are, ask them and consider the answers we receive?

Mama’s Song, Mary Lee Bendolph

3. An Unintended but Better Way With Self-Driving Cars

I can’t answer these questions today, but surely they’re worth asking and returning to.
 
Instead, I’m recalling some of the data that is being accumulated today about self-driving/autonomous car technology, so that impacted communities can make at least some of their moral and other preferences clear long before this transformative technology is brought to market and seduces us into dependency upon it. As noted in a post from last November:

One way to help determine what the future should look like and how it should operate is to ask people—lots of them—what they’d like to see and what they’re concerned about…In the so-called Moral Machine Experiment, these researchers asked people around the world for their preferences regarding the moral choices that autonomous cars will be called upon to make so that this new technology can match human values as well as its developer’s profit motives.

For example, if a self-driving car has to choose between hitting one person in its way or another, should it be the 6-year old or the 60-year old? People in different parts of the world would make different choices and it takes sustained investments of time and effort to gather those viewpoints.

If people’s moral preferences can be taken into account beforehand, the public might be able to recognize “the human face” in a new technology from the beginning instead of having to attempt damage control once that technology is in use.

Public advocates, like those in Toronto who filed suit in April, and the other Cassandras identifying potential problems also deserve a hearing. Every transformative project’s (or product’s or service’s) dissenters, as well as its proponents, need opportunities to persuade those who have yet to make up their minds about whether the project is good for them before it’s on the runway or has already taken off.

Following their commentary and grappling with their concerns removes some of the dazzle in our [initial] hopes and grounds them more firmly in reality early on.

Unlike the smart city technology that Sidewalk Labs already has for Toronto, it’s only recently become clear that the artificial intelligence systems behind autonomous vehicles are unable to make the kinds of decisions that “take into mind” a community’s moral preferences. In effect, the rush towards implementation of this disruptive technology was stalled by problems with the technology itself. But this kind of pause is the exception not the rule. The rush to market and its associated profits are powerful, making “breathers to become smarter” before product launches like this uncommon.
 
Once again, we need to consider whether such public ventilation periods should be imposed. 
 
Is there any better way to aim for the community balance between rationality and efficiency on the one hand, and human variation and need on the other, that was captured by some visionary artists from the Mississippi Delta?
 

+ + + 


Next week, I’m thinking about a follow-up post on smart cities that uses the “seven dangers” discussed above as a springboard for the necessary follow-up questions that Torontonians (along with the rest of us) should be asking and debating now as the tech companies aim to bring us smarter and better cities. In that regard, I’d be grateful for your thoughts on how innovation can advance when democracy gets involved.

This post was adapted from my June 30, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.


The Human Purpose Behind Smart Cities

March 24, 2019 By David Griesing

It is human priorities that should be driving Smart City initiatives, like the ones in Toronto profiled here last week. 

Last week’s post also focused on a pioneering spirit in Toronto that many American cities and towns seem to have lost. While we entrench in the moral righteousness of our sides in the debate—including, for many, a distrust of collective governance, regulation and taxation—we drift towards an uncertain future instead of claiming one that can be built on values we actually share. 

In its King Street and Quayside initiatives, Toronto is actively experimenting with the future it wants based on its residents’ commitment to sustaining their natural environment in the face of urban life’s often toxic impacts.  They’re conducting these experiments in a relatively civil, collaborative and productive way—an urban role model for places that seem to have forgotten how to work together. Toronto’s bold experiments are also utilizing “smart” technologies in their on-going attempts to “optimize” living and working in new, experimental communities.

During a short trip this week, I got to see the leading edges of New York City’s new Hudson Yards community (spread over 28 acres with an estimated $25 billion price tag) and couldn’t help being struck by how much it catered to those seeking more luxury living, shopping and workspaces than Manhattan already affords. In other words, how much it could have been a bold experiment about new ways that all of its citizens might live and work in America’s first city for the next half-century, but how little it actually was. A hundred years ago, one of the largest immigrant migrations in history made New York City the envy of the world. With half of Toronto’s current citizens being foreign-born, perhaps the next century, unfurling today, belongs to newer cities like it.

Still, even with its laudable ambition, it will not be easy for Toronto and other future-facing communities to get their Smart City initiatives right, as several of you were also quick to remind me last week. Here is a complaint from a King Street merchant that one of you (thanks Josh!) found and forwarded, which seems to cast what is happening in Toronto in a less favorable light than I did:

What a wonderful story. But as with [all of] these wonderful plans some seem to be forgotten. As it appears are the actual merchants. Google certainly a big winner here. Below an excerpt written by one of the merchants:
   
‘The City of Toronto has chosen the worst time, in the worst way, in the worst season to implement the pilot project. Their goal is clearly to move people through King St., not to King St. For years King St. was a destination, now it is a thoroughfare.
 
‘The goal of the King St. Pilot project was said to be to balance three important principles: to move people more effectively on transit, to support business and economic prosperity and to improve public space. In its current form, the competing principles seem to be decidedly tilted away from the economic well-being of merchants and biases efficiency over convenience. The casual stickiness of pedestrians walking and stopping at stores, restaurants and other merchants is lost.
 
‘Additionally, the [transit authority] TTC has eliminated a number of stops along King St., forcing passengers to walk further to enter and disembark streetcars, further reducing pedestrian traffic and affecting areas businesses. The TTC appears to believe that if they didn’t have to pick up and drop off people, they could run their system more effectively.
 
‘The dubious benefits of faster street car traffic on King St. notwithstanding, the collateral damage of the increased traffic of the more than 20,000 cars the TTC alleges are displaced from King St to adjoining streets has turned Adelaide, Queen, Wellington and Front Sts. into a gridlock standstill. Anyone who has tried to navigate the area can attest that much of the time, no matter how close you are you can’t get there from here.
 
‘Along with the other merchants of King St. and the Toronto Entertainment District we ask that Mayor Tory and Toronto council to consider a simple, reasonable and cost-effective alternative. Put lights on King St. that restrict vehicle traffic during rush hours, but return King St. to its former vibrant self after 7 p.m., on weekends and statutory holidays. It’s smart, fair, reasonable and helps meet the goals of the King St. pilot project. 

Two things about this complaint seemed noteworthy. The first is how civil and constructive this criticism is in a process that hopes to “iterate” as real-time impacts are assessed. It’s a tribute to Toronto’s experiments that they not only invite but also receive feedback like this. Alas, the second take-away from Josh’s comment is far more nettlesome. “[However many losers there may be along the way:] Google certainly a big winner here.”

The tech giant’s partnership with Canada’s governments in Toronto raises a constellation of challenging issues, but it’s useful to recall that pioneers who dare to claim new frontiers always do so with the best technology that’s available. While the settling of the American West involved significant collateral damage (to Native Americans and Chinese migrants, to the buffalo and the land itself), it would not have been possible without existing innovations and new ones that these pioneers fashioned along the way. Think of the railroads, the telegraph poles, even something as low-tech as the barbed wire that was used to contain livestock. 

The problem isn’t human and corporate greed or heartless technology—we know about them already—but failing to recognize and reduce their harmful impacts before it is too late. The objective for pioneers on new frontiers should always be maximizing the benefits while minimizing the harms that can be foreseen from the very beginning instead of looking back with anger after the damage is done.

We have that opportunity with Smart City initiatives today.

Because they concentrate many of the choices that will have to be made when we boldly dare to claim the future of America again, I’ve been looking for a roadmap through the moral thicket in the books and articles that are being written about these initiatives today. Here are some of the markers that I’ve discovered.

Human priorities, realized with the help of technology

1.         Markers on the Road to Smarter and More Vibrant Communities

The following insights come almost entirely from a short article by Robert Kitchin, a professor at Maynooth University in Ireland. In my review of the on-going conversation about Smart Cities, I found him to be one of its most helpful observers.  

In his article, Kitchin discusses the three principal ways that smart cities are understood, the key promises smart initiatives make to stakeholders, and the perils to be avoided around these promises.

Perhaps not surprisingly, people envision cities and other communities “getting smarter” in different ways. One constituency sees an opportunity to improve both “urban regulation and governance through instrumentation and data-driven systems”–essentially, a management tool. A bolder and more transformative vision sees information and communication technology “re-configur[ing] human capital, creativity, innovation, education, sustainability, and management,” thereby “produc[ing] smarter citizens, workers and public servants” who “can enact polic[ies], produce better products… foster indigenous entrepreneurship and attract inward investment.” The first makes the frontier operate more efficiently while the second improves nearly every corner of it.

The third Smart City vision is “a counter-weight or alternative” to each of them. It wants these technologies “to promote a citizen-centric model of development that fosters social innovation and social justice, civic engagement and hactivism, and transparent and accountable governance.” In this model, technology serves social objectives like greater equality and fairness. Kitchin reminds us that these three visions are not mutually exclusive. It seems to me that the priorities embedded in a community’s vision of a “smarter” future could include elements of each of them, functioning like checks and balances, in tension with one another. 

Smart City initiatives promise to solve pressing urban problems, including poor economic performance; government dysfunction; constrained mobility; environmental degradation; a declining quality of life, including risks to safety and security; and a disengaged, unproductive citizen base. Writes Kitchin:

the smart city promises to solve a fundamental conundrum of cities – how to reduce costs and create economic growth and resilience at the same time as producing sustainability and improving services, participation and quality of life – and to do so in commonsensical, pragmatic, neutral and apolitical ways.

Once again, it’s a delicate balancing act with a range of countervailing interests and constituencies, as you can see in the chart from a related discussion above.
 
The perils of Smart Cities should never overwhelm their promise in my view, but urban pioneers should always have them in mind (from planning through implementation) because some perils only manifest themselves over time. According to Kitchin, the seven dangers in pursuing these initiatives include:
 
–taking “a ‘one size fits all’ approach, treating cities as generic markets and solutions [that are] straightforwardly scalable and movable”;

–assuming that initiatives are “objective and non-ideological, grounded in either science or commonsense.” You can aim for these ideals, but human and organizational preferences and biases will always be embedded within them;

–believing that the complex social problems in communities can be reduced to “neatly defined technical problems” that smart technology can also solve. The ways that citizens have always framed and resolved their community problems cannot be automated so easily. (This is also the thrust of Ben Green’s Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, which will be published by MIT Press in April. In it he argues for “smart enough alternatives” that are attainable with the help of technology but never reducible to technology solutions alone.);

–engaging with corporations that are using smart city technologies “to capture government functions as new market opportunities.” One risk of a company like Google to communities like Toronto’s is that Google might lock Toronto into its proprietary technologies and vendors over a long period of time, or use Toronto’s citizen data to gain business opportunities in other cities;

–becoming saddled with “buggy, brittle and hackable” systems that are ever more “complicated, interconnected and dependent on software” while becoming more resistant to manual fixes;

–becoming victimized by “pervasive dataveillance that erodes privacy” through practices like “algorithmic social sorting (whether people get a loan, a tenancy, a job, etc), dynamic pricing (whereby different people pay varying prices depending on their perceived customer value) and anticipatory governance using predictive profiling (wherein data precedes how a person is policed and governed).” (The dynamic-pricing mechanic is sketched in code just after this list.) Earlier this month, my post on popular on-line games like Fortnite highlighted the additional risk that invasive technologies can use the data they are gathering to change people’s behavior;

–and lastly, reinforcing existing power structures and inequalities instead of eroding or reconfiguring them.
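To make the dynamic-pricing peril concrete, here is a bare-bones sketch of the mechanic Kitchin describes: the same service quoted at different prices depending on a profile’s perceived customer value. Every signal, weight and price below is invented for illustration; no real platform’s algorithm is implied.

```python
BASE_FARE = 10.00  # the "list price" everyone would pay absent profiling

def perceived_value_score(profile: dict) -> float:
    """Score a customer from surveilled signals (all hypothetical)."""
    score = 0.0
    if profile.get("premium_phone"):
        score += 0.15   # device model as a proxy for willingness to pay
    if profile.get("battery_low"):
        score += 0.10   # urgency inferred from device state
    if profile.get("frequent_rider"):
        score -= 0.05   # a retention discount for regulars
    return score

def quoted_price(profile: dict) -> float:
    """Quote the identical ride at a price scaled by the customer's score."""
    return round(BASE_FARE * (1 + perceived_value_score(profile)), 2)

print(quoted_price({"premium_phone": True, "battery_low": True}))  # 12.5
print(quoted_price({"frequent_rider": True}))                      # 9.5
```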
 
While acknowledging the promise of Smart Cities at their best, Kitchin closes his article with this cautionary note:

the realities of implementation are messier and more complex than the marketing hype of corporations or city managers portray and there are a number of social, political, ethical and legal concerns with respect to the kind of society smart city initiatives seek to create.  As such, whilst networked urbanism has benefits, it also poses challenges and risks that are often little explored or legislated for ahead of implementation. Indeed, the pace of development and rollout of smart city technologies is proceeding well ahead of wider reflection, critique and regulation.

Putting the cart before a suitably-designed horse is a problem with all new and seductive technologies that get embraced before their harms are identified or can be addressed—a quandary that was also considered here in a post called “Looking Out for the Human Side of Technology.”

2.         The Value of Our Data

A few additional considerations about the Smart City are also worth bearing in mind as debate about these initiatives intensifies.

In a March 8, 2019 post, Kurtis McBride wrote about two different ways “to value” the data that these initiatives will produce, and his distinction is an important one. It’s a discussion that citizens, government officials and tech companies should be having, but unfortunately are not having as much as they need to.

When Smart City data is free to everyone, there is the risk that the multinationals generating it will merely use it to increase their power and profits in the growing market for Smart City technologies and services. From the residents’ perspective, McBride argues that it’s “reasonable for citizens to expect to see benefit” from their data, while noting that these same citizens will also be paying dearly for smart upgrades to their communities. His proposal on valuing citizen data depends on how it will be used by tech companies like Google or local service providers. For example, if citizen data is used:

to map the safest and fastest routes for cyclists across the city and offers that information free to all citizens, [the tech company] is providing citizen benefit and should be able to access the needed smart city data free of charge. 
 
But, if a courier company uses real-time traffic data to optimize their routes, improving their productivity and profit margins – there is no broad citizen benefit. In those cases, I think it’s fair to ask those organizations to pay to access the needed city data, providing a revenue stream cities can then use to improve city services for all. 

Applying McBride’s reasoning, an impartial body in a city like Toronto would need to decide whether Google has to pay for data generated in its Quayside community by consulting a benefit-to-citizens standard. Clearly, if Google wanted to use Quayside data in a Smart City initiative in, say, Colorado or California, it would need to pay Toronto for the use of its citizens’ information.
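As a thought experiment, that benefit-to-citizens standard can be reduced to a simple decision rule. The sketch below is mine rather than McBride’s; the fee amount, field names and example requests are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DataRequest:
    requester: str
    dataset: str
    broad_citizen_benefit: bool  # e.g., results offered free to all residents
    commercial_use: bool         # e.g., private route optimization for profit

LICENSE_FEE = 10_000  # a flat, purely illustrative charge per dataset

def access_fee(request: DataRequest) -> int:
    """Benefit-to-citizens standard: free access when the use returns broad
    public benefit; a paid license, funding city services, when it does not."""
    if request.broad_citizen_benefit and not request.commercial_use:
        return 0
    return LICENSE_FEE

# A free cycling-safety map for all residents accesses the data at no cost,
# while a courier optimizing its own routes pays into the city's coffers.
print(access_fee(DataRequest("MapCo", "traffic", True, False)))      # 0
print(access_fee(DataRequest("CourierCo", "traffic", False, True)))  # 10000
```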
 
Of course, addressing the imbalance between those (like us) who provide the data and the tech companies that use it to increase their profits and influence is not just a problem for Smart City initiatives, and changing the “value proposition” around our data is surely part of the solution. In her new book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Harvard Business School’s Shoshana Zuboff says that the line “you’re the product if these companies aren’t paying you for your data” does not state the case powerfully enough. She argues that the big tech platforms are like elephant poachers and our personal data like those elephants’ ivory tusks. “You are not the product,” she writes. “You are the abandoned carcass.”
 
Smart City initiatives also provide a way to think about “the value of our data” in the context of our living and working, and not merely as the gateway to more convenient shopping, more addictive gaming experiences or “free” search engines like Google’s.

This post is adapted from my March 24, 2019 newsletter. Subscribe today and receive an email copy of future posts in your inbox each week.


