David Griesing | Work Life Reward Author | Philadelphia


Making Technology Serve Democracy

October 2, 2024 By David Griesing 1 Comment

I got my mail-in ballot for November’s U.S. election yesterday, and plan to vote tomorrow. 

For the first time in my voting life, I’ve been following little of the ongoing campaign–beyond reviewing the Harris economic plan (details that have probably come too late for most voters in a truncated election cycle) and wondering about her objectives for the war in Ukraine (is she for setting Russia back or accommodating it?). Stifling my interest further has been the ominous sense that whoever actually wins in a few weeks, the result will be so close that we’ll still be fighting about it in the courts and on our streets come January.

So instead of wallowing in here-we-go-again or what these divisions might mean for America’s commitments to the rest of the world, I’ve been diving into the work of two visionaries and some of their proposed solutions to the current gridlocks besetting democracy—E. Glen Weyl, an economist at Microsoft Research, and Audrey Tang, Taiwan’s Digital Minister. For some years now, Weyl and Tang have been evangelists in the quest to use our digital technologies to bolster the ways that we sort through our differences and improve our governance in democratic countries. 

I start by agreeing with Weyl, Tang and many, many others that over the past 25 years, innovations like social networks and AI (along with blockchains and digital currencies) have largely been deployed to maximize private profits instead of to benefit the wider public. The conclusion seems inescapable that these skewed priorities have contributed to our feelings of helplessness about what-comes-next and the shape of our futures more generally.

But with Weyl and Tang, I also believe that we can use these same digital innovations in ways that promote the kinds of conversations and consensus-building that are necessary for functioning democracies. Indeed, doing so has already enabled a few fortunate governments (like Taiwan’s) to manage crises like the coronavirus pandemic with greater unity and far, far fewer “casualties” than almost anywhere else on earth.

Tang was instrumental in Taiwan’s effort, and in light of it she joined with Weyl and more than 100 other on-line collaborators to co-author a primer on how our digital technologies can be deployed to support democratic processes and reduce our political divides. It’s called “Plurality: The Future of Collaborative Technology and Democracy.”

My aim today is to describe some of Plurality’s proposals and (via several links) point you in the direction of the wider discussion that these visionaries are hosting.

Weighing possible solutions seems a healthier way to spend one’s time these days than dreading the slow-motion trainwreck that seems likely to recur in America over the next few months.

Before turning to the preview of coming attractions that Audrey Tang contributed to in Taiwan, a few words about the Taiwanese may be necessary. 

Westerners sometimes harbor the view that the Taiwanese people are more prone to harmony than divisiveness—or what Tang laughingly characterizes as acting like “Confucius robots”—but in reality they govern themselves very differently. The primary political and social divides in Taiwan are over whether to accommodate China’s various threats to its sovereignty or to resist them. But there are myriad lesser divides that beset this restlessly modern nation, and one or more of them could easily have produced a horrible result when its population was challenged by the coronavirus a few years back.

Instead, Taiwan already had some meaningful experience using digital access to provide greater citizen engagement in how the nation solved problems and responded to threats. According to an article in Time called “Taiwan’s Digital Minister Has an Ambitious Plan to Align Tech With Democracy,” after the country’s martial law era ended in 1987, its citizens embraced computers and internet access enthusiastically because they enabled them to publish books without state sponsorship and communicate without state surveillance. According to Time, it was feelings of liberation assisted by technology that also fueled:

the rise of the g0v (gov zero) movement in 2012, led by civic hackers who wanted to increase transparency and participation in public affairs. The movement started by creating superior versions of government websites, which they hosted on .g0v.tw domains instead of the official .gov.tw, often attracting more traffic than their governmental counterparts. The g0v movement has since launched more initiatives that seek to use technology to empower Taiwanese citizens, such as vTaiwan, a platform that facilitates public discussion and collaborative policymaking between citizens, experts, and government officials.

For example, these gov-zero improvements proved instrumental when Uber launched its car service in Taiwan, sparking a powerful backlash. Tang and Weyl recalled what transpired next in a post that announced their Plurality concept: 

When Uber arrived in Taiwan, its presence was divisive, just as it has been in much of the world. But rather than social media pouring fuel on this flame, the vTaiwan platform that one of us developed as a minister there empowered citizens opining on the issue to have a thoughtful, deliberative conversation with thousands of participants on how ride hailing should be regulated. This technology harnessed statistical tools often associated with AI to cluster opinion, allowing every participant to quickly digest the clearest articulation of the viewpoints of their fellow citizens and contribute back their own thoughts. The views that drew support from across the initial lines of division rose to the top, forming a rough consensus that ensured the benefits of the new ride hailing tools while also protecting workers’ rights and was implemented by the government.
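
vTaiwan’s deliberation layer is built on the open-source Pol.is platform, which represents each participant as a vector of agree/disagree votes on short statements, clusters those vectors into opinion groups, and then surfaces the statements that draw agreement across the groups. The sketch below is a deliberately simplified, hypothetical illustration of that idea—a toy k-means over vote vectors, not Pol.is’s actual algorithm—with all names and data invented for the example:

```python
import numpy as np

def cluster_opinions(votes, k=2, iters=50, seed=0):
    """Toy k-means over participant vote vectors (+1 agree, -1 disagree, 0 pass).
    Returns an opinion-group label for each participant."""
    rng = np.random.default_rng(seed)
    centers = votes[rng.choice(len(votes), size=k, replace=False)]
    for _ in range(iters):
        # assign each participant to the nearest opinion-group center
        dists = ((votes[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = votes[labels == j].mean(axis=0)
    return labels

def consensus_statements(votes, labels):
    """Score each statement by its *minimum* agreement rate across groups,
    so only statements that bridge the divide rank highly."""
    scores = []
    for s in range(votes.shape[1]):
        rates = [(votes[labels == g, s] == 1).mean() for g in np.unique(labels)]
        scores.append(min(rates))
    return np.argsort(scores)[::-1], scores

# Six invented participants, three statements: two camps split on
# statements 0 and 1, but everyone agrees with statement 2.
votes = np.array([[1, -1, 1], [1, -1, 1], [1, -1, 1],
                  [-1, 1, 1], [-1, 1, 1], [-1, 1, 1]], dtype=float)
labels = cluster_opinions(votes, k=2)
order, scores = consensus_statements(votes, labels)
# statement 2 rises to the top because both groups endorse it
```

The design point the quote describes is in `consensus_statements`: ranking by the minimum agreement across groups (rather than overall popularity) is what pushes bridge-building views to the top instead of each camp’s rallying cries.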

In 2014, when Taiwan faced mass protests over an impending trade deal with China, Tang again played an instrumental role during protestors’ 24-day occupation of the country’s legislative chamber by enabling the protestors to peacefully broadcast their views on digital platforms and avoid a longer crisis. Shortly thereafter, Tang was appointed Taiwan’s digital minister without portfolio; in 2022 she became her country’s first Minister for Digital Affairs, and last year she was appointed board chair of Taiwan’s Institute of Cyber Security.

The formal appointments in 2022 and 2023 followed Tang’s work throughout the pandemic using “pro-social” instead of “anti-social” digital media, approaches she described in an interview on the TED Talks platform as “fast, fair and fun” responses to what could easily have become a country-wide calamity.

When word first came from China about a “SARS-like” viral outbreak in Wuhan, Taiwan quickly implemented quarantine protocols at all points of entry, while simultaneously ensuring that there were enough “quarantine hotels” to stop the spread before it could start.

Fairness via digital access and rapid dissemination of information, about, say, medical mask availability, was also critical to maintaining calm during those early pandemic months. As Tang recounted:

[N]ot only do we publish the stock level of masks of all pharmacies, 6,000 of them, we publish it every 30 seconds. That’s why our civic hackers, our civil engineers in the digital space, built more than 100 tools that enable[d] people to view a map, or people with blindness who talk to chat bots, voice assistants, all of them can get the same inclusive access to information about which pharmacies near them still have masks.
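
The civic tools Tang describes were possible because the government exposed pharmacy mask inventories as a frequently refreshed open-data feed that anyone could build on. The snippet below is a hypothetical sketch of the consumer side, assuming feed rows carrying a pharmacy name, coordinates, and an adult-mask count; the field names are illustrative, not Taiwan’s actual schema:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pharmacies_with_masks(records, lat, lon, limit=5):
    """Filter a mask-stock feed to pharmacies that still have adult masks,
    nearest first. A real client would re-fetch the feed on a short cycle
    (the quote above says the stock levels were republished every 30 seconds)."""
    in_stock = [r for r in records if r["masks_adult"] > 0]
    in_stock.sort(key=lambda r: haversine_km(lat, lon, r["lat"], r["lon"]))
    return in_stock[:limit]

# Invented sample rows standing in for the live feed
records = [
    {"name": "A", "lat": 25.04, "lon": 121.51, "masks_adult": 0},
    {"name": "B", "lat": 25.05, "lon": 121.52, "masks_adult": 120},
    {"name": "C", "lat": 25.10, "lon": 121.60, "masks_adult": 30},
]
nearby = pharmacies_with_masks(records, 25.05, 121.52)
```

Because the feed was just structured data, the same records could drive a map, a chatbot, or a voice assistant, which is how more than 100 independently built tools could offer “the same inclusive access to information.”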

Taiwan’s rapid challenges to unfounded rumors before they had the chance to spread included another key element: the effectiveness of viral humor as an antidote to panic buying and similar anxiety-driven behaviors. Here’s Tang again:

[I]n Taiwan, our counter-disinformation strategy is very simple. It’s called ‘humor over rumor.’ So when there was a panic buying of [toilet] tissue paper, for example, there was a rumor [circulating] that says, ‘Oh, we’re ramping up mass production, masks use the same material as [toilet] tissue papers, and so we’ll run out of [toilet] tissue soon.’ [So to counter the rumor] our premier digitally shared a very memetic picture that I simply have to share with you. He shows his bottom, wiggling it a little bit, and then the large print says ‘Each of us only have one pair of buttocks.’ And of course, the serious table [that he also shared] shows that tissue paper came from South American materials, and medical masks come from domestic materials, and there’s no way that ramping up production of one will hurt the production of the other. And so that went absolutely viral. And because of that, the panic buying died down in a day or two. And finally, we found out the person who spread the rumor in the first place was the tissue paper reseller.

Through the use of digital tactics and strategies like these, Taiwan got fairly deep into the pandemic before it reported a single case of the coronavirus among the locals. In many ways that was because, as Time reported, “Taiwan leads the world in digital democracy.”  It not only shares vital information with its citizens in a timely and engaging format, it consistently provides them with digital access to their government so that issues of public interest can be debated and often resolved.

Notwithstanding this momentum, in Plurality Tang and Weyl foresee even greater public benefit when democratic processes are more closely aligned with technology.

Some of these pro-social benefits involve counteracting the most anti-social effects of artificial intelligence (AI), blockchains and crypto-currencies when they introduce disruptions into the democratic conversation. As reported in the Time article: 

Plurality argues that each of these [technological innovations] are undermining democracy in different, but equally pernicious ways. AI systems facilitate top-down control, empowering authoritarian regimes and unresponsive technocratic governments in ostensibly democratic countries. Meanwhile, blockchain-based technologies [like crypto-currencies] atomize societies and accelerate financial capitalism, eroding democracy from below. As Peter Thiel, billionaire entrepreneur and investor, put it in 2018: ‘crypto is libertarian and AI is communist.’

To elaborate on the substance of these threats a bit, it’s clear that AI’s ability to muster and re-direct vast amounts of information gives governments with anti-democratic tendencies the ability to manage (if not control) their citizens. Moreover, it is blockchains’ and crypto-currencies’ ability to shelter transactions (if not entire markets) from regulatory control that can undermine a country’s ability to “conduct business” in ways that serve the interests of its citizens. Tang and Weyl argue that more robust digital democracies can help to resist these “pernicious” effects in myriad ways.

But these are just the defensive advantages; there is also a better world that they’d like to build with digital building blocks. As Tang and Weyl described it while announcing the Plurality concept and book, what has already been accomplished in Taiwan’s digital democracy: 

just scratches the surface of how technology can be designed to perceive, honor and bridge social differences for collaboration. New voting and financing rules emerging from the Ethereum ecosystem [which also relies on blockchain technology] can reshape how we govern the public and private sectors; immersive virtual worlds are empowering empathetic connections that cross lines of social exclusion; social networks and newsfeeds can be engineered to build social cohesion and shared sensemaking, rather than driving us apart.

From where I sit this morning, I can stew in the bile and trepidation of America’s current election cycle or try to conjure a better future beyond the digital mosh pit of Twitter/X and much that appears on our news screens every day. 

Tang, Weyl and Plurality are providing a platform for reinvigorating a democracy like ours by aligning it with digital technologies that can be put to much better uses than we’ve managed until now.

Tomorrow, I’d rather be voting for a robust future that our tech could enable if only we wanted it to.

+ + +

In line with two recommendations in Plurality, previous postings here have considered how community theater and a virtual reality headset can foster both engagement and empathy around issues like policing and homelessness (“We Find Where We Stand in the Space Between Differing Perspectives”) and how to guide the future of AI with a public-spirited “moon-shot mentality” instead of leaving its roll-out (as we seem to be doing today) to “free market forces” (“Will We Domesticate AI in Time?”).

This post was adapted from my September 29, 2024 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning, and sometimes I post the content from one of them here in lightly edited form. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work Tagged With: Audrey Tang, collaborative technology, democracy, E Glen Weyl, gov zero, plurality, plurality book, technology, technology aligned with democracy, technology supporting democracy

Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing Leave a Comment

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled, personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the right of privacy that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on to these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything that they need to know about us?

Overwhelmed by the draw of  “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.”  Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to relinquish the advantages of these platforms when they are unavailable elsewhere. 
 
So is there really “no way out”?  
 
A rising crescendo of voices is gradually finding a way, and they are coming at it from several different directions.
 
In places like Toronto (as well as London, Helsinki, Chicago and Barcelona), policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1.         Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past six months, I’ve talked repeatedly about smart-city technologies, and one of you reached out to me this week wondering: “What (exactly) are these new ‘technologies’?” (Thanks for your question, George!)
 
As a general matter, smart-city technologies gather and analyze information about how a city functions, while improving urban decision-making around that new information. Throughout, these data-gathering,  analyzing, and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
 
Of course, the potential benefits of greater or more equitable access to city services as well as their optimized delivery are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that’s currently being wasted, permitting their reinvestment into areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve been extolling the optimism that drove Toronto to launch its smart-cities initiative called Quayside and how its debate has entered a stormy patch more recently. Amidst the finger pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-undisclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a City-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart cities technology vendors to announce its intentions up front about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective – why is the AI needed and what outcomes is it intended to enable?
  • Use – in what processes and circumstances is the AI appropriate to be used?
  • Impacts – what impacts, good and bad, could the use of AI have on people?
  • Assumptions – what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data – what data is/was the AI trained on, and what are its limitations and potential biases?
  • Inputs – what new data does the AI use when making decisions?
  • Mitigation – what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics – what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight – what human judgment is needed before acting on the AI’s output, and who is responsible for ensuring its proper use?
  • Evaluation – how and by what criteria will the effectiveness of the AI in this smart-city system be assessed, and by whom?

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, the initial ventilation process takes a long time and hard work. Moreover, it is difficult (and maybe impossible) to conduct if negotiations with the technology vendor are ongoing or that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational tech-driven opportunities are presented to the public.

(AP Photo/David Goldman)

2.         A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process where people who never formed a full-blown preference on an issue like personal data privacy (or were simply reluctant to express it because the trade-offs for “free” platforms seemed acceptable to everybody else) now become more comfortable giving voice to their original qualms. In support, he cites a remarkable study about how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work-lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about the issues and to champion what is important to us in dialogue with the people who live and work alongside us. More grounds for protecting our personal information are coming out of the smart-cities debate, and we are already deciding where new privacy lines should be drawn around us. 

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: Ai, artificial intelligence, Cass Sunstein, dataveillance, democracy, how change happens, norms, personal data brokers, personal privacy, privacy, Quayside, Sidewalk Labs, smart cities, Smart City, surveillance capitalism, Toronto, values

Democracy Collides With Technology in Smart Cities

July 1, 2019 By David Griesing Leave a Comment

There is a difference between new technology we’ve already adopted without thinking it through and new technology that we still have the chance to tame before its harms start overwhelming its benefits.
 
Think about Google, Facebook, Apple and Amazon with their now essential products and services. We fell in love with their whiz-bang conveniences so quickly that their innovations became a part of our lives before we recognized their downsides. Unfortunately, now that they’ve gotten us hooked, it’s also become our problem (or our struggling regulators’ problem) to manage the harms caused by their products and services. 
 
-For Facebook and Google, those disruptions include surveillance dominated business models that compromise our privacy (and maybe our autonomy) when it comes to our consumer, political and social choices.
 
-For Apple, it’s the impact of constant smart phone distraction on young people whose brain power and ability to focus are still developing, and on the rest of us who look at our phones more than our partners, children or dogs.
 
-For these companies (along with Amazon), it’s also been the elimination of competitors, jobs and job-related community benefits without their upholding the other leg of the social contract, which is to give back to the economy they are profiting from by creating new jobs and benefits that can help us sustain flourishing communities.
 
Since we’ll never relinquish the conveniences these tech companies have brought, we’ll be struggling to limit their associated damages for a very long time. But a distinction is important here. 
 
The problem is not with these innovations but in how we adopted them. Their amazing advantages overwhelmed our ability as consumers to step back and see everything that we were getting into before we got hooked. Put another way, the capitalist imperative to profit quickly from transformative products and services overwhelmed the small number of visionaries who were trying to imagine for the rest of us where all of the alligators were lurking.
 
That is not the case with the new smart city initiatives that cities around the world have begun to explore. 
 
Burned and chastened, Torontonians brought a critical mass of caution (as well as outrage) to the smart-city initiative that Google affiliate Sidewalk Labs proposed for their city. Active and informed guardians of the social contract are now negotiating with a profit-driven company like Sidewalk Labs to ensure that its innovations will also serve their city's long- and short-term needs while minimizing the foreseeable harms.
 
Technology is only as good as the people who are managing it.

For the smart cities of the future, that means engaging everybody who could be benefitted as well as everybody who could be harmed long before these innovations “go live.” A fundamentally different value proposition becomes possible when democracy has enough time to collide with the prospects of powerful, life-changing technologies.

Irene Williams used remnants from football jerseys and shoulder pads to portray her local environs in Strip Quilt, 1960-69

1.         Smart Cities are Rational, Efficient and Human

I took a couple of hours off from work this week to visit a small exhibition of new arrivals at the Philadelphia Museum of Art. 
 
To the extent that I've collected anything over the years, it has been African art and textiles. Locals had been collecting these artifacts for years, interesting and affordable items would come up for sale from time to time, and as I learned about the traditions behind the wood carvings or bark cloth I was drawn to, I gradually got hooked on their radically different ways of seeing the world.
 
Some of those perspectives—particularly regarding reduction of familiar, natural forms to abstracted ones—extended into the homespun arts of the American South, particularly in the Mississippi Delta. 
 
A dozen or so years ago, quilts from rural Alabama communities like Gee's Bend captured the art world's attention, and my local museum just acquired some of these quilts along with other representational arts that came out of the former slave traditions in the American South. The picture at the top (of Loretta Pettway's Roman Stripes Variation Quilt) and the other pictures here are from that new collection.
 
One echo of smart cities in these quilts is how they represent "maps" of their Delta communities, including rooflines, pathways and garden plots as a bird flying overhead, or even God, might see them. There is rationality—often a grid—but also local variation, points of human origination that are integral to their composition. As a uniquely American art form, these works can be read to combine the essential elements of a small community in boldly stylized ways.
 
In their economy and how they incorporate their creator’s lived experiences, I don’t think that it’s too much of a stretch to say that they capture the essence of community that’s also coming into focus in smart city planning.
 
Earlier this year, I wrote about Toronto’s smart city initiative in two posts. The first was Whose Values Will Drive Our Future?–the citizens who will be most affected by smart city technologies or the tech companies that provide them. The second was The Human Purpose Behind Smart Cities. Each applauded Toronto for using cutting edge approaches to reclaim its Quayside neighborhood while also identifying some of the concerns that city leaders and residents will have to bear in mind for a community supported roll-out. 
 
For example, Robert Kitchin flagged seven “dangers” that haunt smart city plans as they’re drawn up and implemented. They are the dangers of taking a one-size-fits-all-cities approach; assuming the initiative is objective and “scientific” instead of biased; believing that complex social problems can be reduced to technology hurdles; having smart city technologies replacing key government functions as “cost savings” or otherwise; creating brittle and hackable tech systems that become impossible to maintain; being victimized as citizens by pervasive “dataveillance”; and reinforcing existing power structures and inequalities instead of improving social conditions.
 
Google's Sidewalk Labs ("Sidewalk") came out with its Master Innovation and Development Plan ("Plan") for Toronto's Quayside neighborhood this week. Unfortunately, against a crescendo of outrage over tech-company surveillance and data privacy during the past 9 months, Sidewalk did a poor job of staying ahead of the public-relations curve by failing to consult the community regularly about its intentions. The result has been rising skepticism among Toronto's leaders and citizens about whether Sidewalk can be trusted to deliver what it promised.
 
Toronto’s smart cities initiative is managed by an umbrella entity called Waterfront Toronto that was created by the city’s municipal, provincial and national governments. Sidewalk also has a stake in that entity, which has a high-powered board and several advisory boards with community representatives.

Last October one of those board members, Ann Cavoukian, who had recently been Ontario's information and privacy commissioner, resigned in protest because she came to believe that Sidewalk was reneging on its promise to render all personal data anonymous immediately after it was collected. She worried that Sidewalk's data collection technologies might identify people's faces or license plates and potentially be used for corporate profit, despite Sidewalk's public assurance that it would never market citizen-specific data. Cavoukian felt that leaving anonymity enforcement to a new and vaguely described "data trust" that Sidewalk intended to propose was unacceptable, and that "[c]itizens in the area don't feel that they've been consulted appropriately" about how their privacy would be protected either.
 
This April, a civil liberties coalition sued the three Canadian governments that created Waterfront Toronto over privacy concerns, a suit which appeared premature because Sidewalk's actual Plan had yet to be submitted. When Sidewalk finally did so this week, the governments' senior representative at Waterfront Toronto publicly argued that the Plan goes "beyond the scope of the project initially proposed" by, among other things, including significantly more City property than was originally intended and "demanding" that the City's existing transit network be extended to Quayside.
 
Data privacy and surveillance concerns also persisted. A story this week about the Plan announcement and government push-back also included criticism that Sidewalk “is coloring outside the lines” by proposing a governance structure like “the data trust” to moderate privacy issues instead of leaving that issue to Waterfront Toronto’s government stakeholders. While Sidewalk said it welcomed this kind of back and forth, there is no denying that Toronto’s smart city dreams have lost a great deal of luster since they were first floated.
 
How might things have been different?
 
While it's a longer story for another day, some years ago I was project lead on importing liquefied natural gas into Philadelphia's port, an initiative that promised to bring over $1 billion in new revenues to the city. Unfortunately, while we were finalizing our plans with builders and suppliers, concerns that the Liberty Bell would be taken out by gas explosions (and other community reactions) were inadequately "ventilated," depriving the project of key political sponsorship and weakening its chances for success. Other factors ultimately doomed this LNG project, but our failure to consistently build support for a project that concerned the community certainly contributed. Despite Sidewalk's having a vaunted community consensus builder in Dan Doctoroff at its helm, Sidewalk (and Google) appear to be fumbling this same ball in Toronto today.
 
My experience, along with Doctoroff's and others', goes some distance towards proving why profit-oriented companies are singularly ill-suited to take the lead on transformative, community-impacting projects. Why? Because it's so difficult to justify financially the years of discussions and consensus building that are necessary before an implementation plan can even be drafted. Capitalism is efficient and "economical," but democracy, well, it's far less so.
 
Put another way, if I'd had the time and funding to build a city-wide consensus around how significant new LNG revenues would benefit Philadelphia's residents before the financial deals for supply, construction and distribution were struck, powerful civic support could have been built for the project, and the problems that ultimately ended it might never have materialized.
 
This anecdotal evidence from Toronto and Philadelphia raises some serious questions:
 
-Should any technology that promises to transform people’s lives in fundamental ways (like smart cities or smart phones) be “held in abeyance” from the marketplace until its impacts can be debated and necessary safeguards put in place?
 
-Might a mandated "quiet period" (like that imposed by regulators in the months before public stock offerings) be better than leaving tech companies to bombard us with seductive products that make them richer but many of us poorer because we never had a chance to consider the fall-out from these products beforehand?
 
-Should the economic model that brings technological innovations with these kinds of impacts to market be fundamentally changed to accommodate advance opportunities for the rest of us to learn what the necessary questions are, ask them and consider the answers we receive?

Mama’s Song, Mary Lee Bendolph

2.         An Unintended but Better Way With Self-Driving Cars

I can’t answer these questions today, but surely they’re worth asking and returning to.
 
Instead, I'm recalling some of the data being accumulated today about self-driving/autonomous car technology, so that impacted communities will have made at least some of their moral and other preferences clear long before this transformative technology is brought to market and seduces us into dependency upon it. As noted in a post from last November:

One way to help determine what the future should look like and how it should operate is to ask people—lots of them—what they’d like to see and what they’re concerned about…In the so-called Moral Machine Experiment, these researchers asked people around the world for their preferences regarding the moral choices that autonomous cars will be called upon to make so that this new technology can match human values as well as its developer’s profit motives.

For example, if a self-driving car has to choose between hitting one person in its way or another, should it be the 6-year old or the 60-year old? People in different parts of the world would make different choices and it takes sustained investments of time and effort to gather those viewpoints.

If peoples’ moral preferences can be taken into account beforehand, the public might be able to recognize “the human face” in a new technology from the beginning instead of having to attempt damage control once that technology is in use.

Public advocates, like those in Toronto who filed suit in April, and the other Cassandras identifying potential problems also deserve a hearing. Every transformative project's (or product's or service's) dissenters as well as its proponents need opportunities to persuade those who have yet to make up their minds about whether the project is good for them, before it's on the runway or has already taken off.

Following their commentary and grappling with their concerns removes some of the dazzle in our [initial] hopes and grounds them more firmly in reality early on.

Unlike the smart-city technology that Sidewalk Labs already has ready for Toronto, the artificial intelligence systems behind autonomous vehicles have only recently been shown to be unable to make the kinds of decisions that "take into mind" a community's moral preferences. In effect, the rush towards implementation of this disruptive technology was stalled by problems with the technology itself. But this kind of pause is the exception, not the rule. The rush to market and its associated profits is powerful, making "breathers to become smarter" before product launches like this uncommon.
 
Once again, we need to consider whether such public ventilation periods should be imposed. 
 
Is there any better way to aim for the community balance between rationality and efficiency on the one hand, human variation and need on the other, that was captured by some visionary artists from the Mississippi delta?
 

+ + + 


Next week, I’m thinking about a follow-up post on smart cities that uses the “seven dangers” discussed above as a springboard for the necessary follow-up questions that Torontonians (along with the rest of us) should be asking and debating now as the tech companies aim to bring us smarter and better cities. In that regard, I’d be grateful for your thoughts on how innovation can advance when democracy gets involved.

This post was adapted from my June 30, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: capitalism, community outreach, democracy, dissent, Gees Bend quilts, Google, innovation, Quayside, Sidewalk Labs, smart cities, technology, tension between capitalism and democracy, Toronto, transformative technology

In the Wake of Charlie Hebdo: What We Hold in Common

January 11, 2015 By David Griesing 5 Comments

From a certain perspective, could those men who were shot by French police yesterday be any more inspirational?

Masked, sheathed in black, executioners pointing their weapons at a pleading, fallen policeman outside Charlie Hebdo’s offices just before they fired their fatal shots. An image on the front page of every newspaper in the world that said “You’re watching, we’re doing.”


 

We just walked into the offices of some of your grown-ups, men and women who made their livings mocking our beliefs. We shouted to God while we mowed the clowns down in a hail of bullets. Who’s laughing now?

I walked into a kosher market, because it's better when there's some punishment for the Israeli oppressors too. I called the authorities and said, "You know who I am," and oh, by the way, "If you take down my brothers, I'll kill more of these Jews." Yes, we talk to one another and work together. In fact, we will be talking for years about what we've accomplished today, and how little you could do about it.

“How just three brave men who believed in martyrdom could disrupt an arrogant nation and rivet the world’s attention” is our story. We kept our heads down long enough to escape surveillance by your overburdened security systems. There are just too many of us now for you to keep track of. And you will be reminded again that we are out here waiting. You will be reminded again very soon.

If you and your family feel unwelcomed by society in the West, or are unemployed, undervalued, feeling bored or disrespected or both just about anywhere else, this is a way to take your talent, redeem your life, find your inspiration. Yes. Jihadist recruiters had their second best week after 9/11 this week, while we mostly responded with… sentiment.


 

If you and I are not afraid, surely it’s not because of our drones, or American advisors trying to mobilize frightened Iraqi troops, or even those women brigades of Kurdish Peshmerga warriors who are maybe the closest thing we have to our own “superheroes” in the battle against militancy.

But beyond our own adolescent yearnings for fast solutions and simple justice, there is surely fear along with the tug of something deeper that calls upon us to engage with this asymmetrical challenge more seriously–far more seriously than this week’s opportunity to set down some flowers and light some candles on blood-stained sidewalks. A pretty cheap response, when it comes down to it, because it costs us so little. In a clash of world-views, do we need any more reminding that three lone gunmen (and the legions behind them) are much more serious about the drift of the world than we are?

But still…in the coming weeks, we’ll be debating racial profiling (“I am Ahmed,” after all) and how no American college would allow its student newspaper to print politically incorrect cartoons like Charlie Hebdo’s.

Surely we'll buy more guns (because after Sandy Hook, gun advocates said the tragedy might never have happened if those first-grade teachers had had their own guns), and just as surely someone will use theirs to shoot somebody who looks like the Enemy. Then, of course, we'll have polarizing arguments about what it all means. But talk is cheap too. In the coming weeks, it will still be our sentiment and endless talk around those who want to annihilate the freedoms that give us the luxury of all this sentiment and talk.

We take our values for granted. We’re no longer even sure about the ones that we share. But Said and Cherif Kouachi and Amedy Coulibaly were not confused. Going forward, there will be plenty of people who want to provide for us a black & white moral clarity (Ms. Le Pen if you’re in France, fill in the blank if you’re in the U.S.). But wouldn’t it be better if we started re-learning for ourselves how to become clearer about the values that we’re committed to?

In a recent op-ed entitled “Democracy Requires a Patriotic Education,” former dean of Yale College Donald Kagan wrote the following about what he fears we are (and are not) being taught in our schools today.

We look to education to solve the pressing current problems of our economic and technological competition with other nations, but we must not neglect the inescapable political and ethical effects of education.

 

We in the academic community have too often engaged in miseducation. . . . If we encourage rampant individualism to trample on the need for a community and common citizenship, if we ignore civic education, the forging of a single people, the building of a legitimate patriotism, we will have selfish individuals, heedless of the needs of others, the war of all against all, the reluctance to work towards the common good and to defend our country when defense is needed. (emphasis added)

Maybe you cringed when you read the words “legitimate patriotism,” but Kagan is right.

We need to figure out how to stand together again, what we hold as precious in common and would be willing to champion together. They are the values that we would be willing to fight and even die for. Try to imagine what they are if you can. Try to imagine us coming together as citizens and finding the collective spirit to fight a war like World War II today (with all hands-on-deck, not just a few “volunteers”) and you can sense the gulf between our illusion of shared purpose and the reality.

We need to bridge this divide—moving from sentiment and debate to principles we share (whatever they are)—and do so quickly, before others jump in to do it for us when we’re even more afraid. After all, is there anyone who doubts that there is a gun pointed our way, and that it could be any of us there on the ground, pleading for life?

What is necessary is not cheap, but the alternatives, well we are starting to see the alternatives.

Things fall apart; the centre cannot hold;

Mere anarchy is loosed upon the world,

The blood-dimmed tide is loosed, and everywhere

The ceremony of innocence is drowned;

The best lack all conviction, while the worst

Are full of passionate intensity.

 

(William Butler Yeats, The Second Coming)

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: Charlie Hebdo, commitment, democracy, democratic values, Donald Kagan, in common, terrorism, values, values awareness
