David Griesing | Work Life Reward Author | Philadelphia


The Giving Part of Taking Other People’s Pictures

June 14, 2021 By David Griesing Leave a Comment

It’s harder than ever to maintain, and then safeguard, our zones of privacy.
 
I’ve been thinking about it in terms of pictures that other people take of us or that we take of them—sometimes when those other people are friends, sometimes when they’re strangers, and sometimes when it’s companies or authorities who are taking them for their own purposes.
 
In these photographs, what is the line between a fair exchange (with mutual benefits) and an unwelcome intrusion?
 
What exactly are we “taking” when we take a picture of somebody?
 
(When shown their photographs, tribal people often complain that the camera has somehow stolen their souls.)
 
Is there, or should there be, a “give” as well as a “take” with photography?
 
Two encounters this week sharpened that last question for me.
 
A close colleague of mine in counseling work stopped by unannounced with some cookies to end our just concluded school year on a celebratory note. We’d been meeting with our kids on Zoom and hadn’t seen one another in person for months. She was so glad to see me that she wanted to take my picture before leaving, but I waved her gesture off. I’d stopped mowing the lawn when I saw her heading my way and felt that my sweaty appearance would have made a poor souvenir (even though she clearly felt otherwise). “What just happened?” I wondered afterwards.
 
My second encounter came by way of reminiscence.
 
Three years ago this week, I had been in New Orleans and was remembering that unbelievably rich and flavorful time, eager to go back and dig in even deeper. Part of my return trip would be taking in a “second line” street parade, because every week of the year at least one of them takes place somewhere in the City.

A “second line” street parade photo by Aeisha Palmer, May 20, 2007

As you can imagine, these parades (which are sponsored by New Orleans’ “social aid and pleasure clubs”) are a kind of paradise for professional and amateur photographers. While following a random NOLA thread last week, I came across a story about “the etiquette of making photos” of the performers at these parades. This story also speculated about the “taking and giving” boundaries of photographing other people. For example:
 
Are there different rules for friends than there are for strangers?
 
Several years ago, Susan Sontag explored these boundaries and expectations in a series of essays for the New York Review of Books, later published in her own book, On Photography. Sontag focused on the “acquisitive” nature of cameras, how they “take something” from whoever or whatever is being photographed, a sentiment that’s similar to those tribal member fears about having their essences stolen. She wrote:

To photograph is to appropriate the thing photographed.

Sontag also commented on the vicarious nature of picture taking. 

Photography has become one of the principal devices for experiencing something, [or at least] for giving the appearance of participation.

The way she saw it, we may not be marching in (or even watching) the parade, “but somehow we feel that we are” if we can capture a picture of it for savoring now and later on. Instead of “being in the moment,” we’re counting on the triggering nature of these pictures to approximate the real experience we’ve missed by “capturing enough of it” to still feel satisfied. 
 
Of course, there are consequences on both sides to this kind of “taking.” A drive to accumulate photographic experiences can not only rob us of more direct engagement with other people and places (say, the actual smells and sounds of the parade, or the conversations we might otherwise be having with spectators and participants), but it also raises questions about the boundaries we cross when we’re driven by a kind of hunger to “take” more and more of them without ever realizing the impact we’re having. To our camera’s subjects, it can feel like a violation.
 
As I’ve become more thoughtful about these impacts, it’s meant thinking through my picture-taking drive in advance.
 
What is gained and what can be lost when I’m taking somebody’s picture? What is (or should be) the etiquette around photographing others? These are questions that seem impossible to ignore since cameras are literally everywhere today, devouring what they see through their lenses.  As a result, going through some Q&A with myself by way of preparation—whether I’m likely to be the photographer or the photographed—increasingly seems like a good idea. 
 
For instance, what if strangers “who would make me a great picture” are performing in public or, even more commonly, just being themselves in a public place when I happen upon them with my camera? 
 
My most indelible experience of the latter happened at the Damascus Gate, which leads to the “Arab Quarter” in Jerusalem’s Old City. In arcs along the honey-colored steps that sweep down to that massive archway, Palestinian women, many in traditional clothes, were gathering and talking in a highly animated fashion against the backdrop of ancient battlements. But as soon as I pointed my camera in their direction to take “my perfect shot,” they raised their hands, almost as one, and shielded their faces from me. Was that ever sobering! I didn’t know whether they were protecting their souls or simply their modesty and privacy from another invasive tourist.
 
In the story about picture taking at parades in New Orleans, one photographer who is drawn by their similarly incredible visuals observed:

You really have to be present and aware and know when the right time is to take a photo. Photography can be an extractive thing, exploitative, especially now when so many people have cameras. 

To her, knowing when to shoot and when to refrain from picture taking is about reading the situation, 

a vibe. You know when somebody wants you to take their photo, and you know when somebody doesn’t.

Another regular parade photographer elaborated on her comments:

If you carry yourself the right way . . . people putting on that parade see you know how to handle yourself and will give you a beautiful shot.

I’ve also found that performers want you to portray them in the best light and will help you “to light the scene” when you make eye contact and invite them to do so. On the other hand, they will also tell you (if you’re paying attention) when the lighting is off and you should just back off.

Here’s one where I got it right, at least about “working the scene together.” 

Because everybody wants to look their best while being photographed, the same rules usually apply when the subjects aren’t part of a performance but simply out in public, being interesting by being themselves. For the would-be photographer, it’s about initiating a conversation and establishing at least a brief connection before asking: can I take your picture? If they don’t feel “looked down upon” by your interest, they’ll often agree. But as with those “on stage,” these preliminaries can also result in: “No, I’d rather that you didn’t right now,” a phrase that’s hard to hear when “a great picture” is right there in front of you if only you could “take it.”
 
One New Orleans parade regular talked about the need, whenever he knows in advance that taking pictures could be uncomfortable for those being photographed, to deepen his relationship with them before showing up with his camera. Because he takes pictures at NOLA’s legendary funeral parades, he brings club members photos that he’s taken of the deceased on prior occasions so that colleagues and family “have a record of that person’s street style.” It’s his sign of respect at what is, after all, a time for grieving a loss as well as celebrating a life.

We go and we shoot funerals and [then] it’s not a voyeuristic thing. You’re doing what you do within the context of the community

—a community that you’ve already made yourself at least “an honorary member of” through your empathy and generosity. 
 
Then, what you’re giving tends to balance what you’ll be taking.

Here’s a gentleman I’d just purchased something from at the annual flea market.

So what about my cookie-bearing friend who showed up unannounced this week? 
 
Should I have relaxed “my best foot forward” enough to permit one sweaty shot when she so clearly wanted a memento of our reunion after so many months apart?  
 
Yes, probably. 
 
But I’ve become so defensive about cameras taking my picture on every city street, whenever I ring somebody’s doorbell, and whenever I face my laptop screen, that sometimes it’s hard to recognize when “putting down my guard” is actually relationship-building and for my own good instead of some kind of robbery.
 
Where zones of personal privacy are concerned, this is a tricky time to navigate either taking pictures of somebody or being captured by one.
 
It’s one more reason to try to rehearse my camera-related transactions before I find myself, once again, in the middle of one.
 

+ + + 

 
(If you’re interested in a photo essay I posted after my last visit to New Orleans, here it is, from May, 2018. Another post, with photos taken at the Mummers Parade in January, 2019, can be found here. Taking pictures has always been a way that I recharge for work, although I’m still in the process of learning its complicated rules.)

This post was adapted from my May 30, 2021 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning and occasionally I post the content from one of them here. You can subscribe by leaving your email address in the column to the right.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Being Proud of Your Work, Building Your Values into Your Work, Continuous Learning, Daily Preparation, Introducing Yourself & Your Work Tagged With: collaboration, etiquette, giving and taking, New Orleans, photography, privacy, reciprocity, rules of the road, Second Line Parades, Susan Sontag

Who’s Winning Our Tugs-of-War Over On-Line Privacy & Autonomy?

February 1, 2021 By David Griesing Leave a Comment

We know that our on-line privacy and autonomy (or freedom from outside control) are threatened in two particularly alarming ways today: by the undisclosed privacy invasions that result from our on-line activities, and by the loss of opportunities to speak our minds without censorship.

These alarm bells ring because of the dominance of on-line social media platforms like Facebook, YouTube and Twitter and text-based exchanges like WhatsApp and the other instant messaging services—most of which barely existed a decade ago. With unprecedented speed, they’ve become the town squares of modern life where we meet, talk, shop, learn, voice opinions and engage politically. But as ubiquitous and essential as they’ve become, their costs to vital zones of personal privacy and autonomy have caused a significant backlash, and this past week we got an important preview of where this backlash is likely to take us.

Privacy advocates worry about the harmful consequences when personal data is extracted from users of these platforms and services. They say our own data is being used “against us” to influence what we buy (the targeted ads that we see and don’t see), manipulate our politics (increasing our emotional engagement by showing us increasingly polarizing content), and exert control over our social behavior (by enabling data-gathering agencies like the police, FBI or NSA). Privacy advocates are also offended that third parties are monetizing personal data “that belongs to us” in ways that we never agreed to, amounting to a kind of theft of our personal property by unauthorized strangers.

For their part, censorship opponents decry content monitors who can bar particular statements or even participation on dominant platforms altogether for arbitrary and biased reasons. When deprived of the full use of our most powerful channels of mass communication, they argue that their right to peaceably assemble is being eviscerated by what they experience as “a culture war” against them. 

Both groups say they have a privacy right to be left alone and act autonomously on-line: to make choices and decisions for themselves without undue influence from outsiders; to be free from ceaseless monitoring, profiling and surveillance; to be able to speak their minds without the threat of “silencing;” and, “to gather” for any lawful purpose without harassment. 

So how are these tugs-of-war over two of our most basic rights going?

This past week provided some important indications.

This week’s contest over on-line privacy pitted tech giant Apple against rivals with business models that depend upon selling their users’ data to advertisers and other third parties—most prominently, Facebook and Google.

Apple announced this week that it would immediately start offering the users of its market-leading smartphones additional privacy protections. One relates to its dominant App Store and developers like Facebook, Google and the thousands of other companies that sell their apps (or platform interfaces) to iPhone users.

Going forward—on what Apple chief Tim Cook calls “a privacy nutrition label”—every app that the company offers for installation on its phones will need to share its data collection and privacy practices before purchase in ways that Apple will ensure “every user can understand and act on.” Instead of reading (and then ignoring) multiple pages of legalese, for the first time every new Twitter or YouTube user for example, will be able through their iPhones to either “opt-in” or refuse an app’s data collection practices after reading plain language that describes the personal data that will be collected and what will be done with it. In a similar vein, iPhone users will gain a second advantage over apps that have already been installed on their phones. With new App Tracking Transparency, iPhone users will be able to control how each app is gathering and sharing their personal data. For every application on your iPhone, you can now choose whether a Facebook or Google has access to your personal data or not.
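To make the opt-in idea concrete, here is a minimal, conceptual sketch in Python (not Apple’s actual Swift APIs; every class and function name in it is hypothetical) of what App Tracking Transparency amounts to in practice: no app shares personal data with a third party unless the user has explicitly said yes, and the default answer is no.

```python
# A conceptual sketch only: these names are hypothetical, not Apple's APIs.
# The point is the default: nothing is shared without an explicit, per-app
# "yes" from the user.

from enum import Enum


class ConsentStatus(Enum):
    NOT_DETERMINED = "not_determined"   # the user hasn't been asked yet
    DENIED = "denied"
    AUTHORIZED = "authorized"


class TrackingConsent:
    """Records each app's authorization to share personal data off-device."""

    def __init__(self) -> None:
        self._status: dict[str, ConsentStatus] = {}

    def request(self, app_id: str, user_says_yes: bool) -> ConsentStatus:
        """Ask the user once, in plain language, and record the answer."""
        status = ConsentStatus.AUTHORIZED if user_says_yes else ConsentStatus.DENIED
        self._status[app_id] = status
        return status

    def is_authorized(self, app_id: str) -> bool:
        return self._status.get(app_id, ConsentStatus.NOT_DETERMINED) is ConsentStatus.AUTHORIZED


def share_with_ad_partner(consent: TrackingConsent, app_id: str, payload: dict) -> bool:
    """Forward data to an advertising partner only if this app has consent."""
    if not consent.is_authorized(app_id):
        return False        # the default is privacy: nothing leaves the device
    # ... send payload to the partner here ...
    return True


if __name__ == "__main__":
    consent = TrackingConsent()
    consent.request("com.example.socialapp", user_says_yes=False)
    print(share_with_ad_partner(consent, "com.example.socialapp", {"ad_id": "xyz"}))  # False
```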

While teeing up these new privacy initiatives at an industry conference this week, Apple chief Tim Cook was sharply critical of companies that take our personal data for profit, citing several of the real world consequences when they do so. I quote at length from his remarks last Thursday because I enjoyed hearing someone of Cook’s stature speaking to these issues so pointedly, and thought you might too:

A little more than two years ago…I spoke in Brussels about the emergence of a data-industrial complex… At that gathering we asked ourselves: “what kind of world do we want to live in?” Two years later, we should now take a hard look at how we’ve answered that question. 

The fact is that an interconnected ecosystem of companies and data brokers, of purveyors of fake news and peddlers of division, of trackers and hucksters just looking to make a quick buck, is more present in our lives than it has ever been. 

And it has never been so clear how it degrades our fundamental right to privacy first, and our social fabric by consequence.

As I’ve said before, ‘if we accept as normal and unavoidable that everything in our lives can be aggregated and sold, then we lose so much more than data. We lose the freedom to be human.’….

Together, we must send a universal, humanistic response to those who claim a right to users’ private information about what should not and will not be tolerated….

At Apple…, [w]e have worked to not only deepen our own core privacy principles, but to create ripples of positive change across the industry as a whole. 

We’ve spoken out, time and again, for strong encryption without backdoors, recognizing that security is the foundation of privacy. 

We’ve set new industry standards for data minimization, user control and on-device processing for everything from location data to your contacts and photos. 

At the same time that we’ve led the way in features that keep you healthy and well, we’ve made sure that technologies like a blood-oxygen sensor and an ECG come with peace of mind that your health data stays yours.

And, last but not least, we are deploying powerful, new requirements to advance user privacy throughout the App Store ecosystem…. 

Technology does not need vast troves of personal data, stitched together across dozens of websites and apps, in order to succeed. Advertising existed and thrived for decades without it. And we’re here today because the path of least resistance is rarely the path of wisdom. 

If a business is built on misleading users, on data exploitation, on choices that are no choices at all, then it does not deserve our praise. It deserves reform….

At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement — the longer the better — and all with the goal of collecting as much data as possible.

Too many are still asking the question, “how much can we get away with?,” when they need to be asking, “what are the consequences?” What are the consequences of prioritizing conspiracy theories and violent incitement simply because of their high rates of engagement? What are the consequences of not just tolerating, but rewarding content that undermines public trust in life-saving vaccinations? What are the consequences of seeing thousands of users join extremist groups, and then perpetuating an algorithm that recommends even more?….

[N]o one needs to trade away the rights of their users to deliver a great product. 

With its new “data nutrition labels” and “app tracking transparency,” many (if not most) of Apple’s iPhone users are likely to reject other companies’ data collection and sharing practices once they understand the magnitude of what’s being taken from them. Moreover, these votes for greater data privacy could be a major financial blow to the companies extracting our data because Apple sold more smartphones globally than any other vendor in the last quarter of 2020, almost half of Americans use iPhones (45.3% of the market according to one analyst), more people access social media and messaging platforms from their phones than from other devices, and the personal data pipelines these data extracting companies rely upon could start constricting immediately.   
 
In this tug-of-war between competing business models, the outcry this week was particularly fierce from Facebook, which one analyst predicts could start to take “a 7% revenue hit” (that’s real cash at $6 billion) as early as the second quarter of this year. (Facebook’s revenue take in 2020 was $86 billion, much of it from ad sales fueled by user data.) Mark Zuckerberg charged that Apple’s move tracks its competitive interests, saying its rival “has every incentive to use their dominant platform position to interfere with how our apps and other apps work,” among other things, a dig at on-going antitrust investigations involving Apple’s App Store. In a rare expression of solidarity with the little guy, Zuckerberg also argued that small businesses which access customers through Facebook would suffer disproportionately from Apple’s move because of their reliance on targeted advertising. 
 
There’s no question that Apple was flaunting its righteousness on data privacy this week and that Facebook’s “ouches” were the most audible reactions. But there is also no question that a business model fueled by the extraction of personal data has finally been challenged by another dominant market player. In coming weeks and months we’ll find out how interested Apple users are in protecting their privacy on their iPhones and whether their eagerness prompts other tech companies to offer similar safeguards. We’ll get signals from how advertising dollars are being spent as the “underlying profile data” becomes more limited and less reliable. We may also begin to see the gradual evolution of an on-line public space that’s somewhat more respectful of our personal privacy and autonomy.
 
What’s clearer today is that tech users concerned about the privacy of their data and freedom from data-driven manipulation on-line can now limit at least some of the flow of that information to unwelcome strangers in ways that they never had at their disposal before.

All of us should be worried about censorship of our views by content moderators at private companies (whether in journalism or social media) and by governmental authorities that wish to stifle dissenting opinions.  But many of the strongest voices behind regulating the tech giants’ penchant “to moderate content” today come from those who are convinced that press, media and social networking channels both limit access to and censor content from those who differ with “their liberal or progressive points of view.” Their opposition speaks not only to the extraordinary dominance of these tech giants in the public square today but also to the air of grievance that colors the political debates that we’ve been having there.
 
Particularly after President Trump’s removal from Facebook and Twitter earlier this month and the temporary shutdown of social media upstart Parler after Amazon cut off its cloud computing services, there has been a concerted drive to find new ways for individuals and groups to communicate with one another on-line in ways that cannot be censored or “de-platformed” altogether. Like the tug-of-war over personal data privacy, a new polarity over on-line censorship and the ways to get around it could fundamentally alter the character of our on-line public squares.
 
Instead of birthing a gaggle of new “Right-leaning” social media companies with managers who might still be tempted to interfere with irritating content, blockchain software technology is now being utilized to create what amount to “moderation-proof” communication networks.
 
For a primer on basic blockchain mechanics, this is how I described them here in 2018.

A blockchain is a web-based chain of connections, most often with no central monitor, regulator or editor. Its software applications enable every node in its web of connections to record data which can then be seen and reviewed by every other connection. It maintains its accuracy through this transparency. Everyone with access can see what every other connection has recorded in what amounts to a digital ledger…

Blockchain-based software can be launched by individuals, organizations or even governments. Software access can be limited to a closed network of participants or open to everyone. A blockchain is usually established to overcome the need for and cost of a “middleman” (like a bank) or some other impediment (like currency regulations, tariffs or burdensome bureaucracy). It promotes “the freer flow” of legal as well as illegal goods, services and information. Blockchain is already driving both modernization and globalization. Over the next several years, it will also have profound impacts on us as individuals. 
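To make the “digital ledger” idea more concrete, here is a minimal sketch in Python of the core mechanism: each block carries a cryptographic hash of the block before it, so tampering with any earlier entry is visible to everyone holding a copy of the chain. It illustrates the general concept only, not any particular blockchain’s implementation.

```python
# A minimal sketch of the "digital ledger" idea: each block records a hash of
# the previous block, so altering any earlier record breaks the chain and is
# detectable by every participant holding a copy.

import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically with SHA-256."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()


def new_block(data: str, prev_hash: str) -> dict:
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}


def append(chain: list, data: str) -> None:
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append(new_block(data, prev_hash))


def is_valid(chain: list) -> bool:
    """Every block must reference the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True


if __name__ == "__main__":
    ledger = []
    append(ledger, "Alice pays Bob 5")
    append(ledger, "Bob pays Carol 2")
    print(is_valid(ledger))                    # True
    ledger[0]["data"] = "Alice pays Bob 500"   # tamper with history
    print(is_valid(ledger))                    # False: the change is detectable
```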

If you’d benefit from a visual description, this short video from The MIT Technology Review will also show you the basics of this software innovation.
 
I’ve written several times before about the promise of blockchain-driven systems. For example, Your Work is About to Change Forever (about a bitcoin-type financial future without banks or traditional currencies); Innovation Driving Values (how secure and transparent recording of property rights like land deeds can drive economic progress in the developing world); Blockchain Goes to Work (how this software can enable gig economy workers to monetize their work time in a global marketplace); Data Privacy & Accuracy During the Coronavirus (how a widely accessible global ledger that records accurate virus-related information can reduce misinformation); and, with some interesting echoes today, a 2017 post called Wish Fulfillment (about why a small social media platform called Steem-It was built on blockchain software).
 
Last Tuesday, the New York Times ran an article titled: They Found a Way to Limit Big Tech’s Power: Using the Design of Bitcoin. That “Design” in the title was blockchain software. The piece highlighted:

a growing movement by technologists, investors and everyday users to replace some of the internet’s basic building blocks in ways that would be harder for tech giants like Facebook or Google [or, indeed, anyone outside of these self-contained platforms] to control.

Among other things, the article described how those “old” internet building blocks would be replaced by blockchain-driven software, enabling social media platforms that would be the successors to the one that Steem-It built several years ago. However, while Steem-It wanted to provide a safe and reliable way to pay contributors for their social media content, in this instance the overriding drive is “to make it much harder for any government or company to ban accounts or delete content.”

It’s both an intoxicating and a chilling possibility.

While the Times reporter hinted at the risks with ominous quotes and references to the creation of “a decentralized web of hate,” it’s worth noting that nothing like it has materialized, yet. Also implied but never discussed was the urgency that many feel to avoid censorship of their minority viewpoints by people like Twitter’s Jack Dorsey or even the New York Times editors who effectively decide what to report on and what to ignore. So what’s the bottom line in this tech-enabled tug-of-war between political forces?

The public square that we occupy daily—for communication and commerce, family connection and dissent—a public square that the dominant social media platforms largely provide, cannot (and must not) be governed by @Jack, the sensibilities of mainstream media, or any group of esteemed private citizens like Facebook’s recently appointed Oversight Board. One of the most essential roles of government is to maintain safety and order in, and to set forth the rules of the road for, our public square. Because blockchain-enabled social networks will likely be claiming more of that public space in the near future—even as they strive to evade its common obligations through encryption and otherwise—government can and should enforce the rules for this brave new world.

Until now, our government has failed to confront either on-line censorship or its foreseeable consequences. Because our on-line public square has become (in a few short years) as essential to our way of life as our electricity or water, its social media and similar platforms should be licensed and regulated like those basic services, that is, like utilities—not only for our physical safety but also for the sake of our democratic institutions, which survived their most recent tests but may not survive their next ones if we fail to govern ourselves and our awesome technologies more responsibly.

In this second tug-of-war, we don’t have a moment to lose.

This post was adapted from my January 31, 2021 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning. You can sign up by leaving your email address in the column to the right.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself Tagged With: app tracking transparency, Apple, autonomy, blockchain, censorship, commons, content monitoring, facebook, freedom of on-line assembly, human tech, privacy, privacy controls, privacy nutrition label, public square, social media platforms

Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing Leave a Comment

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled, personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the right of privacy that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on to these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything that they need to know about us?

Overwhelmed by the draw of  “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.”  Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to relinquish the advantages of these platforms when they are unavailable elsewhere. 
 
So is there really “no way out”?  
 
A rising crescendo of voices is gradually finding a way, and they are coming at it from several different directions.
 
In places like Toronto, London, Helsinki, Chicago and Barcelona, policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1. Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past six months, I’ve talked repeatedly about smart-city technologies, and one of you reached out to me this week wondering: “What (exactly) are these new ‘technologies’?” (Thanks for your question, George!).
 
As a general matter, smart-city technologies gather and analyze information about how a city functions, while improving urban decision-making around that new information. Throughout, these data-gathering,  analyzing, and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
 
Of course, the potential benefits of greater or more equitable access to city services as well as their optimized delivery are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that’s currently being wasted, permitting their reinvestment into areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve been describing the optimism that drove Toronto to launch its smart-cities initiative, Quayside, and how the debate around it has entered a stormy patch more recently. Amidst the finger-pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-undisclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a City-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart cities technology vendors to announce its intentions up front about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective– why is the AI needed and what outcomes is it intended to enable?
  • Use– in what processes and circumstances is the AI appropriate to be used?
  • Impacts– what impacts, good and bad, could the use of AI have on people?
  • Assumptions– what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data– what data is/was the AI trained on, and what are its limitations and potential biases?
  • Inputs– what new data does the AI use when making decisions?
  • Mitigation– what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics– what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight– what human judgment is needed before acting on the AI’s output and who is responsible for ensuring its proper use?
  • Evaluation– how and by what criteria will the effectiveness of the AI in this smart-city system be assessed and by whom?

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, the initial ventilation process takes time and hard work. Moreover, it is difficult (and maybe impossible) to conduct if negotiations with the technology vendor are on-going or that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational, tech-driven opportunities are presented to the public.


2. A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process, where people who never formed a full-blown preference on an issue like personal data privacy (or were simply reluctant to express it because the trade-offs for “free” platforms seemed acceptable to everybody else) now become more comfortable giving voice to their original qualms. In support, he cites a remarkable study about how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work-lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about the issues and to champion what is important to us in dialogue with the people who live and work alongside us. More grounds for protecting our personal information are coming out of the smart-cities debate, and we are already deciding where new privacy lines should be drawn around us.

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Continuous Learning Tagged With: Ai, artificial intelligence, Cass Sunstein, dataveillance, democracy, how change happens, norms, personal data brokers, personal privacy, privacy, Quayside, Sidewalk Labs, smart cities, Smart City, surveillance capitalism, Toronto, values

Looking Out For the Human Side of Technology

October 28, 2018 By David Griesing Leave a Comment

Maintaining human priorities in the face of new technologies always feels like “a rearguard action.” You struggle to prevent something bad from happening even when it seems like it may be too late.

The promise of the next tool or system intoxicates us. Smart phones, social networks, gene splicing.  It’s the super-computer at our fingertips, the comfort of a boundless circle of friends, the ability to process massive amounts of data quickly or to short-cut labor intensive tasks, the opportunity to correct genetic mutations and cure disease. We’ve already accepted these promises before we pause to consider their costs—so it always feels like we’re catching up and may not have done so in time.

When you’re dazzled by possibility and the sun is in your eyes, who’s thinking “maybe I should build a fence?”

The future that’s been promised by tech giants like Facebook is not “the win-win” that we thought it was. Their primary objectives are to serve their financial interests—those of their founder-owners and other shareholders—by offering efficiency benefits like convenience and low cost to the rest of us. But as we’ve belatedly learned, they’ve taken no responsibility for the harms they’ve also caused along the way, including exploitation of our personal information, the proliferation of fake news and jeopardy to democratic processes, as I argued here last week.

Technologies that are not associated with particular companies also run with their own promise until someone gets around to checking them–a technology like artificial intelligence or AI for example. From an ethical perspective, we are usually playing catch up ball with them too. If there’s a buck to be made or a world to transform, the discipline to ask “but should we?” always seems like getting in the way of progress.

Because our lives and work are increasingly impacted, the stories this week throw additional light on the technology juggernaut that threatens to overwhelm us and on our “rearguard” attempts to tame it with our human concerns.

For a fuller appreciation of the problem regarding Facebook, a two-part Frontline documentary will be broadcast this week that is devoted to what one reviewer calls “the amorality” of the company’s relentless focus on adding users and compounding ad revenues while claiming to create the on-line “community” that all of us should want in the future. (The show airs tomorrow, October 29 at 9 p.m. and on Tuesday, October 30 at 10 p.m. EST on PBS.)

Frontline’s reporting covers Russian interference in past and current election cycles, Facebook’s role in whipping Myanmar’s Buddhists into a frenzy over its Rohingya minority, and how strongmen like Rodrigo Duterte in the Philippines have been manipulating the site to achieve their political objectives. Facebook CEO Mark Zuckerberg’s limitations as a leader are explored from a number of directions, but none as compelling as his off-screen impact on the five Facebook executives who were “given” to James Jacoby (the documentary’s director, writer and producer) to answer his questions. For the reviewer:

That they come off like deer in Mr. Jacoby’s headlights is revealing in itself. Their answers are mealy-mouthed at best, and the defensive posture they assume, and their evident fear, indicates a company unable to cope with, or confront, the corruption that has accompanied its absolute power in the social media marketplace.

You can judge for yourself. You can also ponder whether this is like holding a gun manufacturer liable when one of its guns is used to kill somebody.  I’ll be watching “The Facebook Dilemma” for what it has to say about a technology whose benefits have obscured its harms in the public mind for longer than it probably should have. But then I remember that Facebook barely existed ten years ago. The most important lesson from these Frontline episodes may be how quickly we need to get the stars out of our eyes after meeting these powerful new technologies if we are to have any hope of avoiding their most significant fallout.

Proceed With Caution

I was also struck this week by Apple CEO Tim Cook’s explosive testimony at a privacy conference organized by the European Union.

Not only was Cook bolstering his own company’s reputation for protecting Apple users’ personal information, he was also taking aim at competitors like Google and Facebook for implementing a far more harmful business plan, namely, selling user information to advertisers, reaping billions in ad dollar revenues in exchange, and claiming the bargain is providing their search engine or social network to users for “free.” This is some of what Cook had to say to European regulators this week:

Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.

These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.

Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.

“Weaponized” technology delivered with “military efficiency.” “A data-industrial complex.” One of the benefits of competition is that rivals call you out, while directing unwanted attention away from themselves. One of my problems with tech giant Amazon, for example, is that it lacks a neck-and-neck rival to police its business practices, so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.

If data privacy were as paramount to Apple as it sounded this week, it would be impossible to reconcile Apple’s receiving more than $5 billion a year from Google to make it the default search engine on all Apple devices. However complicit in today’s tech bargains, Apple pushed its rivals pretty hard this week to modify their business models and become less cynical about their use of our personal data as the focus on regulatory oversight moves from Europe to the U.S.

Keeping Humans in the Tech Equation

Technologies that aren’t proprietary to a particular company but are instead used across industries require getting over additional hurdles to ensure that they are meeting human needs and avoiding technology-specific harms for users and the rest of us. This week, I was reading up on a positive development regarding artificial intelligence (AI) that only came about because serious concerns were raised about the transparency of AI’s inner workings.

AI’s ability to solve problems (from processing big data sets to automating steps in a manufacturing process or tailoring a social program for a particular market) is only as good as the algorithms it uses. Given concern about personal identity markers such as race, gender and sexual preference, you may already know that an early criticism of artificial intelligence was that an author of an algorithm could be unwittingly building her own biases into it, leading to discriminatory and other anti-social results. As a result, various countermeasures are being undertaken to keep these kinds of biases out of AI code. With that in mind, I read a story this week about another systemic issue with AI processing: its “explainability.”

It’s the so-called “black box” problem. If users of systems that depend on AI don’t know how they work, they won’t trust them. Unfortunately, one of the prime advantages of AI is that it solves problems that are not easily understood by users, which presents the quandary that AI-based systems might need to be “dumbed-down” so that the humans using them can understand and then trust them. Of course, no one is happy with that result.

A recent article in Forbes describes the trust problem that users of machine-learning systems experience (“interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control”) along with some of the experts who have been feeling that anxiety (cancer specialists who agreed with a “Watson for Oncology” system when it confirmed their judgments but thought it was wrong when it failed to do so because they couldn’t understand how it worked).

In a positive development, a U.S. Department of Defense agency called DARPA (or Defense Advanced Research Projects Agency) is grappling with the explainability problem. Says David Gunning, a DARPA program manager:

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.

In other words, these systems will get better at explaining themselves to their users, thereby overcoming at least some of the trust issue.

DARPA is investing $2 billion in what it calls “third-wave AI systems…where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” according to Gunning. At least with the future of warfare at stake, a problem like “trust” in the human interface appears to have stimulated a solution. At some point, all machine-learning systems will likely be explaining themselves to the humans who are trying to keep up with them.
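As a small illustration of what “explaining their rationale” can look like in practice, here is a sketch of one simple, model-agnostic technique called permutation importance: shuffle one input at a time and measure how much a model’s accuracy drops, so users can at least see which inputs its predictions actually depend on. This is only an illustration of the idea, not DARPA’s approach or any particular product’s method; the toy data and the stand-in model below are made up.

```python
# A minimal sketch of one explanation technique: permutation importance.
# Shuffle one input feature at a time and measure how much accuracy drops;
# the features that matter most degrade accuracy most when shuffled.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends heavily on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500) > 0).astype(int)


def model_predict(X: np.ndarray) -> np.ndarray:
    """Stand-in for a trained black-box model."""
    return (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)


def accuracy(pred: np.ndarray, y: np.ndarray) -> float:
    return float((pred == y).mean())


def permutation_importance(X: np.ndarray, y: np.ndarray, n_repeats: int = 10) -> np.ndarray:
    """Average drop in accuracy when each feature is shuffled independently."""
    baseline = accuracy(model_predict(X), y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            # Break this feature's relationship to the label, leave the rest intact.
            X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
            drops[j] += baseline - accuracy(model_predict(X_shuffled), y)
    return drops / n_repeats


if __name__ == "__main__":
    for j, drop in enumerate(permutation_importance(X, y)):
        print(f"feature {j}: accuracy drop when shuffled = {drop:.3f}")
```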

Moving beyond AI, I’d argue that there is often as much “at stake” as successfully waging war when a specific technology is turned into a consumer product that we use in our workplaces and homes.

While there is heightened awareness today about the problems that Facebook poses, few were raising these concerns even a year ago despite their toxic effects. With other consumer-oriented technologies, there are a range of potential harms where little public dissent is being voiced despite serious warnings from within and around the tech industry. For example:

– how much is our time spent on social networks—in particular, how these networks reinforce or discourage certain of our behaviors—literally changing who we are?  
 
– since our kids may be spending more time with their smart phones than with their peers or family members, how is their personal development impacted, and what can we do to put this rabbit even partially back in the hat now that smart phone use seems to be a part of every child’s rite of passage into adulthood?
 
– will privacy and surveillance concerns become more prevalent when we’re even more surrounded than we are now by “the internet of things” and as our cars continue to morph into monitoring devices—or will there be more of an outcry for reasonable safeguards beforehand? 
 
– what are employers learning about us from our use of technology (theirs as well as ours) in the workplace and how are they using this information?

The technologies that we use demand that we understand their harms as well as their benefits. I’d argue that we need to become more proactive about voicing our concerns and using the tools at our disposal (including the political process) to insist that company profit and consumer convenience are not the only measures of a technology’s impact.

Since the invention of the printing press half a millennium ago, it’s always been hard but necessary to catch up with technology and to try to tame its excesses as quickly as we can.

This post was adapted from my October 28, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning Tagged With: Amazon, Apple, ethics, explainability, facebook, Google, practical ethics, privacy, social network harms, tech, technology, technology safeguards, the data industrial complex, workplace ethics
