David Griesing | Work Life Reward Author | Philadelphia


An Unhappy Healthcare System

November 19, 2019 By David Griesing

It came as no surprise. 

After writing about the growing challenges and responsibilities facing medical professionals and patients last week, I happened upon two posts about the burnout rates for the professionals who are charged with promoting health and healing in the rest of us.

The first was a PBS Newshour segment about the extent of the problem and, more recently, some possible cures. It cited studies from recent years showing that doctors commit suicide at twice the rate of the general population, that 86 percent of the nurses at one hospital met the criteria for burnout syndrome, that 22 percent had symptoms of post-traumatic stress disorder, and that the PTSD numbers for critical care nurses were comparable to those of war veterans returning from Afghanistan and Iraq. The reporter described what is happening as “a public health crisis.” In a small ray of hope, providers have also begun to create outlets—like arts programs—so healthcare workers can “process some of the trauma” they are experiencing on a daily basis and begin to recover.

The second post, in Fast Company, discussed the most stressful jobs that are being done by women today, including the category “nurses, psychiatric healthcare providers and home health aides.” It noted that registered nurses are 14-16% more likely to have poor cardiovascular health than the rest of the workforce, “a surprising result” because the job is so physically active and nurses are more knowledgeable about the risk factors for cardiovascular disease than the workforce in general.

Several of you who work in the health care industry wrote to me this week about your experiences at work, which (sadly) mirror these discouraging reports.

The other follow-up development relates to the data that is being gathered from patients by the health care industry. Earlier this week, the Wall Street Journal reported that Google had struck “a secret deal” with Ascension, one of the nation’s largest hospital systems, to gather and analyze patient data including lab results, doctor diagnoses and hospital records. Called Project Nightingale by Google and “Project Nightmare” by others, the data extraction and analysis “amounts to a complete health history, including patient names and dates of birth.” Having all of our medical information instantly available for analysis in one place is clearly a game changer.

The first alarm bells over Project Nightingale involved the privacy of patient data. (Indeed, the day after its initial report, the Journal reported that the government had launched an investigation into Google’s medical data gathering on the basis of these concerns.) Among the privacy-related questions: will access to a patient’s data be restricted to practitioners who are involved in improving that patient’s outcomes? If this data can be used by others, how will it be used and how is the hospital system ensuring that those uses are consistent with that provider’s privacy policies? The governing statute, the Health Insurance Portability and Accountability Act of 1996, provides only the loosest of restrictions today. (Hospitals can share data with business partners without telling patients as long as the information is used “only to help the covered entity carry out its health care functions.”)

On the positive side, the aggregation of patient data can facilitate more accurate diagnoses and more effective patient treatment.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning, that zeroes in on individual patients to suggest changes to their care.

More troubling, given Medicine’s continued drift from “profession” to “business,” is how providers can realize more profits from their patients by prescribing more medications, tests and procedures. How can patients distinguish between what they truly need to promote their healing and what is profit-making by the health care provider? As the Journal story also reports:

Ascension, the second-largest health system in the U.S., aims in part to improve patient care. It also hopes to mine data to identify additional tests that could be necessary or other ways in which the system could generate more revenue from patients, documents show.

How will patients be protected from unnecessary interventions and expense, or, unlike today, be given the industry reporting on medical outcomes that they need to protect themselves? As I argued last week, the ethical responsibilities for everyone in healthcare–including for patients–are shifting in real time.
 
Earlier this year, I posted (here, here and here) on a similar Google initiative regarding smart cities. In places like Toronto, the company is helping government to gather and “crunch” data that will allow their cities to operate “smarter” and provide greater benefits for their citizens from the efficiencies that are achieved. As with Project Nightingale, there are privacy concerns that Google is attempting to address. But there are also key differences between this tech giant’s plans for monetizing citizen data in smart cities and its plans for monetizing patient data in the medical system.
 
In healthcare, your most personal information is being taken and used. This data is far more vital to your personal integrity and survival than information about your local traffic patterns or energy usage.

Moreover, in smart cities there are governments and long-established regulatory bodies that can channel citizen concerns back to government and its tech consultants, like Google. Because these interfaces are largely absent in health care, monitoring and enforcement are up to individual patients or hospital-sponsored patients’ rights committees. In other words, if you (as a patient) aren’t “watching the store,” almost no one will be doing so on your behalf.
 
To this sort of concern, Google responds both early and often, “Trust us. We’ve got your interests at heart,” but there are many reasons to be skeptical.  Another Fast Company article that was posted yesterday documented (with a series of links) some of Google’s recent history mishandling user data.

Google has gotten in trouble with European lawmakers for failing to disclose how it collects data and U.S. regulators for sucking up information on children and then advertising to them. The company has exposed the data of some 52 million users thanks to a bug in its Google+ API, a platform that has been shutdown. Even in the field of health, it has already made missteps. In 2017, the U.K.’s Information Commissioner’s Office found the way patient data was shared between the Royal Free Hospital of London and [Google affiliate] DeepMind for a health project to be unlawful. The app involved…has since controversially been moved under the Google Health umbrella. More recently, a lawsuit accused Google, the University of Chicago Medical Center, and the University of Chicago of gross misconduct in handling patient records.

Much of moving forward here depends on trust.

Will health care providers, which suddenly have the profit-making potential of big data, protect us as patients or see us only as revenue generators?

Can these providers “master the learning curve” quickly enough to prevent sophisticated consultants like Google from exploiting us, or will the fox effectively be running the chicken coop going forward?

What will Google and the other data-gatherers do to recover trust that seems to be damaged almost daily wherever their revenues depend upon selling our data to advertisers and others who want to influence us?

Is Google’s business model simply incompatible with the business model that is evolving today in health care?

As the future of medicine gets debated, we all have a say in the matter.

This post was adapted from my November 17, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: Ascension Health System, big data, Google, health care industry, medical professional burnout, nurse burnout, patient due diligence, patient privacy rights, Project Nightingale, provider profit motives, PTSD in medical profession, unnecessary medical treatments

Democracy Collides With Technology in Smart Cities

July 1, 2019 By David Griesing

There is a difference between new technology we’ve already adopted without thinking it through and new technology that we still have the chance to tame before its harms start overwhelming its benefits.
 
Think about Google, Facebook, Apple and Amazon with their now essential products and services. We fell in love with their whiz-bang conveniences so quickly that their innovations became a part of our lives before we recognized their downsides. Unfortunately, now that they’ve gotten us hooked, it’s also become our problem (or our struggling regulators’ problem) to manage the harms caused by their products and services.
 
-For Facebook and Google, those disruptions include surveillance-dominated business models that compromise our privacy (and maybe our autonomy) when it comes to our consumer, political and social choices.
 
-For Apple, it’s the impact of constant smart phone distraction on young people whose brain power and ability to focus are still developing, and on the rest of us who look at our phones more than our partners, children or dogs.
 
-For these companies (along with Amazon), it’s also been the elimination of competitors, jobs and job-related community benefits without their upholding the other leg of the social contract, which is to give back to the economy they are profiting from by creating new jobs and benefits that can help us sustain flourishing communities.
 
Since we’ll never relinquish the conveniences these tech companies have brought, we’ll be struggling to limit their associated damages for a very long time. But a distinction is important here. 
 
The problem is not with these innovations but in how we adopted them. Their amazing advantages overwhelmed our ability as consumers to step back and see everything that we were getting into before we got hooked. Put another way, the capitalist imperative to profit quickly from transformative products and services overwhelmed the small number of visionaries who were trying to imagine for the rest of us where all of the alligators were lurking.
 
That is not the case with the new smart city initiatives that cities around the world have begun to explore. 
 
Burned and chastened, Toronto responded with a critical mass of caution (as well as outrage) when Google affiliate Sidewalk Labs proposed a smart-city initiative there. Active and informed guardians of the social contract are now negotiating with a profit-driven company like Sidewalk Labs to ensure that its innovations will also serve their city’s long- and short-term needs while minimizing the foreseeable harms.
 
Technology is only as good as the people who are managing it.

For the smart cities of the future, that means engaging everybody who could be benefitted as well as everybody who could be harmed long before these innovations “go live.” A fundamentally different value proposition becomes possible when democracy has enough time to collide with the prospects of powerful, life-changing technologies.

Irene Williams used remnants from football jerseys and shoulder pads to portray her local environs in Strip Quilt, 1960-69

1. Smart Cities are Rational, Efficient and Human

I took a couple of hours off from work this week to visit a small exhibition of new arrivals at the Philadelphia Museum of Art. 
 
To the extent that I’ve collected anything over the years, it has been African art and textiles, mostly because locals had been collecting these artifacts for years and interesting, affordable items would come up for sale from time to time. I learned about the traditions behind the wood carvings or bark cloth I was drawn to, and gradually got hooked on their radically different ways of seeing the world.
 
Some of those perspectives—particularly regarding reduction of familiar, natural forms to abstracted ones—extended into the homespun arts of the American South, particularly in the Mississippi Delta. 
 
A dozen or so years ago, quilts from rural Alabama communities like Gee’s Bend captured the art world’s attention, and my local museum just acquired some of these quilts along with other representational arts that came out of the former slave traditions in the American South. The picture at the top (of Loretta Pettway’s Roman Stripes Variation Quilt) and the other pictures here are from that new collection.
 
One echo in these quilts to smart cities is how they represent “maps” of their Delta communities, including rooflines, pathways and garden plots as a bird that was flying over, or even God, might see them. There is rationality—often a grid—but also local variation, points of human origination that are integral to their composition. As a uniquely American art form, these works can be read to combine the essential elements of a small community in boldly stylized ways. 
 
In their economy and how they incorporate their creator’s lived experiences, I don’t think that it’s too much of a stretch to say that they capture the essence of community that’s also coming into focus in smart city planning.
 
Earlier this year, I wrote about Toronto’s smart city initiative in two posts. The first was Whose Values Will Drive Our Future?–the citizens who will be most affected by smart city technologies or the tech companies that provide them. The second was The Human Purpose Behind Smart Cities. Each applauded Toronto for using cutting edge approaches to reclaim its Quayside neighborhood while also identifying some of the concerns that city leaders and residents will have to bear in mind for a community supported roll-out. 
 
For example, Robert Kitchin flagged seven “dangers” that haunt smart city plans as they’re drawn up and implemented. They are the dangers of taking a one-size-fits-all-cities approach; assuming the initiative is objective and “scientific” instead of biased; believing that complex social problems can be reduced to technology hurdles; having smart city technologies replacing key government functions as “cost savings” or otherwise; creating brittle and hackable tech systems that become impossible to maintain; being victimized as citizens by pervasive “dataveillance”; and reinforcing existing power structures and inequalities instead of improving social conditions.
 
Google’s Sidewalk Labs (“Sidewalk”) came out with its Master Innovation and Development Plan (“Plan”) for Toronto’s Quayside neighborhood this week. Unfortunately, against a crescendo of outrage over tech company surveillance and data privacy over the past 9 months, Sidewalk did a poor job of staying in front of the public relations curve by failing to consult the community regularly about its intentions. The result has been rising skepticism among Toronto’s leaders and citizens about whether Sidewalk can be trusted to deliver what it promised.
 
Toronto’s smart cities initiative is managed by an umbrella entity called Waterfront Toronto that was created by the city’s municipal, provincial and national governments. Sidewalk also has a stake in that entity, which has a high-powered board and several advisory boards with community representatives.

Last October one of those board members, Ann Cavoukian, who had recently been Ontario’s information and privacy commissioner, resigned in protest because she came to believe that Sidewalk was reneging on its promise to render all personal data anonymous immediately after it was collected. She worried that Sidewalk’s data collection technologies might identify people’s faces or license plates and potentially be used for corporate profit, despite Sidewalk’s public assurance that it would never market citizen-specific data. Cavoukian felt that leaving anonymity enforcement to a new and vaguely described “data trust” that Sidewalk intended to propose was unacceptable and that other “[c]itizens in the area don’t feel that they’ve been consulted appropriately” about how their privacy would be protected either.
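
To make the promise Cavoukian wanted enforced more concrete, here is a minimal sketch of what “anonymizing at the point of collection” could look like: a raw identifier such as a license plate is replaced with a salted, one-way hash before any record leaves the sensor. This is my own illustration under stated assumptions, not Sidewalk’s actual design; the field names and salt handling are hypothetical.

```python
# Hypothetical sketch: pseudonymize an identifier (e.g., a license plate)
# at the moment of collection, before the record is stored or analyzed.
# Not Sidewalk Labs' actual pipeline; names and structure are illustrative.
import hashlib
import os

SALT = os.urandom(16)  # held on the collection device, never stored with the data

def pseudonymize(identifier: str) -> str:
    """Replace a raw identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

def collect_observation(license_plate: str, location: str, timestamp: str) -> dict:
    """Build the record that leaves the sensor: the raw identifier never survives."""
    return {
        "subject": pseudonymize(license_plate),
        "location": location,
        "timestamp": timestamp,
    }

record = collect_observation("ABCD 123", "King St & Spadina Ave", "2018-10-15T08:30:00")
print(record)  # contains a hash, a place and a time, but not the plate itself
```

Whether a hash like this counts as truly “anonymous” is exactly the kind of question a data trust or a regulator would still have to answer, since pseudonymized records can sometimes be re-identified when combined with other data.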
 
This April, a civil liberties coalition sued the three Canadian governments that created Waterfront Toronto over privacy concerns which appeared premature because Sidewalk’s actual Plan had yet to be submitted. When Sidewalk finally did so this week, the governments’ senior representative at Waterfront Toronto publicly argued that the Plan goes “beyond the scope of the project initially proposed” by, among other things, including significantly more City property than was originally intended and “demanding” that the City’s existing transit network be extended to Quayside.
 
Data privacy and surveillance concerns also persisted. A story this week about the Plan announcement and government push-back also included criticism that Sidewalk “is coloring outside the lines” by proposing a governance structure like “the data trust” to moderate privacy issues instead of leaving that issue to Waterfront Toronto’s government stakeholders. While Sidewalk said it welcomed this kind of back and forth, there is no denying that Toronto’s smart city dreams have lost a great deal of luster since they were first floated.
 
How might things have been different?
 
While it’s a longer story for another day, some years ago I was project lead on importing liquefied natural gas into Philadelphia’s port, an initiative that promised to bring over $1 billion in new revenues to the city. Unfortunately, while we were finalizing our plans with builders and suppliers, concerns that the Liberty Bell would be taken out by gas explosions (and other community reactions) were inadequately “ventilated,” depriving the project of key political sponsorship and weakening its chances for success. Other factors ultimately doomed this LNG project, but our failure to consistently build support for a project that concerned the community certainly contributed. Despite having a vaunted community consensus builder in Dan Doctoroff at its helm, Sidewalk (and Google) appear to be fumbling this same ball in Toronto today.
 
My experience, along with Doctoroff’s and others’, goes some distance towards proving why profit-oriented companies are singularly ill-suited to take the lead on transformative, community-impacting projects. Why? Because it’s so difficult to justify financially the years of discussions and consensus building that are necessary before an implementation plan can even be drafted. Capitalism is efficient and “economical” but democracy, well, it’s far less so.
 
Argued another way, if I’d had the time and funding to build a city-wide consensus around how significant new LNG revenues would benefit Philadelphia’s residents before the financial deals for supply, construction and distribution were being struck, there could have been powerful civic support built for the project and the problems that ultimately ended it might never have materialized. 
 
This anecdotal evidence from Toronto and Philadelphia raises some serious questions: 
 
-Should any technology that promises to transform people’s lives in fundamental ways (like smart cities or smart phones) be “held in abeyance” from the marketplace until its impacts can be debated and necessary safeguards put in place?
 
-Might a mandated “quiet period” (like that imposed by regulators in the months before public stock offerings) be better than leaving tech companies to bombard us with seductive products that make them richer but many of us poorer because we never had a chance to consider the fall-out from these products beforehand?
 
-Should the economic model that brings technological innovations with these kinds of impacts to market be fundamentally changed to accommodate advance opportunities for the rest of us to learn what the necessary questions are, ask them and consider the answers we receive?

Mama’s Song, Mary Lee Bendolph

3. An Unintended but Better Way With Self-Driving Cars

I can’t answer these questions today, but surely they’re worth asking and returning to.
 
Instead, I’m recalling some of the data that is being accumulated today about self-driving/autonomous car technology so that the impacted communities will have made at least some of their moral and other preferences clear long before this transformative technology has been brought to market and seduced us into dependency upon it. As noted in a post from last November:

One way to help determine what the future should look like and how it should operate is to ask people—lots of them—what they’d like to see and what they’re concerned about…In the so-called Moral Machine Experiment, these researchers asked people around the world for their preferences regarding the moral choices that autonomous cars will be called upon to make so that this new technology can match human values as well as its developer’s profit motives.

For example, if a self-driving car has to choose between hitting one person in its way or another, should it be the 6-year-old or the 60-year-old? People in different parts of the world would make different choices, and it takes sustained investments of time and effort to gather those viewpoints.
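
As a rough illustration of what turning those scattered viewpoints into something a designer can use involves, the sketch below tallies survey responses by region. The regions, dilemmas and counts are invented for illustration; this is not the Moral Machine Experiment’s actual data or methodology.

```python
# Toy illustration of aggregating moral-preference survey responses by region.
# Regions, dilemmas and counts are invented, not the Moral Machine's data.
from collections import Counter, defaultdict

# Each response: (region, dilemma, who the respondent would spare)
responses = [
    ("North America", "child_vs_elder", "child"),
    ("North America", "child_vs_elder", "child"),
    ("East Asia", "child_vs_elder", "elder"),
    ("East Asia", "child_vs_elder", "child"),
    ("Europe", "child_vs_elder", "child"),
]

tallies = defaultdict(Counter)
for region, dilemma, choice in responses:
    tallies[(region, dilemma)][choice] += 1

for (region, dilemma), counts in tallies.items():
    preferred, n = counts.most_common(1)[0]
    total = sum(counts.values())
    print(f"{region}: {dilemma} -> spare the {preferred} ({n}/{total} responses)")
```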

If people’s moral preferences can be taken into account beforehand, the public might be able to recognize “the human face” in a new technology from the beginning instead of having to attempt damage control once that technology is in use.

Public advocates, like those in Toronto who filed suit in April, and the other Cassandras identifying potential problems also deserve a hearing. Every transformative project’s (or product’s or service’s) dissenters as well as its proponents need opportunities to persuade those who have yet to make up their minds about whether the project is good for them before it’s on the runway or has already taken off.

Following their commentary and grappling with their concerns removes some of the dazzle in our [initial] hopes and grounds them more firmly in reality early on.

Unlike the smart city technology that Sidewalk Labs already has for Toronto, it’s only recently become clear that the artificial intelligence systems behind autonomous vehicles are unable to make the kinds of decisions that “take into mind” a community’s moral preferences. In effect, the rush towards implementation of this disruptive technology was stalled by problems with the technology itself. But this kind of pause is the exception not the rule. The rush to market and its associated profits are powerful, making “breathers to become smarter” before product launches like this uncommon.
 
Once again, we need to consider whether such public ventilation periods should be imposed. 
 
Is there any better way to aim for the community balance between rationality and efficiency on the one hand, human variation and need on the other, that was captured by some visionary artists from the Mississippi delta?
 

+ + + 


Next week, I’m thinking about a follow-up post on smart cities that uses the “seven dangers” discussed above as a springboard for the necessary follow-up questions that Torontonians (along with the rest of us) should be asking and debating now as the tech companies aim to bring us smarter and better cities. In that regard, I’d be grateful for your thoughts on how innovation can advance when democracy gets involved.

This post was adapted from my June 30, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning Tagged With: capitalism, community outreach, democracy, dissent, Gees Bend quilts, Google, innovation, Quayside, Sidewalk Labs, smart cities, technology, tension between capitalism and democracy, Toronto, transformative technology

The Human Purpose Behind Smart Cities

March 24, 2019 By David Griesing

It is human priorities that should be driving Smart City initiatives, like the ones in Toronto profiled here last week. 

Last week’s post also focused on a pioneering spirit in Toronto that many American cities and towns seem to have lost. While we entrench in the moral righteousness of our sides in the debate—including, for many, a distrust of collective governance, regulation and taxation—we drift towards an uncertain future instead of claiming one that can be built on values we actually share. 

In its King Street and Quayside initiatives, Toronto is actively experimenting with the future it wants based on its residents’ commitment to sustaining their natural environment in the face of urban life’s often toxic impacts.  They’re conducting these experiments in a relatively civil, collaborative and productive way—an urban role model for places that seem to have forgotten how to work together. Toronto’s bold experiments are also utilizing “smart” technologies in their on-going attempts to “optimize” living and working in new, experimental communities.

During a short trip this week, I got to see the leading edges of New York City’s new Hudson Yards community (spread over 28 acres with an estimated $25 billion price tag) and couldn’t help being struck by how much it catered to those seeking more luxury living, shopping and workspaces than Manhattan already affords. In other words, how much it could have been a bold experiment about new ways that all of its citizens might live and work in America’s first city for the next half-century, but how little it actually was. A hundred years ago, one of the largest immigrant migrations in history made New York City the envy of the world. With half of its current citizens being foreign-born, perhaps the next century, unfurling today, belongs to newer cities like Toronto.

Still, even with its laudable ambition, it will not be easy for Toronto and other future-facing communities to get their Smart City initiatives right, as several of you were also quick to remind me last week. Here is a complaint from a King Street merchant that one of you (thanks Josh!) found and forwarded that seems to cast what is happening in Toronto in a less favorable light than I had focused upon it:

What a wonderful story. But as with [all of] these wonderful plans some seem to be forgotten. As it appears are the actual merchants. Google certainly a big winner here. Below an excerpt written by one of the merchants:
   
‘The City of Toronto has chosen the worst time, in the worst way, in the worst season to implement the pilot project. Their goal is clearly to move people through King St., not to King St. For years King St. was a destination, now it is a thoroughfare.
 
‘The goal of the King St. Pilot project was said to be to balance three important principles: to move people more effectively on transit, to support business and economic prosperity and to improve public space. In its current form, the competing principles seem to be decidedly tilted away from the economic well-being of merchants and biases efficiency over convenience. The casual stickiness of pedestrians walking and stopping at stores, restaurants and other merchants is lost.
 
‘Additionally, the [transit authority] TTC has eliminated a number of stops along King St., forcing passengers to walk further to enter and disembark streetcars, further reducing pedestrian traffic and affecting areas businesses. The TTC appears to believe that if they didn’t have to pick up and drop off people, they could run their system more effectively.
 
‘The dubious benefits of faster street car traffic on King St. notwithstanding, the collateral damage of the increased traffic of the more than 20,000 cars the TTC alleges are displaced from King St to adjoining streets has turned Adelaide, Queen, Wellington and Front Sts. into a gridlock standstill. Anyone who has tried to navigate the area can attest that much of the time, no matter how close you are you can’t get there from here.
 
‘Along with the other merchants of King St. and the Toronto Entertainment District we ask that Mayor Tory and Toronto council to consider a simple, reasonable and cost-effective alternative. Put lights on King St. that restrict vehicle traffic during rush hours, but return King St. to its former vibrant self after 7 p.m., on weekends and statutory holidays. It’s smart, fair, reasonable and helps meet the goals of the King St. pilot project. 

Two things about this complaint seemed noteworthy. The first is how civil and constructive this criticism is in a process that hopes to “iterate” as real-time impacts are assessed. It’s a tribute that Toronto’s experiments not only invite but also receive feedback like this. Alas, the second take-away from Josh’s comment is far more nettlesome. “[However many losers there may be along the way:] Google certainly a big winner here.”

The tech giant’s partnership with Canada’s governments in Toronto raises a constellation of challenging issues, but it’s useful to recall that pioneers who dare to claim new frontiers always do so with the best technology that’s available. While the settling of the American West involved significant collateral damage (to Native Americans and Chinese migrants, to the buffalo and the land itself), it would not have been possible without existing innovations and new ones that these pioneers fashioned along the way. Think of the railroads, the telegraph poles, even something as low-tech as the barbed wire that was used to contain livestock. 

The problem isn’t human and corporate greed or heartless technology—we know about them already—but failing to recognize and reduce their harmful impacts before it is too late. The objective for pioneers on new frontiers should always be maximizing the benefits while minimizing the harms that can be foreseen from the very beginning instead of looking back with anger after the damage is done.

We have that opportunity with Smart City initiatives today.

Because they concentrate many of the choices that will have to be made when we boldly dare to claim the future of America again, I’ve been looking for a roadmap through the moral thicket in the books and articles that are being written about these initiatives today. Here are some of the markers that I’ve discovered.

Human priorities, realized with the help of technology

1. Markers on the Road to Smarter and More Vibrant Communities

The following insights come almost entirely from a short article by Robert Kitchin, a professor at Maynooth University in Ireland. In my review of the on-going conversation about Smart Cities, I found him to be one of its most helpful observers.  

In his article, Kitchin discusses the three principal ways that smart cities are understood, the key promises smart initiatives make to stakeholders, and the perils to be avoided around these promises.

Perhaps not surprisingly, people envision cities and other communities “getting smarter” in different ways. One constituency sees an opportunity to improve both “urban regulation and governance through instrumentation and data-driven systems”–essentially, a management tool. A bolder and more transformative vision sees information and communication technology “re-configur[ing] human capital, creativity, innovation, education, sustainability, and management,” thereby “produc[ing] smarter citizens, workers and public servants” who “can enact polic[ies], produce better products… foster indigenous entrepreneurship and attract inward investment.” The first makes the frontier operate more efficiently while the second improves nearly every corner of it.

The third Smart City vision is “a counter-weight or alternative” to each of them. It wants these technologies “to promote a citizen-centric model of development that fosters social innovation and social justice, civic engagement and hactivism, and transparent and accountable governance.” In this model, technology serves social objectives like greater equality and fairness. Kitchin reminds us that these three visions are not mutually exclusive. It seems to me that the priorities embedded in a community’s vision of a “smarter” future could include elements of each of them, functioning like checks and balances, in tension with one another. 

Smart City initiatives promise to solve pressing urban problems, including poor economic performance; government dysfunction; constrained mobility; environmental degradation; a declining quality of life, including risks to safety and security; and a disengaged, unproductive citizen base. Writes Kitchin:

the smart city promises to solve a fundamental conundrum of cities – how to reduce costs and create economic growth and resilience at the same time as producing sustainability and improving services, participation and quality of life – and to do so in commonsensical, pragmatic, neutral and apolitical ways.

Once again, it’s a delicate balancing act with a range of countervailing interests and constituencies, as you can see in the chart from a related discussion above.
 
The perils of Smart Cities should never overwhelm their promise in my view, but urban pioneers should always have them in mind (from planning through implementation) because some perils only manifest themselves over time. According to Kitchin, the seven dangers in pursuing these initiatives include:
 
–taking “a ‘one size fits all’ approach, treating cities as generic markets and solutions [that are] straightforwardly scalable and movable”;
 
–assuming that initiatives are “objective and non-ideological, grounded in either science or commonsense.” You can aim for these ideals, but human and organizational preferences and biases will always be embedded within them.
 
–believing that the complex social problems in communities can be reduced to “neatly defined technical problems” that smart technology can also solve. The ways that citizens have always framed and resolved their community problems cannot be automated so easily. (This is also the thrust of Ben Green’s Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, which will be published by MIT Press in April. In it he argues for “smart enough alternatives” that are attainable with the help of technology but never reducible to technology solutions alone.)
 
–engaging with corporations that are using smart city technologies “to capture government functions as new market opportunities.” One risk of a company like Google to communities like Toronto’s is that Google might lock Toronto in to its proprietary technologies and vendors over a long period of time or use Toronto’s citizen data to gain business opportunities in other cities.
 
–becoming saddled with “buggy, brittle and hackable” systems that are ever more “complicated, interconnected and dependent on software” while becoming more resistant to manual fixes.
 
–becoming victimized by “pervasive dataveillance that erodes privacy” through practices like “algorithmic social sorting (whether people get a loan, a tenancy, a job, etc), dynamic pricing (whereby different people pay varying prices depending on their perceived customer value) and anticipatory governance using predictive profiling (wherein data precedes how a person is policed and governed).” Earlier this month, my post on popular on-line games like Fortnite highlighted the additional risk that invasive technologies can use the data they are gathering to change peoples’ behavior.
 
–and lastly, reinforcing existing power structures and inequalities instead of eroding or reconfiguring them.
 
While acknowledging the promise of Smart Cities at their best, Kitchin closes his article with this cautionary note:

the realities of implementation are messier and more complex than the marketing hype of corporations or city managers portray and there are a number of social, political, ethical and legal concerns with respect to the kind of society smart city initiatives seek to create.  As such, whilst networked urbanism has benefits, it also poses challenges and risks that are often little explored or legislated for ahead of implementation. Indeed, the pace of development and rollout of smart city technologies is proceeding well ahead of wider reflection, critique and regulation.

Putting the cart before a suitably-designed horse is a problem with all new and seductive technologies that get embraced before their harms are identified or can be addressed—a quandary that was also considered here in a post called “Looking Out for the Human Side of Technology.”

2. The Value of Our Data

A few additional considerations about the Smart City are also worth bearing in mind as debate about these initiatives intensifies.

In a March 8, 2019 post, Kurtis McBride wrote about two different ways “to value” the data that these initiatives will produce, and his distinction is an important one. It’s a discussion that citizens, government officials and tech companies should be having, but unfortunately are not having as much as they need to.

When Smart City data is free to everyone, there is the risk that the multinationals generating it will merely use it to increase their power and profits in the growing market for Smart City technologies and services. From the residents’ perspective, McBride argues that it’s “reasonable for citizens to expect to see benefit” from their data, while noting that these same citizens will also be paying dearly for smart upgrades to their communities. His proposal on valuing citizen data depends on how it will be used by tech companies like Google or local service providers. For example, if citizen data is used:

to map the safest and fastest routes for cyclists across the city and offers that information free to all citizens, [the tech company] is providing citizen benefit and should be able to access the needed smart city data free of charge. 
 
But, if a courier company uses real-time traffic data to optimize their routes, improving their productivity and profit margins – there is no broad citizen benefit. In those cases, I think it’s fair to ask those organizations to pay to access the needed city data, providing a revenue stream cities can then use to improve city services for all. 

Applying McBride’s reasoning, an impartial body in a city like Toronto would need to decide whether Google has to pay for data generated in its Quayside community by consulting a benefit-to-citizens standard. Clearly, if Google wanted to use Quayside data in a Smart City initiative in, say, Colorado or California, it would need to pay Toronto for the use of its citizens’ information.
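
To make McBride’s distinction concrete, here is a minimal sketch of how a benefit-to-citizens test might be encoded in a data-access policy. The categories, flags and fee amounts are my own illustrative assumptions, not anything proposed by McBride or Waterfront Toronto.

```python
# Hypothetical sketch of a benefit-to-citizens test for smart-city data access.
# Categories and fees are illustrative assumptions, not a real policy.
from dataclasses import dataclass

@dataclass
class DataRequest:
    requester: str
    purpose: str             # e.g., "public cycling-safety map"
    benefits_all_citizens: bool
    data_leaves_city: bool   # e.g., reusing Quayside data in another market

def access_fee(request: DataRequest, base_fee: float = 10_000.0) -> float:
    """Return the fee a requester owes the city for the data it wants."""
    if request.benefits_all_citizens and not request.data_leaves_city:
        return 0.0               # broad citizen benefit: free access
    if request.data_leaves_city:
        return base_fee * 2      # exporting citizen data elsewhere costs more
    return base_fee              # private commercial benefit: pay the city

print(access_fee(DataRequest("bike-map nonprofit", "public cycling-safety map", True, False)))    # 0.0
print(access_fee(DataRequest("courier company", "commercial route optimization", False, False)))  # 10000.0
```

The hard part, of course, is not the arithmetic but deciding who gets to set the benefits-all-citizens flag, which is why an impartial body would be needed in the first place.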
 
Of course, addressing the imbalance between those (like us) who provide the data and the tech companies that use it to increase their profits and influence is not just a problem for Smart City initiatives, and changing the “value proposition” around our data is surely part of the solution. In her new book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Harvard Business School’s Shoshana Zuboff says that the adage “you’re the product if these companies aren’t paying you for your data” does not state the case powerfully enough. She argues that the big tech platforms are like elephant poachers and our personal data like those elephants’ ivory tusks. “You are not the product,” she writes. “You are the abandoned carcass.”
 
Smart City initiatives also provide a way to think about “the value of our data” in the context of our living and working and not merely as the gateway to more convenient shopping, more addictive gaming experiences or “free” search engines like Google’s.

This post is adapted from my March 24, 2019 newsletter. Subscribe today and receive an email copy of future posts in your inbox each week.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Building Your Values into Your Work, Entrepreneurship, Work & Life Rewards Tagged With: entrepreneurship, ethics, frontier, future of cities, future of work, Google, Hudson Yards, innovation, King Street, pioneer, priorities, Quayside, Robert Kitchin, smart cities, Smart City, smart city initiatives, technology, Toronto, urban planning, value of personal data, values

These Tech Platforms Threaten Our Freedom

December 9, 2018 By David Griesing

We’re being led by the nose about what to think, buy, do next, or remember about what we’ve already seen or done.  Oh, and how we’re supposed to be happy, what we like and don’t like, what’s wrong with our generation, why we work. We’re being led to conclusions about a thousand different things and don’t even know it.

The image that captures the erosion of our free thinking by influence peddlers is the frog in the saucepan. The heat is on, the water’s getting warmer, and by the time it’s boiling it’s too late for her to climb back out. Boiled frog, preceded by pleasantly warm and oblivious frog, captures the critical path pretty well. But instead of slow cooking, it’s shorter and shorter attention spans, the slow retreat of perspective and critical thought, and the final loss of freedom.

We’ve been letting the control booths behind the technology reduce the free exercise of our lives and work and we’re barely aware of it. The problem, of course, is that the grounding for good work and a good life is having the autonomy to decide what is good for us.

This kind of tech-enabled domination is hardly a new concern, but we’re wrong in thinking that it remains in the realm of science fiction.

An authority’s struggle to control our feelings, thoughts and decisions was the theme of George Orwell’s 1984, which was written in 1948, 36 years before the fateful year that he envisioned. “Power,” said Orwell, “is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.” Power persuades you to buy something when you don’t want or need it. It convinces you about this candidate’s, that party’s or some country’s evil motivations. It tricks you into accepting someone else’s motivations as your own. In 1984, free wills were weakened and constrained until they were no longer free. “If you want a picture of the future,” Orwell wrote, “imagine a boot stamping on a human face—for ever.”

Maybe this reflection of the present seems too extreme to you.

After all, Orwell’s jackbooted fascists and communists were defeated by our Enlightenment values. Didn’t the first President Bush, whom we buried this week, preside over some of it? The authoritarians were down and seemed out in the last decade of the last century—Freedom Finally Won!—which just happened to be the very same span of years when new technologies and communication platforms began to enable the next generation of dominators.

(There is no true victory over one man’s will to deprive another of his freedom, only a truce until the next assault begins.)

Twenty years later, in his book Who Owns the Future? (2013), Jaron Lanier argued that a new battle for freedom must be fought against powerful corporations fueled by advertisers and other “influencers” who are obsessed with directing our thoughts today.

In exchange for “free” information from Google, “free” networking from Facebook, and “free” deliveries from Amazon, we open our minds to what Lanier calls “siren servers,” the cloud computing networks that drive much of the internet’s traffic. Machine-driven algorithms collect data about who we are to convince us to buy products, judge candidates for public office, or determine how the majority in a country like Myanmar should deal with a minority like the Rohingya.

Companies, governments, groups with good and bad motivations use our data to influence our future buying and other decisions on technology platforms that didn’t even exist when the first George Bush was president but now, only a few years later, seem indispensable to nearly all of our commerce and communication. Says Lanier:

When you are wearing sensors on your body all the time, such as the GPS and camera on your smartphone and constantly piping data to a megacomputer owned by a corporation that is paid by “advertisers” to subtly manipulate you…you are gradually becoming less free.

And all the while we were blissfully unaware that this was happening because the bath was so convenient and the water inside it seemed so warm. Franklin Foer, who addresses tech issues in The Atlantic and wrote 2017’s World Without Mind: The Existential Threat of Big Tech, talks about this calculated seduction in an interview he gave this week:

Facebook and Google [and Amazon] are constantly organizing things in ways in which we’re not really cognizant, and we’re not even taught to be cognizant, and most people aren’t… Our data is this cartography of the inside of our psyche. They know our weaknesses, and they know the things that give us pleasure and the things that cause us anxiety and anger. They use that information in order to keep us addicted. That makes [these] companies the enemies of independent thought.

The poor frog never understood that accepting all these “free” invitations to the saucepan meant that her freedom to climb back out was gradually being taken away from her.

Of course, we know that nothing is truly free of charge, with no strings attached. But appreciating the danger in these data driven exchanges—and being alert to the persuasive tools that are being arrayed against us—are not the only wake-up calls that seem necessary today. We also can (and should) confront two other tendencies that undermine our autonomy while we’re bombarded with too much information from too many different directions. They are our confirmation bias and what’s been called our illusion of explanatory depth.

Confirmation bias leads us to stop gathering information when the evidence we’ve gathered so far confirms the views (or biases) that we would like to be true. In other words, we ignore or reject new information, maintaining an echo chamber of sorts around what we’d prefer to believe. This kind of mindset is the opposite of self-confidence, because all we’re truly interested in doing outside ourselves is searching for evidence to shore up our egos.

Of course, the thought controllers know about our propensity for confirmation bias and seek to exploit it, particularly when we’re overwhelmed by too many opposing facts, have too little time to process the information, and long for simple black and white truths. Manipulators and other influencers have also learned from social science that our reduced attention spans are easily tricked by the illusion of explanatory depth, or our belief that we understand things far better than we actually do.

The illusion that we know more than we actually do extends to anything that we can misunderstand. It comes about because we consume knowledge widely but not deeply, and since that is rarely enough for understanding, our egos claim that we know more than we do. We all know that ignorant people are the most over-confident in their knowledge, but how easily we delude ourselves about the majesty of our own ignorance. For example, I regularly ask people questions about all sorts of things that they might know about. It’s almost the end of the year as I write this and I can count on one hand the number of them who have responded to my questions by saying “I don’t know” over the past twelve months. Most have no idea how little understanding they bring to whatever they’re talking about. It’s simply more comforting to pretend that we have all of this confusing information fully processed and under control.

Luckily, for confirmation bias or the illusion of explanatory depth, the cure is as simple as finding a skeptic and putting him on the other side of the conversation so he will hear us out and respond to or challenge whatever it is that we’re saying. When our egos are strong enough for that kind of exchange, we have an opportunity to explain our understanding of the subject at hand. If, as often happens, the effort of explaining reveals how little we actually know, we are almost forced to become more modest about our knowledge and less confirming of the biases that have taken hold of us.  A true conversation like this can migrate from a polarizing battle of certainties into an opportunity to discover what we might learn from one another.

The more that we admit to ourselves and to others what we don’t know, the more likely we are to want to fill in the blanks. Instead of false certainties and bravado, curiosity takes over—and it feels liberating precisely because becoming well-rounded in our understanding is a well-spring of autonomy.

When we open ourselves like this instead of remaining closed, we’re less receptive to, and far better able to resist, the “siren servers” that would manipulate our thoughts and emotions by playing to our biases and illusions. When we engage in conversation, we also realize that devices like our cell phones and platforms like our social networks are, in Foer’s words, actually “enemies of contemplation” which are “preventing us from thinking.”

Lanier describes the shift from this shallow tech-driven stimulus/response to a deeper assertion of personal freedom in a profile that was written about him in the New Yorker a few years back.  Before he started speaking at a South-by-Southwest Interactive conference, Lanier asked his audience not to blog, text or tweet while he spoke. He later wrote that his message to the crowd had been:

If you listen first, and write later, then whatever you write will have had time to filter through your brain, and you’ll be in what you say. This is what makes you exist. If you are only a reflector of information, are you really there?

Lanier makes two essential points about autonomy in this remark. Instead of processing on the fly, where the dangers of bias and illusions of understanding are rampant, allow what is happening “to filter through your brain,” because when it does, there is a far better chance that whoever you really are, whatever you truly understand, will be “in” what you ultimately have to say.

His other point is about what you risk becoming if you fail to claim a space for your freedom to assert itself in your lives and work. When you’re reduced to “a reflector of information,” are you there at all anymore or merely reflecting the reality that somebody else wants you to have?

We all have a better chance of being contented and sustained in our lives and work when we’re expressing our freedom, but it’s gotten a lot more difficult to exercise it given the dominant platforms that we’re relying upon for our information and communications today.

This post was adapted from my December 9, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning, Work & Life Rewards Tagged With: Amazon, autonomy, communication, confirmation bias, facebook, Franklin Foer, free thinking, freedom, Google, illusion of explanatory depth, information, information overload, Jaron Lanier, tech, tech platforms, technology

Looking Out For the Human Side of Technology

October 28, 2018 By David Griesing

Maintaining human priorities in the face of new technologies always feels like “a rearguard action.” You struggle to prevent something bad from happening even when it seems like it may be too late.

The promise of the next tool or system intoxicates us. Smart phones, social networks, gene splicing.  It’s the super-computer at our fingertips, the comfort of a boundless circle of friends, the ability to process massive amounts of data quickly or to short-cut labor intensive tasks, the opportunity to correct genetic mutations and cure disease. We’ve already accepted these promises before we pause to consider their costs—so it always feels like we’re catching up and may not have done so in time.

When you’re dazzled by possibility and the sun is in your eyes, who’s thinking “maybe I should build a fence?”

The future that’s been promised by tech giants like Facebook is not “the win-win” that we thought it was. Their primary objectives are to serve their financial interests—those of their founder-owners and other shareholders—by offering efficiency benefits like convenience and low cost to the rest of us. But as we’ve belatedly learned, they’ve taken no responsibility for the harms they’ve also caused along the way, including exploitation of our personal information, the proliferation of fake news and jeopardy to democratic processes, as I argued here last week.

Technologies that are not associated with particular companies also run with their own promise until someone gets around to checking them–a technology like artificial intelligence or AI for example. From an ethical perspective, we are usually playing catch up ball with them too. If there’s a buck to be made or a world to transform, the discipline to ask “but should we?” always seems like getting in the way of progress.

Because our lives and work are increasingly impacted, the stories this week throw additional light on the technology juggernaut that threatens to overwhelm us and our “rearguard” attempts to tame it with our human concerns.

To gain a fuller appreciation of the problem regarding Facebook, a two-part Frontline documentary will be broadcast this week that is devoted to what one reviewer calls “the amorality” of the company’s relentless focus on adding users and compounding ad revenues while claiming to create the on-line “community” that all of us should want in the future. (The show airs tomorrow, October 29 at 9 p.m. and on Tuesday, October 30 at 10 p.m. EST on PBS.)

Frontline’s reporting covers Russian interference in past and current election cycles, Facebook’s role in whipping Myanmar’s Buddhists into a frenzy over its Rohingya minority, and how strongmen like Rodrigo Duterte in the Philippines have been manipulating the site to achieve their political objectives. Facebook CEO Mark Zuckerberg’s limitations as a leader are explored from a number of directions, but none as compelling as his off-screen impact on the five Facebook executives who were “given” to James Jacoby (the documentary’s director, writer and producer) to answer his questions. For the reviewer:

That they come off like deer in Mr. Jacoby’s headlights is revealing in itself. Their answers are mealy-mouthed at best, and the defensive posture they assume, and their evident fear, indicates a company unable to cope with, or confront, the corruption that has accompanied its absolute power in the social media marketplace.

You can judge for yourself. You can also ponder whether this is like holding a gun manufacturer liable when one of its guns is used to kill somebody.  I’ll be watching “The Facebook Dilemma” for what it has to say about a technology whose benefits have obscured its harms in the public mind for longer than it probably should have. But then I remember that Facebook barely existed ten years ago. The most important lesson from these Frontline episodes may be how quickly we need to get the stars out of our eyes after meeting these powerful new technologies if we are to have any hope of avoiding their most significant fallout.

Proceed With Caution

I was also struck this week by Apple CEO Tim Cook’s explosive testimony at a privacy conference organized by the European Union.

Not only was Cook bolstering his own company’s reputation for protecting Apple users’ personal information, he was also taking aim at competitors like Google and Facebook for implementing a far more harmful business plan, namely, selling user information to advertisers, reaping billions in ad dollar revenues in exchange, and claiming the bargain is providing their search engine or social network to users for “free.” This is some of what Cook had to say to European regulators this week:

Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.

These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.

Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.

“Weaponized” technology delivered with “military efficiency.” “A data-industrial complex.” One of the benefits of competition is that rivals call you out while directing unwanted attention away from themselves. One of my problems with tech giant Amazon, for example, is that it lacks a head-to-head rival to police its business practices, so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.

If data privacy were as paramount to Apple as it sounded this week, it would be hard to reconcile that stance with Apple’s receiving more than $5 billion a year from Google to make it the default search engine on all Apple devices. However complicit it remains in today’s tech bargains, Apple pushed its rivals pretty hard this week to modify their business models and become less cynical about their use of our personal data as the focus of regulatory oversight moves from Europe to the U.S.

Keeping Humans in the Tech Equation

Technologies that aren’t proprietary to a particular company but are instead used across industries require getting over additional hurdles to ensure that they are meeting human needs and avoiding technology-specific harms for users and the rest of us. This week, I was reading up on a positive development regarding artificial intelligence (AI) that only came about because serious concerns were raised about the transparency of AI’s inner workings.

AI’s ability to solve problems (from processing big data sets to automating steps in a manufacturing process or tailoring a social program for a particular market) is only as good as the algorithms it uses. You may already know that an early criticism of artificial intelligence, given concerns about personal identity markers such as race, gender and sexual preference, was that the author of an algorithm could unwittingly build her own biases into it, leading to discriminatory and other anti-social results. As a result, various countermeasures are being undertaken to keep these kinds of biases out of AI code. With that in mind, I read a story this week about another systemic issue with AI processing: its “explainability.”

It’s the so-called “black box” problem. If users of systems that depend on AI don’t know how they work, they won’t trust them. Unfortunately, one of AI’s prime advantages is that it solves problems in ways that are not easily understood by its users, which presents a quandary: AI-based systems might need to be “dumbed down” so that the humans using them can understand, and then trust, them. Of course, no one is happy with that result.

A recent article in Forbes describes the trust problem that users of machine-learning systems experience (“interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control”) along with some of the experts who have been feeling that anxiety (cancer specialists who agreed with a “Watson for Oncology” system when it confirmed their judgments but thought it was wrong when it failed to do so because they couldn’t understand how it worked).
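To make “explainability” a bit more concrete, here is a minimal sketch (my own illustration, not Watson’s or DARPA’s code) of one common technique: pairing a model’s predictions with a report of which inputs it leans on most, so that a human expert has something to interrogate rather than a sealed black box. The data and feature names below are invented for the example.

```python
# A toy illustration of one explainability technique: permutation importance.
# We train an opaque model, then measure how much its accuracy drops when each
# input feature is scrambled. Big drops flag the inputs the model relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Invented stand-in data for, say, the patient records an oncology system scores.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["age", "tumor_size", "marker_a", "marker_b", "marker_c"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Report the features in order of how much the model depends on them.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

A readout like this doesn’t open the box completely, but it gives the skeptical specialists in the Forbes example a reason to trust, or to push back on, what the system recommends.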

In a positive development, a U.S. Department of Defense agency called DARPA (or Defense Advanced Research Projects Agency) is grappling with the explainability problem. Says David Gunning, a DARPA program manager:

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.

In other words, these systems will get better at explaining themselves to their users, thereby overcoming at least some of the trust issue.

DARPA is investing $2 billion in what it calls “third-wave AI systems…where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” according to Gunning. At least with the future of warfare at stake, a problem like “trust” in the human interface appears to have stimulated a solution. At some point, all machine-learning systems will likely be explaining themselves to the humans who are trying to keep up with them.

Moving beyond AI, I’d argue that there is often as much “at stake” as successfully waging war when a specific technology is turned into a consumer product that we use in our workplaces and homes.

While there is heightened awareness today about the problems that Facebook poses, few were raising these concerns even a year ago despite their toxic effects. With other consumer-oriented technologies, there is a range of potential harms about which little public dissent is being voiced despite serious warnings from within and around the tech industry. For example:

– how much is our time spent on social networks—in particular, how these networks reinforce or discourage certain of our behaviors—literally changing who we are?  
 
– since our kids may be spending more time with their smart phones than with their peers or family members, how is their personal development impacted, and what can we do to put this rabbit even partially back in the hat now that smart phone use seems to be a part of every child’s rite of passage into adulthood? 
 
– will privacy and surveillance concerns become more prevalent when we’re even more surrounded than we are now by “the internet of things” and as our cars continue to morph into monitoring devices—or will there be more of an outcry for reasonable safeguards beforehand? 
 
– what are employers learning about us from our use of technology (theirs as well as ours) in the workplace and how are they using this information?

The technologies that we use demand that we understand their harms as well as their benefits. I’d argue that we need to become more proactive about voicing our concerns and using the tools at our disposal (including the political process) to insist that company profit and consumer convenience are not the only measures of a technology’s impact.

Since the invention of the printing press half a millennium ago, it has always been hard but necessary to catch up with technology and to tame its excesses as quickly as we can.

This post was adapted from my October 28, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning Tagged With: Amazon, Apple, ethics, explainability, facebook, Google, practical ethics, privacy, social network harms, tech, technology, technology safeguards, the data industrial complex, workplace ethics
