David Griesing | Work Life Reward Author | Philadelphia


Who’s Winning Our Tugs-of-War Over On-Line Privacy & Autonomy?

February 1, 2021 By David Griesing

We know that our on-line privacy and autonomy (or freedom from outside control) are threatened in two particularly alarming ways today: by the undisclosed privacy invasions that accompany our on-line activities, and by the loss of opportunities to speak our minds without censorship.

These alarm bells ring because of the dominance of on-line social media platforms like Facebook, YouTube and Twitter and of text-based exchanges like WhatsApp and the other instant messaging services—most of which barely existed a decade ago. With unprecedented speed, they’ve become the town squares of modern life where we meet, talk, shop, learn, voice opinions and engage politically. But as ubiquitous and essential as they’ve become, their costs to vital zones of personal privacy and autonomy have caused a significant backlash, and this past week we got an important preview of where this backlash is likely to take us.

Privacy advocates worry about the harmful consequences when personal data is extracted from users of these platforms and services. They say our own data is being used “against us” to influence what we buy (the targeted ads that we see and don’t see), manipulate our politics (increasing our emotional engagement by showing us increasingly polarizing content), and exert control over our social behavior (by enabling data-gathering agencies like the police, FBI or NSA). Privacy advocates are also offended that third parties are monetizing personal data “that belongs to us” in ways that we never agreed to, amounting to a kind of theft of our personal property by unauthorized strangers.

For their part, censorship opponents decry content monitors who can bar particular statements, or even participation on dominant platforms altogether, for arbitrary and biased reasons. When deprived of the full use of our most powerful channels of mass communication, they argue, their right to peaceably assemble is being eviscerated by what they experience as “a culture war” against them.

Both groups say they have a privacy right to be left alone and act autonomously on-line: to make choices and decisions for themselves without undue influence from outsiders; to be free from ceaseless monitoring, profiling and surveillance; to be able to speak their minds without the threat of “silencing;” and, “to gather” for any lawful purpose without harassment. 

So how are these tugs-of-war over two of our most basic rights going?

This past week provided some important indications.

This week’s contest over on-line privacy pitted tech giant Apple against rivals whose business models depend upon selling their users’ data to advertisers and other third parties—most prominently, Facebook and Google.

Apple announced this week that it would immediately start offering additional privacy protections to users of its market-leading smartphones. One relates to its dominant App Store and to developers like Facebook, Google and the thousands of other companies that sell their apps (or platform interfaces) to iPhone users.

Going forward—on what Apple chief Tim Cook calls “a privacy nutrition label”—every app that the company offers for installation on its phones will need to disclose its data collection and privacy practices before purchase in ways that Apple will ensure “every user can understand and act on.” Instead of reading (and then ignoring) multiple pages of legalese, every new Twitter or YouTube user, for example, will for the first time be able through their iPhones to either accept or refuse an app’s data collection practices after reading plain language that describes the personal data that will be collected and what will be done with it. In a similar vein, iPhone users will gain a second advantage over apps that have already been installed on their phones. With new App Tracking Transparency, iPhone users will be able to control how each app gathers and shares their personal data. For every application on your iPhone, you can now choose whether a Facebook or a Google has access to your personal data or not.
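
For the developers on the other side of that prompt, the mechanics look roughly like this. Here is a minimal Swift sketch of Apple’s AppTrackingTransparency framework (the function name and print statements are my own inventions, and real apps dress this up considerably), showing what an app must now ask before it can follow you across other companies’ apps and websites:

```swift
import AppTrackingTransparency
import AdSupport

// Apps must first declare NSUserTrackingUsageDescription in their
// Info.plist (the plain-language text shown in Apple's prompt), then
// ask permission at runtime before touching the ad identifier.
func askBeforeTracking() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only now may the app read the device's advertising ID.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("User opted in; ad identifier is \(idfa)")
        case .denied, .restricted, .notDetermined:
            // The identifier comes back as all zeroes, and cross-app
            // tracking is off for this app.
            print("User opted out, or no choice was made")
        @unknown default:
            print("A status this sketch doesn't know about")
        }
    }
}
```

When the user declines, the advertising identifier the app receives is zeroed out, which is precisely what begins to constrict the data pipelines described below.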

While teeing up these new privacy initiatives at an industry conference this week, Cook was sharply critical of companies that take our personal data for profit, citing several of the real-world consequences when they do so. I quote at length from his remarks last Thursday because I enjoyed hearing someone of Cook’s stature speak to these issues so pointedly, and thought you might too:

A little more than two years ago…I spoke in Brussels about the emergence of a data-industrial complex… At that gathering we asked ourselves: “what kind of world do we want to live in?” Two years later, we should now take a hard look at how we’ve answered that question. 

The fact is that an interconnected ecosystem of companies and data brokers, of purveyors of fake news and peddlers of division, of trackers and hucksters just looking to make a quick buck, is more present in our lives than it has ever been. 

And it has never been so clear how it degrades our fundamental right to privacy first, and our social fabric by consequence.

As I’ve said before, ‘if we accept as normal and unavoidable that everything in our lives can be aggregated and sold, then we lose so much more than data. We lose the freedom to be human.’….

Together, we must send a universal, humanistic response to those who claim a right to users’ private information about what should not and will not be tolerated….

At Apple…, [w]e have worked to not only deepen our own core privacy principles, but to create ripples of positive change across the industry as a whole. 

We’ve spoken out, time and again, for strong encryption without backdoors, recognizing that security is the foundation of privacy. 

We’ve set new industry standards for data minimization, user control and on-device processing for everything from location data to your contacts and photos. 

At the same time that we’ve led the way in features that keep you healthy and well, we’ve made sure that technologies like a blood-oxygen sensor and an ECG come with peace of mind that your health data stays yours.

And, last but not least, we are deploying powerful, new requirements to advance user privacy throughout the App Store ecosystem…. 

Technology does not need vast troves of personal data, stitched together across dozens of websites and apps, in order to succeed. Advertising existed and thrived for decades without it. And we’re here today because the path of least resistance is rarely the path of wisdom. 

If a business is built on misleading users, on data exploitation, on choices that are no choices at all, then it does not deserve our praise. It deserves reform….

At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement — the longer the better — and all with the goal of collecting as much data as possible.

Too many are still asking the question, “how much can we get away with?,” when they need to be asking, “what are the consequences?” What are the consequences of prioritizing conspiracy theories and violent incitement simply because of their high rates of engagement? What are the consequences of not just tolerating, but rewarding content that undermines public trust in life-saving vaccinations? What are the consequences of seeing thousands of users join extremist groups, and then perpetuating an algorithm that recommends even more?….

[N]o one needs to trade away the rights of their users to deliver a great product. 

With its new “privacy nutrition labels” and App Tracking Transparency, many (if not most) of Apple’s iPhone users are likely to reject other companies’ data collection and sharing practices once they understand the magnitude of what’s being taken from them. Moreover, these votes for greater data privacy could be a major financial blow to the companies extracting our data: Apple sold more smartphones globally than any other vendor in the last quarter of 2020; almost half of Americans use iPhones (45.3% of the market, according to one analyst); more people access social media and messaging platforms from their phones than from other devices; and the personal data pipelines these data-extracting companies rely upon could start constricting immediately.
 
In this tug-of-war between competing business models, the outcry this week was particularly fierce from Facebook, which one analyst predicts could start to take “a 7% revenue hit” (that’s real cash at $6 billion) as early as the second quarter of this year. (Facebook’s revenue take in 2020 was $86 billion, much of it from ad sales fueled by user data.) Mark Zuckerberg charged that Apple’s move tracks its competitive interests, saying its rival “has every incentive to use their dominant platform position to interfere with how our apps and other apps work,” among other things, a dig at on-going antitrust investigations involving Apple’s App Store. In a rare expression of solidarity with the little guy, Zuckerberg also argued that small businesses which access customers through Facebook would suffer disproportionately from Apple’s move because of their reliance on targeted advertising. 
 
There’s no question that Apple was flaunting its righteousness on data privacy this week and that Facebook’s “ouches” were the most audible reactions. But there is also no question that a business model fueled by the extraction of personal data has finally been challenged by another dominant market player. In coming weeks and months we’ll find out how interested Apple users are in protecting their privacy on their iPhones and whether their eagerness prompts other tech companies to offer similar safeguards. We’ll get signals from how advertising dollars are being spent as the “underlying profile data” becomes more limited and less reliable. We may also begin to see the gradual evolution of an on-line public space that’s somewhat more respectful of our personal privacy and autonomy.
 
What’s clearer today is that tech users concerned about the privacy of their data and freedom from data-driven manipulation on-line can now limit at least some of the flow of that information to unwelcome strangers, with tools that they never had at their disposal before.

All of us should be worried about censorship of our views by content moderators at private companies (whether in journalism or social media) and by governmental authorities that wish to stifle dissenting opinions. But many of the strongest voices for regulating the tech giants’ penchant for “moderating content” today come from those who are convinced that press, media and social networking channels both limit access to and censor content from anyone who differs with “their liberal or progressive points of view.” Their opposition speaks not only to the extraordinary dominance of these tech giants in the public square today but also to the air of grievance that colors the political debates that we’ve been having there.
 
Particularly after President Trump’s removal from Facebook and Twitter earlier this month and the temporary shutdown of social media upstart Parler after Amazon cut off its cloud computing services, there has been a concerted drive to find new ways for individuals and groups to communicate with one another on-line in ways that cannot be censored or “de-platformed” altogether. Like the tug-of-war over personal data privacy, a new polarity over on-line censorship and the ways to get around it could fundamentally alter the character of our on-line public squares.
 
Instead of birthing a gaggle of new “Right-leaning” social media companies whose managers might still be tempted to interfere with irritating content, blockchain software technology is now being used to create what amount to “moderation-proof” communication networks.
 
To help with basic blockchain mechanics, this is how I described them here in 2018:

A blockchain is a web-based chain of connections, most often with no central monitor, regulator or editor. Its software applications enable every node in its web of connections to record data which can then be seen and reviewed by every other connection. It maintains its accuracy through this transparency. Everyone with access can see what every other connection has recorded in what amounts to a digital ledger…

Blockchain-based software can be launched by individuals, organizations or even governments. Software access can be limited to a closed network of participants or open to everyone. A blockchain is usually established to overcome the need for and cost of a “middleman” (like a bank) or some other impediment (like currency regulations, tariffs or burdensome bureaucracy). It promotes “the freer flow” of legal as well as illegal goods, services and information. Blockchain is already driving both modernization and globalization. Over the next several years, it will also have profound impacts on us as individuals. 

If a visual description would help, this short video from The MIT Technology Review will also show you the basics of this software innovation.
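
And if code makes things concrete for you, here is a toy sketch in Swift (entirely my own, and nothing like a production network) of the core idea in that description: blocks chained together by hashes in a shared ledger, so that every participant can detect tampering without any central monitor:

```swift
import Foundation
import CryptoKit

// One entry in the shared ledger: its position, its payload, and a
// link back to the previous entry in the form of that entry's hash.
struct Block {
    let index: Int
    let data: String
    let previousHash: String
    var hash: String {
        let digest = SHA256.hash(data: Data("\(index)\(data)\(previousHash)".utf8))
        return digest.map { String(format: "%02x", $0) }.joined()
    }
}

// Build a tiny chain. Every node holding a copy can recompute these
// hashes for itself.
var chain = [Block(index: 0, data: "genesis", previousHash: "0")]
chain.append(Block(index: 1, data: "Alice pays Bob 5", previousHash: chain[0].hash))
chain.append(Block(index: 2, data: "Bob posts hello", previousHash: chain[1].hash))

// Verification: each block must point at the actual hash of the block
// before it. Alter any block's data and every later link breaks.
let valid = (1..<chain.count).allSatisfy { i in
    chain[i].previousHash == chain[i - 1].hash
}
print(valid) // true, until someone tampers with the record
```

Change the data in any one block and every hash after it stops matching, which is why the ledger stays honest through transparency rather than through a central editor.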
 
I’ve written several times before about the promise of blockchain-driven systems. For example, Your Work is About to Change Forever (about a bitcoin-type financial future without banks or traditional currencies); Innovation Driving Values (how secure and transparent recording of property rights like land deeds can drive economic progress in the developing world); Blockchain Goes to Work (how this software can enable gig economy workers to monetize their work time in a global marketplace); Data Privacy & Accuracy During the Coronavirus (how a widely accessible global ledger that records accurate virus-related information can reduce misinformation); and, with some interesting echoes today, a 2017 post called Wish Fulfillment (about why a small social media platform called Steem-It was built on blockchain software).
 
Last Tuesday, the New York Times ran an article titled: They Found a Way to Limit Big Tech’s Power: Using the Design of Bitcoin. That “Design” in the title was blockchain software. The piece highlighted:

a growing movement by technologists, investors and everyday users to replace some of the internet’s basic building blocks in ways that would be harder for tech giants like Facebook or Google [or, indeed, anyone outside of these self-contained platforms] to control.

Among other things, the article described how those “old” internet building blocks would be replaced by blockchain-driven software, enabling social media platforms that would be the successors to the one that Steem-It built several years ago. However, while Steem-It wanted to provide a safe and reliable way to pay contributors for their social media content, in this instance the overriding drive is “to make it much harder for any government or company to ban accounts or delete content.”

It’s both an intoxicating and a chilling possibility.

While the Times reporter hinted at the risks with ominous quotes and references to the creation of “a decentralized web of hate,” it’s worth noting that nothing like it has materialized, yet. Also implied but never discussed was the urgency that many feel to avoid censorship of their minority viewpoints by people like Twitter’s Jack Dorsey or even the New York Times editors who effectively decide what to report on and what to ignore. So what’s the bottom line in this tech-enabled tug-of-war between political forces?

The public square that we occupy daily—for communication and commerce, family connection and dissent—a public square that the dominant social media platforms largely provide, cannot (and must not) be governed by @Jack, the sensibilities of mainstream media, or any group of esteemed private citizens like Facebook’s recently appointed Oversight Board. One of the most essential roles of government is to maintain safety and order in, and to set forth the rules of the road for, our public square. Because blockchain-enabled social networks will likely be claiming more of that public space in the near future—even as they strive to evade its common obligations through encryption and otherwise—government can and should enforce the rules for this brave new world.

Until now, our government has failed to confront either on-line censorship or its foreseeable consequences. Because our on-line public square has become (in a few short years) as essential to our way of life as our electricity or water, its social media and similar platforms should be licensed and regulated like those basic services, that is, like utilities—not only for our physical safety but also for the sake of our democratic institutions, which survived their most recent tests but may not survive their next ones if we fail to govern ourselves and our awesome technologies more responsibly.

In this second tug-of-war, we don’t have a moment to lose.

This post was adapted from my January 31, 2021 newsletter. Newsletters are delivered to subscribers’ in-boxes every Sunday morning. You can sign up by leaving your email address in the column to the right.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself Tagged With: app tracking transparency, Apple, autonomy, blockchain, censorship, commons, content monitoring, facebook, freedom of on-line assembly, human tech, privacy, privacy controls, privacy nutrition label, public square, social media platforms

We’re All Acting Like Dogs Today

July 29, 2019 By David Griesing

Saul Steinberg in the New Yorker, January 12, 1976

I recently read that dogs—through the imperatives of evolution—have developed expressions that invite positive interactions from their humans like “What a good boy” or a scratch behind the ears whenever, say, Wally looks me in the eye with a kind of urgency. There’s urgency all right, because he’s after something more than just my words or my touch. 
 
The reward that he really wants comes after this hoped-for interaction. It’s a little squirt of oxytocin, a hormone and neuropeptide that strengthens bonding by making me, and then both of us together, feel good about our connection.
 
As you might expect, Wally looks at me a lot when I’m working and I almost always respond. How could anyone refuse that face? Besides, it gives whatever workspace I’m in a positive charge that can linger all day.

Social media has also learned that “likes”—or almost any kind of interaction by other people (or machines) with our pictures and posts—produce similar oxytocin squirts in those who are doing the posting. We’re not just putting something out there; we’re after something that’s both measurable and satisfying in return.

Of course, the caution light flashes when social media users begin to crave bursts of chemical approval like Wally does, or “to feel rejected” when the “likes” aren’t coming fast enough. It’s a feedback loop from craving to approval that keeps us coming back for more. Will they like us at least as much, and maybe more, than they did the last time we were here? That draw always makes us stay on these social media platforms for longer than we want to.

Social scientists have been telling us for years that craving approval for our contributions (along with not wanting to miss out) causes social media as well as cell-phone addiction in young people under 25. They are particularly susceptible to its lures because the pre-frontal cortex in their brains, the so-called seat of good judgment, is still developing. Of course, the ability to determine what’s good and bad for you is also underdeveloped in many older people too—I just never thought that included me.

So how I felt when I stopped my daily posting on Instagram three weeks ago came as a definite comeuppance. Until then I thought I had too much “good sense” to allow myself to be manipulated in these ways.

For the past 6 years, I’ve posted a photo on Instagram (or IG) almost every day. I told myself that regular picture-taking would make me look at the world more closely while, at the same time, making me better at capturing what I saw. It would give me a cache of visual memories about where I’d been and what I’d been doing, and posting on IG gave me a chance to share them with others.

In recent years, I’d regularly get around 50 “likes” for each photo along with upbeat comments from strangers in Yemen, Moscow and Beirut as well as from people I actually know. The volume and reach of approval weren’t great by Rihanna standards, but as much as half of it would always come in the first few minutes after posting every day. I’d generally upload my images before getting out of bed in the morning, so for years now I’ve been starting my days with a series of “feel good” oxytocin bursts.

Of course, you know what happened next. My going “cold turkey” from Instagram produced symptoms that felt exactly like withdrawal. It recalled the aftermath of cutting back on carbs a few years back or, after I was in the Coast Guard, nicotine. Noticeable. Physical. In the days that followed, I’d find myself repeatedly gazing over at my phone screen for notifications of likes or comments that were no longer coming. Or even worse, I’d find identical-looking notifications nudging me to check other people’s pictures and stories, lures that felt like reminders of the boosts I was no longer getting. I felt “cut off” from something that had seemed both alive and necessary.

It’s one thing to read about social media or cell-phone addiction and accept its downsides as a mental exercise, quite another to feel withdrawal symptoms after quitting one of them.

Unlike the Food & Drug Administration, I didn’t need anything more than my own clinical trial to tell me about the forces that were at play here. At the same time that IG owner Mark Zuckerberg was engineering what feels like my addiction to his platform, he was also targeting me with ads for things that (I’m sorry to say) I found myself wanting much more frequently. That’s because Instagram was learning all along what I was interested in whenever I hovered over one of its ads or followed an enticing link.

In other words, I’d been addicted to soften me up for buying stuff that IG had learned I’m likely to want, in a retail exchange that effectively made both IG and Mark Zuckerberg the middleman in every sale. IG’s oxytocin machine had turned me into a captive audience who’d been intentionally rendered susceptible to buying whatever IG was hawking.

That seems both manipulative and underhanded to me.

It’s one thing to write about “loss of autonomy” to the on-line tech giants; it is another to have felt a measure of that loss.

So where does this leave me, or any of us?

How do lawmakers and regulators limit (or prevent) subtle but nonetheless real chemical dependency when it’s induced by a tech platform?

Is breaking the ad-based business models that turn so many of us into captive buyers even possible in a market system that has used advertising to stoke sales for more than 200 years? Can our consumer-oriented economy turn its back on what may be the most effective sales model ever invented?

To think that we are grappling with either of these questions today would be an illusion.

The U.S. Federal Trade Commission has just fined Facebook (which is IG’s owner) for failing to implement and enforce the privacy policies that it had promised to implement and enforce years ago. The FTC also mandated oversight of Zuckerberg personally; unlike the CEOs of other public companies, he has effective ownership control of Facebook, so his board of directors can’t really hold his feet to the fire. But neither the fine nor this new oversight mechanism challenges the company’s underlying business model, which is to (1) induce an oxytocin dependency in its users; (2) gather their personal data while they are feeling good by satisfying their cravings; (3) sell their personal data to advertisers; and (4) profit from the ads that are aimed at users who either don’t know or don’t care that they are being seduced in this way.

Recently announced antitrust investigations are also aimed at different problems. The Justice Department, FTC and Congress will be questioning the size of companies like Facebook and their dominance among competitors. One remedy might break Facebook into smaller pieces (like undoing its 2012 purchase of Instagram). However, these investigations are not about challenging a business model that induces dependency in its users, eavesdrops on their personal behavior both on-site and off of it, and then turns them into consumers of the products on its shelves. The best that can be hoped for is that some of these dominant platforms may be cut down to size and have some of their anti-competitive practices curtailed.

Even the data-privacy initiatives that some are proposing are unlikely to change this business model. Their most likely result is that users who want to restrict access to, and use of, their personal information will have to pay for the privilege of utilizing Facebook or Google or migrate to new privacy-protecting platforms that will be coming on-line. I profiled one of them, called Solid, on this page a few weeks back.

Since it looks like we’ll be stuck in this brave new world for a while, why does it matter that we’re being misused in this way?

Personal behavior has always been influenced by whatever “the Joneses” were buying or doing next door (if you were desperate enough to keep up with them). In high school you changed what you were wearing or who you were hanging out with if you wanted to be seen as one of the cool kids. Realizing that your hero, James Bond, is wearing an Omega watch might make you want to buy one too. But the influence to buy or to imitate that I’m describing here with Instagram feels new, different and more invasive, like we’ve entered the realm of science fiction.

Social media companies like Facebook and Instagram are using psychological power that we’ve more or less given them to remove some of the freedom in our choices, so that they, in turn, can make Midas kingdoms of money off of us. And perhaps their best trick of all is that you only feel the ache of dependency that kept you in their rabbit holes—and how they conditioned you to respond once you were in them—after you decide to leave.

Saul Steinberg in the New Yorker, November 16, 1968

Maybe the scariest part of this was my knowing better, but acquiescing anyway, for all of those years. 
 
It’s particularly alarming given my belief that autonomy (along with generosity) is among the most important qualities that I have.
 
I guess I had to feel what had happened to me in order to understand the subtlety of my addiction, the loss of freedom that my cravings for connection had induced, and my susceptibility to being used, against my will, by strangers for their own, very different purposes.
 
By delivering “warm and fuzzies” every day and getting me to stay for their commercials, Instagram became my small experience of mind control and Big Brother.
 
Over the past few weeks, when I’ve seen people looking for something in their phones, I’ve thought differently about what they’re doing. That’s because I still feel some of the need for whatever they may be looking for too.
 
It gives a whole new meaning to “the dog days” this summer.

+ + +

I’d love to hear from you if you’ve had a similar experience with a social network like Facebook or Instagram. If we don’t end up talking before then, I’ll see you next week.

This post was adapted from my July 28, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.

Filed Under: *All Posts, Building Your Values into Your Work, Daily Preparation, Using Humor Effectively, Work & Life Rewards Tagged With: addiction and withdrawal, addiction to social media, Big Brother, dog days, facebook, Instagram, manipulation, mind control, oxytocin, prevention, regulation, safeguards, Saul Steinberg, seat of good judgment, social media, social networks

These Tech Platforms Threaten Our Freedom

December 9, 2018 By David Griesing

We’re being led by the nose about what to think, buy, do next, or remember about what we’ve already seen or done.  Oh, and how we’re supposed to be happy, what we like and don’t like, what’s wrong with our generation, why we work. We’re being led to conclusions about a thousand different things and don’t even know it.

The image that captures the erosion of our free thinking by influence peddlers is the frog in the saucepan. The heat is on, the water’s getting warmer, and by the time it’s boiling it’s too late for her to climb back out. Boiled frog, preceded by pleasantly warm and oblivious frog, captures the critical path pretty well. But instead of slow cooking, it’s shorter and shorter attention spans, the slow retreat of perspective and critical thought, and the final loss of freedom.

We’ve been letting the control booths behind the technology reduce the free exercise of our lives and work and we’re barely aware of it. The problem, of course, is that the grounding for good work and a good life is having the autonomy to decide what is good for us.

This kind of tech-enabled domination is hardly a new concern, but we’re wrong in thinking that it remains in the realm of science fiction.

An authority’s struggle to control our feelings, thoughts and decisions was the theme of George Orwell’s 1984, which was written 35 years before the fateful year that he envisioned. “Power,” said Orwell, “is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.” Power persuades you to buy something when you don’t want or need it. It convinces you about this candidate’s, that party’s or some country’s evil motivations. It tricks you into accepting someone else’s motivations as your own. In 1984, free wills were weakened and constrained until they were no longer free. “If you want a picture of the future,” Orwell wrote, “imagine a boot stamping on a human face—for ever.”

Maybe this reflection of the present seems too extreme to you.

After all, Orwell’s jackbooted fascists and communists were defeated by our Enlightenment values. Didn’t the first President Bush, whom we buried this week, preside over some of it? The authoritarians were down and seemed out in the last decade of the last century—Freedom Finally Won!—which just happened to be the very same span of years when new technologies and communication platforms began to enable the next generation of dominators.

(There is no true victory over one man’s will to deprive another of his freedom, only a truce until the next assault begins.)

Twenty years later, in his book Who Owns the Future? (2013), Jaron Lanier argued that a new battle for freedom must be fought against powerful corporations fueled by advertisers and other “influencers” who are obsessed with directing our thoughts today.

In exchange for “free” information from Google, “free” networking from Facebook, and “free” deliveries from Amazon, we open our minds to what Lanier calls “siren servers,” the cloud computing networks that drive much of the internet’s traffic. Machine-driven algorithms collect data about who we are to convince us to buy products, judge candidates for public office, or determine how the majority in a country like Myanmar should deal with a minority like the Rohingya.

Companies, governments, and groups with good and bad motivations use our data to influence our future buying and other decisions on technology platforms that didn’t even exist when the first George Bush was president but now, only a couple of decades later, seem indispensable to nearly all of our commerce and communication. Says Lanier:

When you are wearing sensors on your body all the time, such as the GPS and camera on your smartphone and constantly piping data to a megacomputer owned by a corporation that is paid by ‘advertisers’ to subtly manipulate you…you are gradually becoming less free.

And all the while we were blissfully unaware that this was happening because the bath was so convenient and the water inside it seemed so warm. Franklin Foer, who addresses tech issues in The Atlantic and wrote 2017’s World Without Mind: The Existential Threat of Big Tech, talks about this calculated seduction in an interview he gave this week:

Facebook and Google [and Amazon] are constantly organizing things in ways in which we’re not really cognizant, and we’re not even taught to be cognizant, and most people aren’t… Our data is this cartography of the inside of our psyche. They know our weaknesses, and they know the things that give us pleasure and the things that cause us anxiety and anger. They use that information in order to keep us addicted. That makes [these] companies the enemies of independent thought.

The poor frog never understood that accepting all these “free” invitations to the saucepan meant that her freedom to climb back out was gradually being taken away from her.

Of course, we know that nothing is truly free of charge, with no strings attached. But appreciating the danger in these data-driven exchanges—and being alert to the persuasive tools that are being arrayed against us—are not the only wake-up calls that seem necessary today. We also can (and should) confront two other tendencies that undermine our autonomy while we’re bombarded with too much information from too many different directions. They are our confirmation bias and what’s been called our illusion of explanatory depth.

Confirmation bias leads us to stop gathering information when the evidence we’ve gathered so far confirms the views (or biases) that we would like to be true. In other words, we ignore or reject new information, maintaining an echo chamber of sorts around what we’d prefer to believe. This kind of mindset is the opposite of self-confidence, because all we’re truly interested in doing outside ourselves is searching for evidence to shore up our egos.

Of course, the thought controllers know about our propensity for confirmation bias and seek to exploit it, particularly when we’re overwhelmed by too many opposing facts, have too little time to process the information, and long for simple black and white truths. Manipulators and other influencers have also learned from social science that our reduced attention spans are easily tricked by the illusion of explanatory depth, or our belief that we understand things far better than we actually do.

The illusion that we understand things far better than we actually do extends to anything that we can misunderstand. It comes about because we consume knowledge widely but not deeply, and since that is rarely enough for understanding, our same egos claim that we know more than we actually do. We all know that ignorant people are the most over-confident in their knowledge, but how easily we delude ourselves about the majesty of our own ignorance. For example, I regularly ask people questions about all sorts of things that they might know about. It’s almost the end of the year as I write this, and I can count on one hand the number of them who have responded to my questions by saying “I don’t know” over the past twelve months. Most have no idea how little understanding they bring to whatever they’re talking about. It’s simply more comforting to pretend that we have all of this confusing information fully processed and under control.

Luckily, for confirmation bias or the illusion of explanatory depth, the cure is as simple as finding a skeptic and putting him on the other side of the conversation so he will hear us out and respond to or challenge whatever it is that we’re saying. When our egos are strong enough for that kind of exchange, we have an opportunity to explain our understanding of the subject at hand. If, as often happens, the effort of explaining reveals how little we actually know, we are almost forced to become more modest about our knowledge and less confirming of the biases that have taken hold of us.  A true conversation like this can migrate from a polarizing battle of certainties into an opportunity to discover what we might learn from one another.

The more that we admit to ourselves and to others what we don’t know, the more likely we are to want to fill in the blanks. Instead of false certainties and bravado, curiosity takes over—and it feels liberating precisely because becoming well-rounded in our understanding is a well-spring of autonomy.

When we open ourselves like this instead of remaining closed, we’re less receptive to, and far better able to resist, the “siren servers” that would manipulate our thoughts and emotions by playing to our biases and illusions. When we engage in conversation, we also realize that devices like our cell phones and platforms like our social networks are, in Foer’s words, actually “enemies of contemplation” that are “preventing us from thinking.”

Lanier describes the shift from this shallow tech-driven stimulus/response to a deeper assertion of personal freedom in a profile that was written about him in the New Yorker a few years back.  Before he started speaking at a South-by-Southwest Interactive conference, Lanier asked his audience not to blog, text or tweet while he spoke. He later wrote that his message to the crowd had been:

If you listen first, and write later, then whatever you write will have had time to filter through your brain, and you’ll be in what you say. This is what makes you exist. If you are only a reflector of information, are you really there?

Lanier makes two essential points about autonomy in this remark. Instead of processing on the fly, where the dangers of bias and illusions of understanding are rampant, allow what is happening “to filter through your brain,” because when it does, there is a far better chance that whoever you really are, whatever you truly understand, will be “in” what you ultimately have to say.

His other point is about what you risk becoming if you fail to claim a space for your freedom to assert itself in your lives and work. When you’re reduced to “a reflector of information,” are you there at all anymore or merely reflecting the reality that somebody else wants you to have?

We all have a better chance of being contented and sustained in our lives and work when we’re expressing our freedom, but it’s gotten a lot more difficult to exercise it given the dominant platforms that we’re relying upon for our information and communications today.

This post was adapted from my December 9, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning, Work & Life Rewards Tagged With: Amazon, autonomy, communication, confirmation bias, facebook, Franklin Foer, free thinking, freedom, Google, illusion of explanatory depth, information, information overhoad, Jaron Lanier, tech, tech platforms, technology

Looking Out For the Human Side of Technology

October 28, 2018 By David Griesing

Maintaining human priorities in the face of new technologies always feels like “a rearguard action.” You struggle to prevent something bad from happening even when it seems like it may be too late.

The promise of the next tool or system intoxicates us. Smart phones, social networks, gene splicing.  It’s the super-computer at our fingertips, the comfort of a boundless circle of friends, the ability to process massive amounts of data quickly or to short-cut labor intensive tasks, the opportunity to correct genetic mutations and cure disease. We’ve already accepted these promises before we pause to consider their costs—so it always feels like we’re catching up and may not have done so in time.

When you’re dazzled by possibility and the sun is in your eyes, who’s thinking “maybe I should build a fence?”

The future that’s been promised by tech giants like Facebook is not “the win-win” that we thought it was. Their primary objectives are to serve their financial interests—those of their founder-owners and other shareholders—by offering efficiency benefits like convenience and low cost to the rest of us. But as we’ve belatedly learned, they’ve taken no responsibility for the harms they’ve also caused along the way, including exploitation of our personal information, the proliferation of fake news and jeopardy to democratic processes, as I argued here last week.

Technologies that are not associated with particular companies also run with their own promise until someone gets around to checking them–a technology like artificial intelligence or AI, for example. From an ethical perspective, we are usually playing catch-up with them too. If there’s a buck to be made or a world to transform, the discipline to ask “but should we?” always seems to get in the way of progress.

Because our lives and work are increasingly impacted, this week’s stories throw additional light on the technology juggernaut that threatens to overwhelm us and on our “rearguard” attempts to tame it with our human concerns.

To gain a fuller appreciation of the problem regarding Facebook, a two-part Frontline documentary will be broadcast this week that is devoted to what one reviewer calls “the amorality” of the company’s relentless focus on adding users and compounding ad revenues while claiming to create the on-line “community” that all of us should want in the future. (The show airs tomorrow, October 29 at 9 p.m. and on Tuesday, October 30 at 10 p.m. EST on PBS.)

Frontline’s reporting covers Russian interference in past and current election cycles, Facebook’s role in whipping Myanmar’s Buddhists into a frenzy over its Rohingya minority, and how strongmen like Rodrigo Duterte in the Philippines have been manipulating the site to achieve their political objectives. Facebook CEO Mark Zuckerberg’s limitations as a leader are explored from a number of directions, but none as compelling as his off-screen impact on the five Facebook executives who were “given” to James Jacoby (the documentary’s director, writer and producer) to answer his questions. For the reviewer:

That they come off like deer in Mr. Jacoby’s headlights is revealing in itself. Their answers are mealy-mouthed at best, and the defensive posture they assume, and their evident fear, indicates a company unable to cope with, or confront, the corruption that has accompanied its absolute power in the social media marketplace.

You can judge for yourself. You can also ponder whether this is like holding a gun manufacturer liable when one of its guns is used to kill somebody. I’ll be watching “The Facebook Dilemma” for what it has to say about a technology whose benefits have obscured its harms in the public mind for longer than they probably should have. But then I remember that Facebook barely existed ten years ago. The most important lesson from these Frontline episodes may be how quickly we need to get the stars out of our eyes after meeting these powerful new technologies if we are to have any hope of avoiding their most significant fallout.

Proceed With Caution

I was also struck this week by Apple CEO Tim Cook’s explosive remarks at a privacy conference organized by the European Union.

Not only was Cook bolstering his own company’s reputation for protecting Apple users’ personal information, he was also taking aim at competitors like Google and Facebook for implementing a far more harmful business plan: selling user information to advertisers, reaping billions in ad revenues in exchange, and claiming that the bargain merely provides their search engine or social network to users for “free.” This is some of what Cook had to say to European regulators this week:

Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency. Today, that trade has exploded into a data-industrial complex.

These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable.

Technology is and must always be rooted in the faith people have in it. We also recognize not everyone sees it that way—in a way, the desire to put profits over privacy is nothing new.

“Weaponized” technology delivered with “military efficiency.” “A data-industrial complex.” One of the benefits of competition is that rivals call you out, while directing unwanted attention away from themselves. One of my problems with tech giant Amazon, for example, is that it lacks a neck-and-neck rival to police its business practices, so Cook’s (and Apple’s) motives here have more than a dollop of competitive self-interest where Google and Facebook are concerned. On the other hand, Apple is properly credited with limiting the data it makes available to third parties and rendering the data it does provide anonymous. There is a bit more to the story, however.

If data privacy were as paramount to Apple as it sounded this week, it would be impossible to reconcile Apple’s receiving more than $5 billion a year from Google to make it the default search engine on all Apple devices. However complicit it may be in today’s tech bargains, Apple pushed its rivals pretty hard this week to modify their business models and become less cynical about their use of our personal data as the focus on regulatory oversight moves from Europe to the U.S.

Keeping Humans in the Tech Equation

Technologies that aren’t proprietary to a particular company but are instead used across industries require getting over additional hurdles to ensure that they are meeting human needs and avoiding technology-specific harms for users and the rest of us. This week, I was reading up on a positive development regarding artificial intelligence (AI) that only came about because serious concerns were raised about the transparency of AI’s inner workings.

AI’s ability to solve problems (from processing big data sets to automating steps in a manufacturing process or tailoring a social program for a particular market) is only as good as the algorithms it uses. Given concerns about personal identity markers such as race, gender and sexual preference, you may already know that an early criticism of artificial intelligence was that the author of an algorithm could be unwittingly building her own biases into it, leading to discriminatory and other anti-social results. Various countermeasures are now being undertaken to minimize grounding these kinds of biases in AI code. With that in mind, I read a story this week about another systemic issue with AI: its “explainability.”

It’s the so-called “black box” problem. If users of systems that depend on AI don’t know how they work, they won’t trust them. Unfortunately, one of the prime advantages of AI is that it solves problems that are not easily understood by users, which presents the quandary that AI-based systems might need to be “dumbed-down” so that the humans using them can understand and then trust them. Of course, no one is happy with that result.

A recent article in Forbes describes the trust problem that users of machine-learning systems experience (“interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control”) along with some of the experts who have been feeling that anxiety (cancer specialists who agreed with a “Watson for Oncology” system when it confirmed their judgments but thought it was wrong when it failed to do so because they couldn’t understand how it worked).

In a positive development, a U.S. Department of Defense agency called DARPA (or Defense Advanced Research Projects Agency) is grappling with the explainability problem. Says David Gunning, a DARPA program manager:

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.

In other words, these systems will get better at explaining themselves to their users, thereby overcoming at least some of the trust issue.

DARPA is investing $2 billion in what it calls “third-wave AI systems…where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” according to Gunning. At least with the future of warfare at stake, a problem like “trust” in the human interface appears to have stimulated a solution. At some point, all machine-learning systems will likely be explaining themselves to the humans who are trying to keep up with them.
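
To make “explainability” a little more concrete, here is a toy sketch in Swift (entirely my own invention, and far simpler than anything DARPA is funding) of the difference between a bare, black-box score and a prediction that can show its work:

```swift
// A toy "explainable" scorer: a linear model that can decompose its
// prediction into the contribution each input made, instead of
// returning a bare number from a black box.
struct ExplainableModel {
    let featureNames: [String]
    let weights: [Double]

    func predict(_ inputs: [Double]) -> (score: Double, rationale: [String]) {
        let contributions = zip(weights, inputs).map { $0 * $1 }
        let score = contributions.reduce(0, +)
        // One human-readable line per feature, signed by its effect.
        let rationale = zip(featureNames, contributions).map {
            "\($0) contributed \(String(format: "%+.2f", $1))"
        }
        return (score, rationale)
    }
}

let model = ExplainableModel(
    featureNames: ["tumor size", "patient age", "marker level"],
    weights: [0.8, 0.1, 1.5]
)
let (score, why) = model.predict([2.0, 0.5, 1.0])
print(score)              // 3.15
why.forEach { print($0) } // the model's rationale, feature by feature
```

A real machine-learning system is vastly more complicated than a weighted sum, which is exactly why its rationale is so much harder to surface; the point of the sketch is only that trust comes from seeing the contributions, not just the conclusion.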

Moving beyond AI, I’d argue that there is often as much “at stake” as successfully waging war when a specific technology is turned into a consumer product that we use in our workplaces and homes.

While there is heightened awareness today about the problems that Facebook poses, few were raising these concerns even a year ago despite their toxic effects. With other consumer-oriented technologies, there are a range of potential harms where little public dissent is being voiced despite serious warnings from within and around the tech industry. For example:

– how much is our time spent on social networks—in particular, how these networks reinforce or discourage certain of our behaviors—literally changing who we are?  
 
– since our kids may be spending more time with their smart phones than with their peers or family members, how is their personal development impacted, and what can we do to put this rabbit even partially back in the hat now that smart phone use seems to be a part of every child’s rite of passage into adulthood?
 
– will privacy and surveillance concerns become more prevalent when we’re even more surrounded than we are now by “the internet of things” and as our cars continue to morph into monitoring devices—or will there be more of an outcry for reasonable safeguards beforehand? 
 
– what are employers learning about us from our use of technology (theirs as well as ours) in the workplace and how are they using this information?

The technologies that we use demand that we understand their harms as well as their benefits. I’d argue that we need to become more proactive about voicing our concerns and about using the tools at our disposal (including the political process) to insist that company profit and consumer convenience are not the only measures of a technology’s impact.

Since the invention of the printing press half a millennium ago, it’s always been hard but necessary to catch up with technology and to try and tame its excesses as quickly as we can.

This post was adapted from my October 28, 2018 newsletter.

Filed Under: *All Posts, Building Your Values into Your Work, Continuous Learning Tagged With: Amazon, Apple, ethics, explainability, facebook, Google, practical ethics, privacy, social network harms, tech, technology, technology safeguards, the data industrial complex, workplace ethics

Your Pictures Help Tell Your Story

June 30, 2012 By David Griesing

I’m moving from my ungainly house of 25 years to a flat in the sky. It’s a misty time, leaving the ground. Among other things, I’m saying goodbye to lots of flowers outside, and to many sweaty hours helping them grow.

The arc of this gorgeous spring has turned it into a long and very satisfying goodbye. Almost every day brings camera-phone pictures that will one day be joined into a visual feast of yard things from the final year.

(It also makes me sad to think that one day soon I may use these smiling flowers as part of a sales pitch – that advertising our home’s value in this way will turn these children of mine into pretty little prostitutes. Only over time does this saner parent admit the many contributions they still make, and how happy they’ll be if they can help me to find them another caregiver for those seasons when I’m gone.)

I’m outside again this morning, just returned from a conference where I kept being pulled into the orbit of people like Matteo Wyliyams (@mouselink) and Alan Weinkrantz (@alanweinkrantz) talking excitedly about how they are using their phones like wands to tell Stories That Enrich Their Own with Instagram.

Every picture you share tells some of your story, they said.

(A few weeks back this same flowering spring, the story was that Mark Zuckerberg determined the price he’d pay for that photo-sharing company by naming the pizza delivered into his living room negotiations “Facebook,” and then figuring out how big a “slice” of its value Instagram should command.)

A lot, they agreed. And worth every penny according to my new conference friends: way more than a thousand words.

(But for Instagram’s founders, the story never told and the pictures never shared were about how saying good-bye to a company you grow is not so different from saying good-bye to a flower. The irony: that we never got to see the play of light, or their unique point of view at that moment in time – and what it would have told us about them.)

Today I’m working on the final curation of my yard, and of my last days in it, through the many screens of nature around here.

I’m calling the pictures I’ve started sharing “screentests”.

TODAY’S PAPER – SELLING SOMETHING

They’re another part of my story.

Filed Under: *All Posts, Introducing Yourself & Your Work Tagged With: facebook, Instagram, play of light, point of view, screentests, selling a home
