David Griesing | Work Life Reward Author | Philadelphia


Citizens Will Decide What’s Important in Smart Cities

July 8, 2019 By David Griesing

The norms that dictate the acceptable use of artificial intelligence in technology are in flux. That’s partly because the AI-enabled, personal data gathering by companies like Google, Facebook and Amazon has caused a spirited debate about the right of privacy that individuals have over their personal information. With your “behavioral” data, the tech giants can target you with specific products, influence your political views, manipulate you into spending more time on their platforms, and weaken the control that you have over your own decision-making.
 
In most of the debate about the harms of these platforms thus far, our privacy rights have been poorly understood.  In fact, our anything-but-clear commitments to the integrity of our personal information have enabled these tech giants to overwhelm our initial, instinctive caution as they seduced us into believing that “free” searches, social networks or next day deliveries might be worth giving them our personal data in return. Moreover, what alternatives did we have to the exchange they were offering?

  • Where were the privacy-protecting search engines, social networks and on-line shopping hubs?
  • Moreover, once we got hooked on these data-sucking platforms, wasn’t it already too late to “put the ketchup back in the bottle” where our private information was concerned? Don’t these companies (and the data brokers that enrich them) already have everything that they need to know about us?

Overwhelmed by the draw of “free” services from these tech giants, we never bothered to define the scope of the privacy rights that we relinquished when we accepted their “terms of service.” Now, several years into this brave new world of surveillance and manipulation, many feel that it’s already too late to do anything, and even if it weren’t, we are hardly willing to give up the advantages of these platforms when we can’t find them anywhere else.
 
So is there really “no way out”?  
 
A rising chorus of voices is gradually finding a way, and they are coming at it from several different directions.
 
In places like Toronto, London, Helsinki, Chicago and Barcelona, policy makers and citizens alike are defining the norms around personal data privacy at the same time that they’re grappling with the potential fallout of similar data-tracking, analyzing and decision-making technologies in smart-city initiatives.
 
Our first stop today is to eavesdrop on how these cities are grappling with both the advantages and harms of smart-city technologies, and how we’re all learning—from the host of scenarios they’re considering—why it makes sense to shield our personal data from those who seek to profit from it.  The rising debate around smart-city initiatives is giving us new perspectives on how surveillance-based technologies are likely to impact our daily lives and work. As the risks to our privacy are played out in new, easy-to-imagine contexts, more of us will become more willing to protect our personal information from those who could turn it against us in the future.
 
How and why norms change (and even explode) during civic conversations like this is a topic that Cass Sunstein explores in his new book How Change Happens. Sunstein considers the personal impacts when norms involving issues like data privacy are in flux, and the role that understanding other people’s priorities always seems to play. Some of his conclusions are also discussed below. As “dataveillance” is increasingly challenged and we contextualize our privacy interests even further, the smart-city debate is likely to usher in a more durable norm regarding data privacy while, at the same time, allowing us to realize the benefits of AI-driven technologies that can improve urban efficiency, convenience and quality of life.
 
With the growing certainty that our personal privacy rights are worth protecting, it is perhaps no coincidence that there are new companies on the horizon that promise to provide access to the on-line services we’ve come to expect without our having to pay an unacceptable price for them.  Next week, I’ll be sharing perhaps the most promising of these new business models with you as we begin to imagine a future that safeguards instead of exploits our personal information. 

1.         Smart-City Debates Are Telling Us Why Our Personal Data Needs Protecting

Over the past six months, I’ve talked repeatedly about smart-city technologies, and one of you reached out to me this week wondering: “What (exactly) are these new ‘technologies’?” (Thanks for your question, George!)
 
As a general matter, smart-city technologies gather and analyze information about how a city functions, while improving urban decision-making around that new information. Throughout, these data-gathering,  analyzing, and decision-making processes rely on artificial intelligence. In his recent article “What Would It Take to Help Cities Innovate Responsibly With AI?” Eddie Copeland begins by describing the many useful things that AI enables us to do in this context: 

AI can codify [a] best practice and roll it out at scale, remove human bias, enable evidence-based decision making in the field, spot patterns that humans can’t see, optimise systems too complex for humans to model, quickly digest and interpret vast quantities of data and automate demanding cognitive activities.

In other words, in a broad range of urban contexts, a smart-city system with AI capabilities can make progressively better decisions about nearly every aspect of a city’s operations by gaining an increasingly refined understanding of how its citizens use the city and are, in turn, served by its managers.
 
Of course, the potential benefits of greater or more equitable access to city services, as well as their optimized delivery, are enormous. Despite some of the current hue and cry, a smart-cities future does not have to resemble Big Brother. Instead, it could liberate time and money that are currently being wasted, permitting their reinvestment in areas that produce a wider variety of benefits to citizens at every level of government.
 
Over the past weeks and months, I’ve been extolling the optimism that drove Toronto to launch its smart-cities initiative, Quayside, and tracking how its debate has more recently entered a stormy patch. Amid the finger-pointing among Google affiliate Sidewalk Labs, government leaders and civil rights advocates, Sidewalk (which is providing the AI-driven tech interface) has consistently stated that no citizen-specific data it collects will be sold, but the devil (as they say) remains in the as-yet-undisclosed details. This is from a statement the company issued in April:

Sidewalk Labs is strongly committed to the protection and privacy of urban data. In fact, we’ve been clear in our belief that decisions about the collection and use of urban data should be up to an independent data trust, which we are proposing for the Quayside project. This organization would be run by an independent third party in partnership with the government and ensure urban data is only used in ways that benefit the community, protect privacy, and spur innovation and investment. This independent body would have full oversight over Quayside. Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership, and governance. But this debate must be rooted in fact, not fiction and fear-mongering.

As a result of experiences like Toronto’s (and many others, where a new technology is introduced to unsuspecting users), I argued in last week’s post for longer “public ventilation periods” to understand the risks as well as rewards before potentially transformative products are launched and actually used by the public.
 
In the meantime, other cities have also been engaging their citizens in just this kind of information-sharing and debate. Last week, a piece in the New York Times elaborated on citizen-oriented initiatives in Chicago and Barcelona after noting that:

[t]he way to create cities that everyone can traverse without fear of surveillance and exploitation is to democratize the development and control of smart city technology.

While Chicago was developing a project to install hundreds of sensors throughout the city to track air quality, traffic and temperature, it also held public meetings and released policy drafts to promote a city-wide discussion on how to protect personal privacy. According to the Times, this exchange shaped policies that reduced, among other things, the amount of footage that monitoring cameras retained. For its part, Barcelona has modified its municipal procurement contracts with smart-city technology vendors to announce up front its intentions about the public’s ownership and control of personal data.
 
Earlier this year, London and Helsinki announced a collaboration that would enable them to share “best practices and expertise” as they develop their own smart-city systems. A statement by one driver of this collaboration, Smart London, provides the rationale for a robust public exchange:

The successful application of AI in cities relies on the confidence of the citizens it serves.
 
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
 
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety — just as in other work we do.

To create “an ethical framework for public servants and [a] line-of-sight for the city leaders,” Smart London proposed that citizens, subject matter experts, and civic leaders should all ask and vigorously debate the answers to the following 10 questions:

  • Objective– why is the AI needed and what outcomes is it intended to enable?
  • Use– in what processes and circumstances is the AI appropriate to be used?
  • Impacts– what impacts, good and bad, could the use of AI have on people?
  • Assumptions– what assumptions is the AI based on, and what are their limitations and potential biases?
  • Data– what data is/was the AI trained on, and what are its limitations and potential biases?
  • Inputs– what new data does the AI use when making decisions?
  • Mitigation– what actions have been taken to regulate the negative impacts that could result from the AI’s limitations and potential biases?
  • Ethics– what assessment has been made of the ethics of using this AI? In other words, does the AI serve important, citizen-driven needs as we currently understand those priorities?
  • Oversight– what human judgment is needed before acting on the AI’s output and who is responsible for ensuring its proper use?
  • Evaluation– how and by what criteria will the effectiveness of the AI in this smart-city system be assessed and by whom?

As stakeholders debate these questions and answers, smart-city technologies with broad-based support will be implemented while citizens gain a greater appreciation of the privacy boundaries they are protecting.
 
Eddie Copeland, who described the advantages of smart-city technology above, also urges that steps beyond a city-wide Q&A be undertaken to increase the awareness of what’s at stake and enlist the public’s engagement in the monitoring of these systems.  He argues that democratic methods or processes need to be established to determine whether AI-related approaches are likely to solve a specific problem a city faces; that the right people need to be assembled and involved in the decision-making regarding all smart-city systems; and that this group needs to develop and apply new skills, attitudes and mind-sets to ensure that these technologies maintain their citizen-oriented focus. 
 
As I argued last week, the initial ventilation process takes a long time and hard work. Moreover, it is difficult (and maybe impossible) to conduct if negotiations with the technology vendor are on-going or that vendor is “on the clock.”
 
Democracy should have the space and time to be proactive instead of reactive whenever transformational tech-driven opportunities are presented to the public.

(AP Photo/David Goldman)

2.         A Community’s Conversation Helps Norms to Evolve, One Citizen at a Time

I started this post with the observation that many (if not most) of us initially felt that it was acceptable to trade access to our personal data if the companies that wanted it were providing platforms that offered new kinds of enjoyment or convenience. Many still think it’s an acceptable trade. But over the past several years, as privacy advocates have become more vocal, leading jurisdictions have begun to enact data-privacy laws, and Facebook has been criticized for enabling Russian interference in the 2016 election and the genocide in Myanmar, how we view this trade-off has begun to change.  
 
In a chapter of his new book How Change Happens, legal scholar Cass Sunstein argues that these kinds of widely-seen developments:

can have a crucial and even transformative signaling effect, offering people information about what others think. If people hear the signal, norms may shift, because people are influenced by what they think other people think.

Sunstein describes what happens next as an “unleashing” process where people who never formed a full-blown preference on an issue like personal data privacy (or were simply reluctant to express it because the trade-offs for “free” platforms seemed acceptable to everybody else) now become more comfortable giving voice to their original qualms. In support, he cites a remarkable study about how a norm that gave Saudi Arabian husbands decision-making power over their wives’ work-lives suddenly began to change when actual preferences became more widely known.

In that country, there remains a custom of “guardianship,” by which husbands are allowed to have the final word on whether their wives work outside the home. The overwhelming majority of young married men are privately in favor of female labor force participation. But those men are profoundly mistaken about the social norm; they think that other, similar men do not want women to join the labor force. When researchers randomly corrected those young men’s beliefs about what other young men believed, they became far more willing to let their wives work. The result was a significant impact on what women actually did. A full four months after the intervention, the wives of men in the experiment were more likely to have applied and interviewed for a job.

When more people either speak up about their preferences or are told that others’ inclinations are similar to theirs, the prevailing norm begins to change.
 
A robust, democratic process that debates the advantages and risks of AI-driven, smart city technologies will likely have the same change-inducing effect. The prevailing norm that finds it acceptable to exchange our behavioral data for “free” tech platforms will no longer be as acceptable as it once was. The more we ask the right questions about smart-city technologies and the longer we grapple as communities with the acceptable answers, the faster the prevailing norm governing personal data privacy will evolve.  
 
Our good work as citizens is to become more knowledgeable about the issues and to champion what is important to us in dialogue with the people who live and work alongside us. More grounds for protecting our personal information are coming out of the smart-cities debate, and we are already deciding where new privacy lines should be drawn around us.

This post was adapted from my July 7, 2019 newsletter. When you subscribe, a new newsletter/post will be delivered to your inbox every Sunday morning.


New Starting Blocks for the Future of Work

March 10, 2019 By David Griesing

(picture by Edson Chagas)

As a challenging future rushes towards us, I often wonder whether our democratic values will continue to provide a sound enough foundation for our lives and work.
 
In many ways, this “white-water world” is already here. As framed by John Seely Brown in a post last summer, it confronts us with knowledge that’s simply “too big to know” and a globe-spanning web of interconnections that seems to constantly alter what’s in front of us, like the shifting views of a kaleidoscope.
 
It’s a brave new world that:

– makes a fool out of the concept of mastery in all areas except our ability–or inability–to navigate [its] turbulent waters successfully;
 
– requires that we work in more playful and less pre-determined ways in an effort to keep up with the pace of change and harness it for a good purpose;
 
– demands workplaces where the process of learning allows the tinkerer in all of us “to feel safe” from getting it wrong until we begin to get it right;
 
– calls on us to treat technology as a toolbox for serving human needs as opposed to the needs of states and corporations alone;  and finally,
 
– requires us to set aside time for reflection “outside of the flux” so that we can consider the right and wrong of where we’re headed, commit to what we value, and return to declare those values in the rough and tumble of our work tomorrow.

In the face of these demands, the most straightforward question is whether we will be able to safeguard our personal wellbeing and continue to enjoy a prosperous way of life. Unfortunately, neither of these objectives seems as readily attainable as they once did.
 
When our democratic values (such as freedom and championing individual rights) no longer ensure our wellbeing and prosperity, those values get questioned and eventually challenged in our politics.
 
Last week, I wrote here about the dangerous risks—like addiction and behavioral modification—that our kids and others confront by spending too much screen time playing on-line games like Fortnite. Despite a crescendo of anecdotal evidence about the harms to boys in particular, the freedom-loving (and endlessly distracted) West seems stymied when it comes to deciding what to do about it. On the other hand, China easily moved from identifying the harm to its collective wellbeing to implementing time restrictions on the amount of on-line play. It was the Great Firewall’s ability to intervene quickly that prompted one observer to wonder how those of us in the so-called “first world” will respond to  “the spectacle of a civilisation founded [like China’s] on a very different package of values — but one that can legitimately claim to promote human flourishing more vigorously than their own”?
 
Meanwhile, in a Wall Street Journal essay last weekend, its authors documented the ability of authoritarian countries with capitalist economies to raise the level of prosperity enjoyed by their citizens in recent years. Not so long ago, the allure of the West to the “second” and “third worlds” was that prosperity seemed to go hand-in-hand with democratic values and institutions. That conclusion is far less clear today. With rising prosperity in authoritarian nations like China and Vietnam—and the likelihood that there will soon be far more prosperous citizens in these countries than outside of them—the authors fretted that:

It isn’t clear how well democracy, without every material advantage on its side, will fare in the competition [between our very different value systems.]

With growing uncertainty about whether Western values and institutions can produce sufficient benefits for its citizens, and with “the white-water world” where we live and work challenging our navigational skills, it seems a good time to return to some questions that we’ve chewed on here before about “how we can best get ready for the challenges ahead of us.” 
 
Can the ways that we educate our kids (and retrain ourselves) enable us to proclaim our humanity, secure our self-worth, and continue to find a valued place for ourselves in the increasingly complex world of work? 
 
Can championing new teaching methods strengthen democratic values and deliver more of their promise to us in terms of wellbeing and prosperity than it seems we can count on today?
 
Are new and different classrooms the keys to our futures?

1.         You Treasure What You Measure

Until this week, I never considered that widely administered education tests would provide any of these answers—but I probably should have—because in a very real way, we treasure the aptitudes and skills, indeed everything that we take the time to measure. Gross national product, budget and trade deficits, unemployment rates, the 1% versus everyone else: what is most important to us is endlessly calculated, publicized and analyzed. We also value these measures because they help us decide what to do next, like stimulating the economy, cutting government programs, or implementing trade restrictions. Measures influence actions.
 
It’s much the same with the measures we obtain from the educational tests that we administer, and in this regard, no test today is more influential than the Programme for International Student Assessment or PISA. PISA was first given in 2000 in 32 countries, marking the first time that national education systems were evaluated and could be compared with one another. The test measured 15-year-olds’ scholastic performance in mathematics, science and reading. No doubt you’ve heard some of the results, including the United States’ disappointing placement in the middle of the international pack. The test is given every three years, and in 2018, 79 countries and economies participated in the testing and data collection.
 
According to an article this week in the on-line business journal Quartz, “the results…are studied by educators the way soccer fans obsess over the World Cup draw.”
 
No one thinks more about the power of the PISA test, the information that it generates, and what additional feats it might accomplish than Andreas Schleicher, a German data scientist who heads the education division of the Organisation for Economic Cooperation and Development (OECD) which administers PISA worldwide.

Andreas Schleicher

Schleicher downplays the role that the PISA has played in shaming low-performing countries, preferring to stress the test’s role in mobilizing national leaders to care as much about teaching and learning as they do about economic measures like unemployment rates and workplace productivity. At the most basic level, PISA data has supported a range of conclusions, including that class size seems largely irrelevant to the learning experience and that what matters most in the classroom is “the quality of teachers, who need to be intellectually challenged, trusted, and have room for professional growth.”

Schleicher also views the PISA as a tool for liberating the world’s educational systems from their single-minded focus on subjects like science, reading and math and towards the kinds of “complex, interdisciplinary skills and mindsets” that are necessary for success in the future of work. We are afraid that human jobs will be automated but we are still teaching people to think like machines. “What we know is that the kinds of things that are easy to teach, and maybe easy to test, are precisely the kinds of things that are easy to digitize and to automate,” Schleicher says.

To help steer global education out of this rut, he has pushed for the design and administration of new, optional tests that complement the PISA. Change the parameters of the test, change the skills that are measured, and maybe the world’s education-based priorities will change too. Says Schleicher: “[t]he advent of AI [or artificial intelligence] should push us to think harder [about] what makes us human” and lead us to teach to those qualities, adding that if we are not careful, the world’s nations will continue to educate “second-class robots and not first-class humans.”

Schleicher had this future-oriented focus years before the PISA was initially administered.

In 1997, Schleicher convened a group of representatives from OECD countries, not to discuss what could be tested, but what should be tested. The idea was to move beyond thinking about education as the driver of purely economic outcomes. In addition to wanting a country’s education system to provide a ready workforce, they also wondered whether they could nurture young people to help to make their societies more cohesive and democratic while reducing unfairness and inequality. According to Quartz:

The group identified three areas to explore: relational, or how we get along with others; self, or how we regulate our emotions and motivate ourselves; and content, or what schools need to teach.

Instead of simply enabling students to respond to the demands of a challenging world, Schleicher and others in his group wanted national testing to encourage the kinds of skill building that would enable young people to change the world they’d be entering for the better.   

Towards this end, Schleicher’s team began to develop assessments for independent thinking and the kinds of personal skills that contribute to it. The technology around test administration enabled the testers to see how students solved problems in real time, not simply whether they get them right or wrong. They gathered and shared data that enabled national education systems to “help students learn better and teachers teach better and schools to become more effective.”  Assessments of the skill sets around independent thinking encouraged countries to begin to see new possibilities and want to change how students learn in their classrooms. “If you don’t have a north star [like this], perhaps you limit your vision,” he says.

For the past twenty years, Schleicher’s north stars have also included students’ quest to find meaning in what they are doing and to exercise their agency in determining what and how they learn. He is convinced that people have the “capacity to imagine and build things of intrinsic positive worth.”  We have skills that robots cannot replace, like managing between black and white, integrating knowledge, and applying knowledge in unique situations. All of those skills can be tested (and encouraged), along with the skill that is most unique about human beings, namely:

our capacity to take responsibility, to mobilize our cognitive and social and emotional resources to do something that is of benefit to society. 

What Schleicher and his testing visionaries began to imagine in 1997 has gradually been introduced as optional tests that focus on problem-solving, collaborative problem-solving and, most recently, so-called “global competencies” like open-mindedness and the desire to improve the world. In 2021, another optional test will assess flexibility in thinking and habits of creativity, like being inquisitive and persistent.

One knowledgeable observer of these initiatives, Douglas Archibald, credits Schleicher with “dramatically elevating” the discussion about the future of education. “There is no one else bringing together people in charge of these educational systems to seriously think about how their systems [can be] future proofed,” says Archibald. But he and others also see a hard road ahead for Schleicher, with plenty of resistance from within the global education community.   

Some claim that he is asking more from a test than he should. Others claim that his emphasis fosters an over-reliance on testing at the expense of other priorities. Regarding the “global competencies” assessment, for example, 40 of the 79 participating countries opted not to administer it. But Schleicher, much like visionaries in other fields, remains undaunted. Nearly half of the countries are exercising their option to assess “global competencies,” and even more are administering the other optional tests that Schleicher has helped develop. Maybe educators are slowly becoming convinced that the threat to human work in a white-water world is too serious to be ignored any longer.

A view from Kenneth Robinson’s presentation: “Changing Education Paradigms”

While Schleicher and his allies are in the vanguard of those who are using a test to prompt a revolution in education, they are hardly the only ones to challenge a teaching model that, for far too long, has only sought to produce a dependable, efficient and easily replaceable workforce. The slide above is from Sir Kenneth Robinson’s much-heralded (and well-worth your taking a look at) 2010 video called “Changing Education Paradigms.” In it, he also champions teaching that enables uniquely human contributions that no machine can ever replace.
 
Schleicher, Robinson and others envision education systems that prepare young people (or re-engineer older ones) for a complex and ever shifting world where no one has to be overwhelmed by the glut of information or the dynamics of shifting networks but can learn how to navigate today’s challenges productively. They highlight and, by doing so, champion teaching methods that help to prepare all of us for jobs that provide meaning and a sense of wellbeing while amplifying and valuing our uniquely human contributions.

Schleicher is also helping to modify our behavior by championing skills like curiosity about others and empathy that can make us more engaged members of our communities and commit us to improving them. Assessing these skills in national education tests says both loudly and clearly that these skills are important for human flourishing too. Indeed, this may be Schleicher’s and OECD’s most significant contribution. Their international testing is encouraging the skills and changes in behavior that can build better societies, whether they are based on the democratic values of the West or the more collective and less individual ones of the East. 

That is no small thing. No small thing at all.

This post is adapted from my March 10, 2019 newsletter.


Running Into the Future of Work

January 13, 2019 By David Griesing

We’ve just entered a new year and it’s likely that many of us are thinking about the opportunities and challenges we’ll be facing in the work weeks ahead. Accordingly, it seems a good time to consider what lies ahead with some forward-thinkers who’ve also been busy looking into the future of our work.
 
In an end-of-the-year article in Forbes called “Re-Humanizing Work: You, AI and the Wisdom of Elders,” Adi Gaskell links us up with three provocative speeches about where our work is headed and what we might do to prepare for it.  As he’s eager to tell us, his perspective on the people we need to be listening to is exactly where it needs to be:
 
“I am a free range human who believes that the future already exists, if we know where to look. From the bustling Knowledge Quarter in London, it is my mission in life to hunt down those things and bring them to a wider audience. I am an innovation consultant and writer, and…my posts will hopefully bring you complex topics in an easy to understand form that will allow you to bring fresh insights to your work, and maybe even your life.”
 
I’ve involuntarily enlisted this “free-range human” as my guest curator for this week’s post. 
 
In his December article, Gaskell profiles speeches that were given fairly recently by John Hagel, co-chair of Deloitte's innovation center, speaking at a Singularity University summit in Germany; Nobel Prize-winning economist Joseph Stiglitz, speaking at the Royal Society in London; and Chip Conley, an entrepreneur and self-proclaimed "disrupter," speaking to employees at Google's headquarters last October. In the discussion that follows, I'll provide video links to their speeches so you can consider what they have to say for yourselves, along with my take-aways from some of their advice. 
 
We are all running into the future of our work. As the picture above suggests, some are confidently in the lead while others of us (like that poor kid in the red shirt) may simply be struggling to keep up. It will be a time of tremendous change, risk and opportunity and it won’t be an easy run for any of us. 
 
My conviction is that forward movement at work is always steadier when you are clear about your values, ground your priorities in your actions, and remain aware of the choices (including the mistakes) that you’re making along the way. Hagel, Stiglitz and Conley are all talking about what they feel are the next necessary steps along this value-driven path.

1. The Future of Work – August 2017

When John Hagel spoke about the future of work at a German technology summit, he was right to say that most people are gripped by fear. We’re “in the bulls-eye of technology” and paralyzed by the likelihood that our jobs will either be eliminated or change so quickly that we will be unable to hold onto them. However, Hagel goes on to argue (persuasively I think) that the same machines that could replace or reduce our work roles could just as likely become “the catalysts to help us restore our humanity.”  
 
For Hagel, our fears about job elimination and the inability of most workers to avoid this looming joblessness are entirely justified.  That’s because today’s economy—and most of our work—is aimed at producing what he calls “scalable efficiency.”  This economic model relentlessly drives the consolidation of companies while replacing custom tasks with standardized ones wherever possible for the sake of the bottom line.
 
Because machines can do nearly everything more efficiently than humans can, our concerns about being replaced by robots and the algorithms that guide them are entirely warranted. And it is not just lower-skilled jobs like trucking that will be eliminated en masse. Take a profession like radiology: machines can already assess the data on x-rays more reliably than radiologists. More tasks that are performed by professionals today will also be performed by machines tomorrow. 
 
Hagel notes that uniquely human aptitudes like curiosity, creativity, imagination, and emotional intelligence are discouraged in a world of scalable efficiency, but (of course) it is in this direction that humans will be most indispensable in the future of work. How do we build the jobs of the future around these aptitudes, and do we even want to?
 
There is a long-standing presumption that most workers don't want to be curious, creative or imaginative problem-solvers on the job. We've presumed that most workers want nothing more than a highly predictable workday with a reliable paycheck at the end of it. But Hagel asks, is this really all we want, or have our educations conditioned us to fit (like replaceable cogs) into an economy that's based on the scalable efficiency of its workforce? He argues that if you go to any playground and look at how pre-schoolers play, you will see children's native curiosity, imagination and inventiveness before it has been bred out of them by their secondary, college and graduate school educations. 
 
So how do companies reconnect us to these deeply human aptitudes that will be most valued in the future of work? Hagel correctly notes that business will never make the massive investment in workforce retraining that will be necessary to recover and re-ignite these problem-solving skills in every worker. Moreover, the drive for scalable efficiency and cost-cutting in most companies will overwhelm whatever initiatives do manage to make it into the re-training room. 
 
Hagel’s alternative roadmap is for companies that are committed to their human workforce to invest in what he calls “the scalable edges” of their business models. These are the discrete parts of any business that have “the potential to become the new core of the institution”—that area where a company is most likely to evolve successfully in the future. Targeted investments in a problem-solving human workforce at these “scalable edges” today will produce a problem-solving workforce that can grow to encompass the entire company tomorrow.

By focusing on worker retraining at a company's most promising "edges," Hagel strategically identifies a way to counter the "scalable efficiency" models that will continue to eliminate jobs but refuse to make the investment that's required to retrain everyone. While traditional jobs will continue to be lost during this transition, and millions of workers with them, Hagel's approach ensures an eventual future that is powered by human jobs that machines cannot do today and may never be able to do. For him, it's the fear of machines that drives us to a new business model, one that re-engages in the workplace the humanity we lost in school.
 
I urge you to consider the flow of Hagel’s arguments for yourself. For more of his ideas, a prior newsletter discusses a Harvard Business Review article (which he co-wrote with John Seely Brown) about the benefits of learning that can “scale up.” A closely related post that examines Brown’s commencement address about navigating “the white-water world of work today” can be found here.
 
*My most important take-aways from Hagel's talk: Find the most promising, scalable edges of the jobs I'm doing. Hone the creative, problem-solving skills that will help me the most in realizing the goals I have set for myself in those jobs. Maintain my continuing value in the workplace by nurturing the skills that machines can never replace.

2. AI and Us – September 2018

Columbia University economist Joseph Stiglitz begins his talk at London’s Royal Society with three propositions. The first is that artificial intelligence and machine learning are likely to change the labor market in an unprecedented way because of the sheer extent of their disruption. His second proposition is that economic markets do not self-correct in a way that either preserves employment or creates new jobs down the road. His third proposition—and perhaps the most important one—is that there is an inherent “dignity to work” that necessitates government policies that enable everyone who wants to work to have the opportunity to do so.
 
I agree with each of these propositions, particularly his last one. So if you asked me, the way that Stiglitz was asked by a member of the audience at the end of his talk, whether he supported governments providing their citizens with "a universal basic income" to offset job elimination, as many progressives are proposing, his answer (and mine) would be "No." Instead, we'd argue that governments should be fostering the economic circumstances where everyone who wants to work has the opportunity to do so. It is this opportunity to be productive—and not a new government handout—that rises to the level of a basic human right.
 
Stiglitz argues that new artificial intelligence technologies, along with decades of hands-off government policies about regulating business (beginning with Reagan in the US and Thatcher in the UK), have been creating smaller "national pies" that are shared with fewer citizens. In a series of charts, he documents the rise of income inequality by showing how wages and economic productivity rose together in most Western economies until the 1980s and have diverged ever since. Labor's share of the pie has consistently decreased over this timeframe, and new technologies like AI are likely to reduce it to even more worrisome levels.
 
Stiglitz’ proposed solutions include policy making that encourages full employment in addition to fending off inflation, reducing the monopoly power that many businesses enjoy because monopoly restricts the flow of labor, and enacting rules that strengthen workers’ collective bargaining power. 
 
Stiglitz is not a spellbinding speaker, but he is eminently qualified to speak about how the structure of the economy and the policies that maintain it affect the labor markets. You can follow his trains of thought right into the lively Q&A that follows his remarks via the link above. For my part, I've been having a continuing conversation about the monopoly power of tech companies like Amazon and the impact of unrestricted power on jobs, in newsletter posts like this one from last April as well as on Twitter, if you are interested in diving further into the issue.
 
*My most important take-aways from Stiglitz’ remarks were as follows: since I care deeply about the dignity that work confers, I need (1) to be involved in the political process; (2) to identify and argue in favor of policies that support workers and, in particular, every worker’s opportunity to have a job if she wants one; and (3) to support politicians who advance these policies and oppose those who erroneously claim that when business profits, it follows that we all do.

3. The Making of a Modern Elder – October 2018
 
The pictures above suggest the run we’re all on towards the future of work. What these pictures don’t convey as accurately are the ages of the runners. This race includes everyone who either wants or needs to keep working into the future.
 
Chip Conley’s recent speech at Google headquarters is about how a rapidly aging demographic is disrupting the future workforce and how both businesses and younger workers stand to benefit from it. For the first time in American history, there are more people over age 65 than under age 15. With a markedly different perspective, Conley discusses several of the opportunities for companies when their employees work longer as well as how to improve the intergenerational dynamics when as many as five different generations are working together in the same workplace.
 
Many of Conley’s insights come from his mentoring of Brian Chesky, the founder of AirBnB, and how he brought what he came to call “elder wisdom” to not only Chesky but also AirBnB’s youthful workforce. Conley begins his talk by referencing our long-standing belief that work teams with gender and race diversity tend to be more successful than less diverse teams, which has led companies to support them. However, Conley notes that only 8% of these same companies actively support age diversity.
 
To enlist that support, he argues that age diversity adds tremendous value at a time of innovation and rapid change because older workers have both perspective and organizational abilities that younger workers lack. Moreover, these older workers comprise an increasingly numerous group, anywhere from age 35 at some Silicon Valley companies to age 75 and beyond in less entrepreneurial industries. What “value” do these older workers provide, and how do you get employers to recognize it?
 
Part of the answer comes from a changing career path that no longer begins with learning, peaks with earning, and concludes with retirement. For nearly all workers, your abilities to evolve, learn, collaborate and counsel others play roles that are continuously being renegotiated throughout your career. For example, as workers age, they may bring new kinds of value by sharing their institutional knowledge with the group, by understanding less of the technical information but more about how to help the group become more productive, and by asking "why" or "what if" questions instead of "how" or simply "what do we do now" in group discussions. Among other things, that is because older workers spend the first half of their careers accumulating knowledge, skills and experience and the second half editing what they have accumulated (namely, what is more and less important) given the perspective they have gained.
 
When you listen to Conley’s talk, make sure that you stay tuned until the Q&A, which includes some of his strongest insights.
 
*My most important take-aways from his remarks all involve how older workers can continuously establish their value in the workplace. To do so, older workers must (1) right-size their egos about what they don't know while maintaining confidence in the wisdom they have to offer; (2) commit to continuous learning instead of being content with what they already know; (3) become more interested and curious instead of assuming that either their age or experience alone will make them interesting; and (4) demonstrate their curiosity publicly, listen carefully to where those around them are coming from, and become generous at sharing their wisdom with co-workers privately. When they do, companies along with their younger workers will come to value their trusted elders.

* * *

 This has been a wide-ranging discussion. I hope it has given you some framing devices to think about your jobs as an increasingly disruptive future rushes in your direction. We are all running with the wind in our faces while trying to get the lay of the land below our feet in this brave new world of work.

Note: this post is adapted from my January 13, 2019 newsletter.

Filed Under: *All Posts, Continuous Learning, Entrepreneurship Tagged With: aging workforce, Ai, artificial intelligence, Chip Conley, dignity of work, elder wisdom, future of work, John Hagel, Joseph Stiglitz, labor markets, machine learning, monopoly power, value of older workers, work, workforce disruption, workforce retraining

Good Work Uses Innovation to Drive Change

July 29, 2018 By David Griesing Leave a Comment

Welcome to the “white-water world”—a world that is rapidly changing, hyper-connected and radically contingent on forces beyond our control.

The social environment where we live and work today:

– makes a fool out of the concept of mastery in all areas except our ability–or inability–to navigate these turbulent waters successfully (the so-called “caring” professions may be the only exception);

– requires that we work in more playful and less pre-determined ways in an effort to keep up with the pace of change and harness it for a good purpose;

– demands workplaces where the process of learning allows the tinkerer in all of us “to feel safe” from getting it wrong until we begin to get it right;

– calls on us to treat technology as a toolbox for serving human needs as opposed to the needs of states and corporations alone;  and finally,

– this world requires us to set aside time for reflection “outside of the flux” so that we can consider the right and wrong of where we’re headed, commit to what we value, and return to declare those values in the rough and tumble of our work tomorrow.

You’ve heard each of these arguments here before. Today, they get updated and expanded in a commencement address that was given last month by John Seely Brown. He was speaking to graduate students receiving degrees that they hope will enable them to drive public policy through innovation. But his comments apply with equal force to every kind of change–small changes as well as big ones–that we’re pursuing in our work today.

When you reach the end, I hope you’ll let me know how Brown’s approach to work relates to the many jobs that are still ahead of you.

Good Work Uses Innovation to Drive Change

John Seely Brown is 78 now. It seems that he’s never stopped trying to make sense out of the impacts that technology has on our world or how we can use these extraordinary tools to make the kind of difference we want to make.

Brown is currently independent co-chairman of the Center for the Edge, an incubator of ideas that’s associated with the global consulting firm Deloitte. In a previous life, he was the chief scientist at Xerox and the director of its Palo Alto Research Center (or PARC). Brown speaks, writes and teaches to provoke people to ask the right questions. He stimulates our curiosity by defining the world in simple, practical terms that are easy to understand but more difficult to confront. As a result, he also wants to share his excitement and optimism so that our own questioning yields solutions that make the most out of these challenges and opportunities.

He begins his commencement address with quotes from two books that frame the challenge as he sees it.

KNOWLEDGE IS TOO BIG TO KNOW

We used to know how to know. We got our answers from books or experts. We’d nail down the facts and move on. We even had canons . . . But in the Internet age, knowledge has moved onto networks. There’s more knowledge than ever, but it’s different. Topics have no boundaries, and nobody agrees on anything.  (from “Too Big To Know” by David Weinberger)

A WEB OF CONNECTIONS CHANGES EVERYTHING

The seventh sense is the ability to look at any object and see (or imagine) the way in which it is changed by connection–whether you are commanding an army, running a Fortune 500 company, planning a great work of art, or thinking about your child’s education. (from “The Seventh Sense” by Joshua Cooper Ramo)

These realities about knowledge and connection impact not only how we think (research, practice, and create) but also how we feel (love, hate, trust and fear). Brown analogizes the challenge to navigating “a white water world” that requires particular kinds of virtuosity. That virtuosity includes:

– reading the currents and disturbances around you;

– interpreting the flows for what they reveal about what lies beneath the surface; and

– leveraging the currents, disturbances and flows for amplified action.

In short, you need to gain the experience, reflexes and opportunism of a white-water rafter to make the most out of your work today.

Becoming Entrepreneurial Learners

To confront the world like a white-water rafter, Brown argues—in a kind of call to arms—that each graduate (and by implication, each one of us too) needs to be a person whose work:

Is always questing, connecting, probing.

Is deeply curious and listening to others.

Is always learning with and from others.

Is reading context as much as reading content.

Is continuously learning from interacting with the world, almost as if in conversation with the world.

And finally, is willing to reflect on performance, alone and with the help of others.

No one is on this journey alone or only accompanied by the limited number of co-workers she sees everyday.


Years before giving this commencement address, Brown used the “one room schoolhouse” in early American education as the springboard for a talk he gave about the type of learning environment we need to meet this “call to arms.” In what he dubbed the One Room Global Schoolhouse, he applied ideas about education from John Dewey and Maria Montessori to the network age. This kind of learning has new characteristics along with some traditional ones.

Learning’s aim both then and now “is making things as well as contexts,” because important information comes from both of them. It is not simply the result (the gadget, service or competence with spelling) that you end up with but also how you got there. He cites blogging as an example, where the blog post is the product but its dissemination creates the context for a conversation with readers. Similarly, in a one-room schoolhouse, a student may achieve his goal but only does so because everyone else who’s with him in the room has helped him. (I’ve been taking this to heart by adapting each week’s newsletter into a blog post so that, if you want to, you can share your comments with one another instead of just with me.)

On the other hand, learning in a localized space that’s open to global connections and boundless knowledge means that it’s better to “play with something until it just falls into place.” It’s not merely the problem you’re trying to solve or the change you’re trying to make but also creating an environment where discovery becomes possible given the volume of inputs and information. This kind of work isn’t arm’s length, but immersive. (I think of finger-painting instead of using a brush.) It allows you to put seemingly unrelated ideas, components or strategies together because it’s fun to do so and–almost incidentally–gives rise to possibilities that you simply didn’t see before. In Global Schoolhouses, “tinkering is catalytic.”

Because “time is money” in the working world, one of the challenges is for leaders, managers, coordinators, and teachers to provide “a space of safety and permission” where you can make playful mistakes until you get it right. Because knowledge is so vast and our connections to others so extensive, linear and circumscribed forms of learning simply can’t harness the tools at our disposal to make the world a better place.

Some of the learning we need must be (for lack of a better word) intergenerational too. Brown is inspired by the one room schoolhouse where the younger kids and the older kids teach one another and where the teacher acts as coach, coordinator and mentor once she’s set the table. In today’s workplace, Brown’s vision gets us imagining less hierarchical organizations, workers plotting the directions they’ll follow instead of following a manager’s directions, and teams constantly seeking input from all of the work’s stakeholders, including owners, suppliers, customers and members of the community where the work is being done. The conversation needs to be between the youngest and the oldest too. For the magic to happen in the learning space where you work, that space should be as open as possible to the knowledge and connections that are outside of it.

In his commencement address, Brown refers to Sherlock Holmes when describing the kind of reasoning that can be developed in learning collaborations like this.

[W]here Holmes breaks new ground is insisting that the facts are never really all there and so, one must engage in abductive reasoning as well. One must ask not only what do I see but what am I not seeing and why? Abduction requires imagination! Not the ‘creative arts’ kind but the kind associated with empathy. What questions would one ask if they imagined themselves in the shoes, or situation of another.

Here’s a video from Brown’s talk on the “Global One Room Schoolhouse.” It is a graphic presentation that covers many of the points above. While I found the word streams snaking across the screen more distracting than illuminating, it is well worth the 10 minutes it will take for you to listen to it.

There’s Cause for White-Water Optimism

We’re worrying about our work for lots of reasons today. Recent news reports have included these troubling stories:

– the gains in gross national product (or wealth) that were reported this week are not being shared with most American workers, which means the costs and benefits of work are increasingly skewed in favor of the few over the many;

– entire categories of work—particularly in mid-level and lower paying jobs—will be eliminated by technologies like advanced robotics and artificial intelligence over the next decade;  and

– the many ways that we’re failing to consider the human impacts of technologies because of the blinding pace of innovation and the rush to monetize new products before we understand the consequences around their use—stories about cell phone and social media addictions, for example.

Brown’s attempt to produce more white-water rafters who can address these kinds of challenges is part of the solution he proposes. Another part is to balance our legitimate concerns about the changes we’re experiencing with optimism and excitement about the possibilities as he sees them.

Brown closes his commencement address with a story about the exciting possibilities of new technology tools. It’s about how Artificial Intelligence (AI) can become Intelligence Augmentation (or IA). “[I]f we can get this right,” he says, “this could lead to a kind of man/machine virtuosity that actually enhances our humanness rather than the more dystopian view of robots replacing most of us.”

Brown witnessed this shift to “virtuosity” during the now legendary contest that pitted the greatest Go player in the world against AlphaGo, an artificial intelligence program. (Perhaps the world’s most complex game, Go has been played in East Asia for more than 2,500 years.)

There is a documentary about AlphaGo (trailer here) that I watched last night, and I agree with Brown that it is “stunning.” It follows at close range the team that developed the AlphaGo program, the first games the program played and lost, and the final match where AlphaGo beat the world champion in 4 out of 5 games. What Brown found most compelling (and shared with his graduates) were the testimonials and comments at the end.

Those who play the game regularly, as Brown apparently does, found the gameplay they witnessed to be “intuitive and surprising,” even “creative.” Passionate players who watched the human/machine interaction throughout felt it expanded the possibilities and parameters of the game, giving them “a different sense of the internal beauty of the game.” For the world champion himself, it was striking how much his own Go play improved after the epic match. Brown was so excited by these reports that he felt the 21st century actually began in 2016, when the championship matches took place. In his mind, it marked the date when humans and machines began to “learn with and from each other.”

Of course, Brown’s AlphaGo story is also about the entrepreneurial learning that produced not only an awe-inspiring product but also a context where literally millions had input in the lessons that were being learned along the way.

+ + +

The past year’s worth of newsletter stories have considered many of the observations that Brown makes above. If you’re interested, there are links to all published newsletters on the Subscribe Page. Here’s a partial list of topics that relate to today’s discussion:

– how technology influences the future of our work (9/13/17–why “small” inventions like barbed wire, modern paper and the sensors in our phones can be more influential than “big” ones like the smart phone itself; 10/1/17–how blockchain could monetize every job, big and small, where you have something of value that others want);

– how openness to “the new and unexplored” is key to survival in work and in life (8/20/17–working groups outside your discipline are better at “scaling up” learning in rapidly changing industries; 6/24/18–a genetic marker for extreme explorers has been found among the first settlers of the Western Hemisphere); and

– the value of playful tinkering (7/2/17–if you really want to learn, focusing less may allow you to see more; 8/27/17–how curiosity without formal preparation can win you a Nobel Prize in physics; and 10/17/17–the one skill you’ll need in the future, according to the World Economic Forum, is the ability to play creatively).

What John Seely Brown does in his June commencement address is to link these ideas (and others) into a narrative that’s filled with his own excitement and optimism. In my experience, the commencement address season is a particularly good time to find his kind of inspiration.

Filed Under: *All Posts, Being Part of Something Bigger than Yourself, Continuous Learning, Entrepreneurship Tagged With: Ai, AlphaGo, connectedness, connection, entrepreneurial learning, IA, innovation, John Seely Brown, learning, playful work, technology, tinker, too big to know, tools, transformational work, whitewater world
