“People want to see more local news,” Mark Zuckerberg explained today in his announcement of Facebook’s latest change to its algorithms. If a Facebook friend shares a local story on their timeline, it’s now more likely to show up on yours. Facebook is doing this, Zuckerberg explains, because local news helps people understand the issues in their communities, and knowing more helps them get engaged in addressing those issues. In other words, according to Facebook, the corporation is once again seeking to give us what we want, and moreover, according to Zuckerberg, this is going to be good for our society and probably even for our democracy. But the announcement leaves at least two important questions unanswered.

First, what will happen once Facebook and my local news sources find that I’m less likely to click on a story about the mayor’s budget report than on a story about my neighbor’s cute cats? Well, here’s what could happen. Local news outlets will need to maximize their reach on Facebook, so they will offer us more clickbait: more local cats, all the time.

Second: who gave Facebook the authority to decide what local news we’ll see in our feeds, anyway? You could argue that we grant them that authority every time we log on to the site. And Facebook itself has said that they’re leaving it up to us to decide which sources are trustworthy. But I don’t think you or I ever consciously decided to grant Facebook the right to make decisions about our news landscape. We didn’t elect Facebook’s officers, or set up a commission to decide which algorithms will best serve the public interest, or ask our government officials to appoint a board that could hold Facebook accountable if they don’t deliver news that will improve the quality of our communities.

Facebook didn’t set out to be a news distributor, let alone a vetter of news. Zuckerberg has repeatedly denied that Facebook is a media corporation or a news outlet. But when 67% of U.S. adults say that they get most of their news from Facebook, it’s time for us to deal with the fact that Facebook is, indeed, a major news distribution system. We may not have granted them the authority to make decisions about our news landscape, but they are in a position to exercise that authority. Facebook has effectively replaced the distribution system of local news that preceded it. And as ad money that once supported local news now goes to Google and Facebook, who shift the cost of content creation elsewhere while reaping the benefits that come with distributing that content, we shift even more authority to Facebook to shape our news ecosystem.

This isn’t the first time that Facebook has adjusted its algorithms in a way that affects the news you see. Last October, Facebook conducted an experiment in six countries, moving publishers’ posts to a secondary feed and leaving you with more content from friends. Overnight, engagement with the Facebook pages of Slovakia’s media outlets fell by 60%. What’s more, we know that Facebook’s data science team, when faced with evidence that the news feed curation algorithm modestly accelerates the polarization that makes it less likely for us to encounter diverse news sources, framed its findings as what Christian Sandvig has termed the “it’s not our fault” study. So we can guess what they’ll say about what happens when our access to quality local news is negatively affected by their algorithms.

Maybe our expectations regarding our social media are finally changing. According to a report on CBS News, the 2018 Edelman Trust Barometer survey found that trust in social media platforms is down, but trust in journalism has risen. Maybe regulators in Brussels and London will figure out a way to hold Facebook accountable for its role in undermining national elections, as some are hoping. Following a suggestion from the Center for Economic and Policy Research’s Dean Baker, we could find ways to fine Facebook and require them to notify all affected users who might have seen unverified news. Or maybe we in the U.S. will finally have the political will to demand that our platforms become more transparent and governed by public rather than private entities. If not, I have some great photos of cats that I know one of my local news organizations would love for you to see.


Originally published on Platypus, the CASTAC blog, January 9, 2018

Platypus Editor’s note: This is a jointly-authored post by Lynn Schofield Clark, Professor and Chair of the Department of Media, Film and Journalism Studies at the University of Denver, and Adrienne Russell, Mary Laird Wood Professor in the Department of Communication at the University of Washington. 

The Federal Communications Commission vote to end net neutrality generated weeks of stories last month — good stories — and the topic will fuel many more good stories in the months and years to come. Those stories at the intersection where technology, policy, politics and ideology meet are testament in large part to the way savvy activist communities have framed the story of net neutrality and pushed it into the news cycle. Activist-experts have made net neutrality news stories easy to write. They have articulated why internet regulatory policy should matter to the public, how it affects creative and entrepreneurial endeavor, and how it has fueled but could also hobble the kind of digital innovation that has shaped daily life for hundreds of millions of Americans.

We haven’t enjoyed the same kind of coverage on the rise of “fake news,” a similarly complex story. “Fake news” is a digital-age phenomenon, a rhetorical device, a business story, a political scourge, a foreign policy threat, and more. It is as juicy a story as it is complex, and yet the mainstream media has failed to fully take it up — and, without help, the mainstream media never will fully take it up.


A day before Thanksgiving, Facebook issued an update on its plan to increase transparency about the advertisements that appear on its massively influential media platform. The news came in response to a request Congress also put to Google and Twitter following a hearing on foreign interference in the 2016 U.S. presidential election. “It is important that people understand how foreign actors tried to sow division and mistrust using Facebook,” an unnamed author wrote on behalf of the company.

How important was it to Facebook? It will come as no surprise that the unsigned Facebook news release was buried under holiday distractions. The Associated Press and Politico published quick pieces that highlighted the company’s plan to create a portal where users could see for themselves which of the pages they “liked” or followed last year were produced by the Internet Research Agency, a now infamous and defunct Kremlin propaganda farm. TechCrunch, a leading technology news blog, also covered the story, criticizing Facebook for not going far enough in the cause of transparency. Four days later, Slate republished a story from an online forum for national security law and policy that described Facebook’s effort as a “pathetic fig leaf.”

Indeed, The Daily Beast reported that, during the special U.S. Senate election in Alabama last month, the same kind of “shadowy, undisclosed advertising content” that last year hyped the Trump candidacy poured into the Facebook accounts of Alabama residents. The ads this time promoted failed candidate Roy Moore, the longtime controversial far-right judge who has been credibly accused of pedophilia.

Facebook noted in response that it plans to roll out its transparency updates this summer ahead of the U.S. midterm elections.

The “fake news” storyline tied to social media advertising will extend for many more months and perhaps years to come. Meantime, many other fake news developments will continue to unfold every day. Will mainstream reporting on the topic move beyond description and reaction and instead focus coverage onto the larger forces and entrenched systems fueling fake news?

A dark assessment and a sinister twist

For better or worse, the fake news phenomenon — and the responses it engenders — will help shape the future of communication. It is of course a much bigger story than the president, for one, would have us believe. The term itself is frustrating. It’s a contradiction, as UNESCO’s Guy Berger has noted. “If it’s news, then it isn’t fake, and if it is false, then it can’t be news,” as he put it. The term is now overloaded. It acts as a stand-in for many complex features of the current information ecosystem — the digital space as a global information war zone, the effects of iterative, arcane and enormously influential media technologies, the rise of empowered but also media-illiterate publics, the booming news market, and its shadow, the failing journalism industry, to name just a few.

In all its permutations, fake news is the air we breathe as members of what University of Pennsylvania media studies Prof. Victor Pickard called the “misinformation society.” In an essay for the Trump-era Big Picture symposium being held by Public Books, Pickard names three factors contributing to the unfolding information crisis: anemic funding for accountability journalism; the growing dominance of infrastructure and platforms that through their business models support the spread of misinformation; and the “regulatory capture” metastasizing across governments, where agencies are staffed with regulators who support and often benefit from the commercial interests they are tasked to oversee. Those factors among others have combined to create a media landscape that seems especially vulnerable to abuse, where hucksters thrive and bogus research and disinformation land upon vast fields of fertile soil.

Many will agree with Pickard’s dark assessment. We would add a sinister twist. The information crisis he outlines is a story that some of the most popular forms of journalism today — specifically television news and print and online outlets that adhere to legacy practices — seem particularly poorly equipped to tell. Too often, coverage of the fake news information crisis fails to shed light on key issues — on the values embedded in new tools and infrastructures, for example, on the way the information society as it is being built exacerbates social and economic vulnerabilities, on how we might take action to make things better and, perhaps most significantly, on the tension between the interests of corporate media and the interests of the public that journalism is supposed to serve. That news outlets struggle to report powerfully on such topics will come as no surprise to anyone who has been paying attention to changes in the news industry over the past two decades.

As many have noted, journalism today is being redesigned on the fly to suit distribution systems created by the digital-tech entrepreneurs behind Google, Apple, Facebook and Amazon. Nearly four decades into the digital revolution, journalism business models are still struggling to find stable footing. Local journalism continues to wither, billionaires gobble up larger shares of the market, and corporate consolidation continues apace. The fact is, the sad state of affairs today only stokes the tension long at the center of the journalism enterprise in the United States between what’s good for business, and what’s good for the public. There are plenty of examples of how that tension has shaped journalism reporting and publishing over the decades, but the way it has shaped news-cycle reporting and the selection of events and people deemed newsworthy is particularly relevant to the quality of coverage we’re likely to see on the information crisis.

Journalism values

Over the course of decades, researchers have explored ways certain values have shaped the procedures and structures of journalism. Indeed, the way the Fourth Estate operates is now so familiar that gamesmanship is factored into the operation. Communication shops have long worked a sort of see-saw relationship with reporters. Corporate and government spokespeople attempt to “manage” news cycles by in part pushing some stories and burying others. They know that reporters — now perhaps more than ever before — prize immediacy and that news outlets privilege stories that are timely. That’s why Facebook posted its transparency release the day before Thanksgiving and succeeded in garnering mostly short pieces published on a day when relatively few people would read them.

Reporters tend also to play up conflict and highlight individual actors. Thus, as Stanford University Political Science Professor Shanto Iyengar has shown, they cover social issues as a series of episodes, where a particular instance of conflict stands in for the larger problem of, for example, systemic racial bias in the criminal justice system. The episodic approach tends to encourage attribution of both the problem and the resolution to the subjects who appear in the reported stories. The effect is that complex issues tied to technology developments and media regulation and public policy formulation, for example, appear either in the breathless tone of the unfolding episode or in a style that focuses on key players, often at the expense of the bigger picture. These tendencies, combined with journalism’s habit of privileging bureaucratically credible sources, in this case tech industry leaders and politicians, create coverage that papers over enduring problems and works to downplay the power of larger societal conditions.

These weaknesses are exacerbated by the fact that journalists also tend to cover stories about journalism uncritically. Outlets readily acknowledge that huge data sets describing reader and viewer habits are transforming newsrooms, but they tend to focus on the ways the data makes it easier for them to engage with their audiences and come to editorial decisions. Individual writers and editors certainly grouse about having to produce clickbait stories, but few outlets have constructed a beat around the way user data and related technologies may be working to expand the territory falling into the misinformation zone. There is a blindness tied to the way increasing reliance on user preference data has come to structure the overall industry and its business models. In other words, data tracking and analysis have become tools for journalism’s survival within a capitalist support system, and questioning the values behind this new profit model rarely appears on the publishing agenda — not least because U.S.-based journalism has long treated capitalism itself as mere common sense.

The professional and business-related biases in favor of timeliness, episodic storytelling, spotlighting individuals who occupy positions of authority, and delivering short and upbeat stories on technology issues — all of these practices and inclinations combine to steer journalism away from larger questions of how technology and news industry business models are mixing in the information ecosystem to shape new technologies and values and affect larger social and political realities.

Technology values

There are values embedded in technology. This isn’t widely understood, partly because journalism often uncritically relies on the perspectives of tech-industry leaders, and tech entrepreneurs and developers aren’t being paid to consider the way what they’re building influences how we think and act. A growing body of work on values in design supports the premise that communication technology is no neutral medium but that it powerfully asserts social, political, and moral values. Certain design elements enable or restrict how technologies may be used and by whom. Design can cater to male or female users, straight or queer users, rich or poor users. It can lead us to believe some data is important and other data irrelevant, and so on.

The net neutrality debate has underlined this quality. The internet has so far been intentionally designed as a so-called dumb network, because its transmission control protocol (TCP) makes no decisions about content — every IP address is treated equally on the network; no IP address receives preferred treatment. The results of that design have been dramatic. For three decades we have enjoyed at the same quality and speed sites created by happy cat people on a limited budget and those created by multi-billion-dollar companies like Wal-Mart and CNN. The protocol fosters innovation because creating a new application does not require anyone to change the network. The World Wide Web runs atop the internet as it was made; so do voice and video services, and so do social media platforms. None of the inventors of these technologies had to modify the internet protocol. By contrast, the phone system is a “smart network,” whose services depend on the central switches owned by the phone company. There is little room for innovation on the phone network. “Sure, you can make a phone look like a cheeseburger or a banana, but you can’t change the services it offers… centralized innovation means slow innovation,” wrote technologist Andreas M. Antonopoulos. “It also means innovation directed by the goals of a single company. As a result, anything that doesn’t seem to fit the vision of the company that owns the network is rejected or even actively fought.”

Have news media translated that view for use in the debate over fake news? Mark Zuckerberg and Sheryl Sandberg consistently claim that Facebook is a technology company, not a media company, and in doing so, they lean on the impression that technology is a neutral medium. They sidestep the responsibility that would come from acknowledging that there exists a variety of content that the Facebook platform prefers to generate and circulate. From that standpoint, “Facebook can claim that any perceived errors in Trending Topics or News Feed products are the result of algorithms that need tweaking, artificial intelligence that needs more training data, or reflections of users,” as University of Southern California Communication Professor Mike Ananny put it. “Facebook [can] claim that it is not taking any editorial position.” Indeed, Facebook, Twitter and Google have moved between blaming human and algorithmic error for various public debacles. Twitter blamed human error when a New York Times account was blocked on November 26, and again when President Trump’s account was blocked for 11 minutes earlier in the month. But it claimed an algorithm was at fault when a hashtag generated in the aftermath of the Las Vegas shootings misspelled the name of the city. Facebook blamed humans for bias in its trending topics, then blamed algorithms for allowing advertisers to market products to “jew haters.” The back-and-forth blame game disorients people following the “fake news” story and obscures the fact that algorithms, like human employees, act according to the values cooked into their innards. Humans with values make humans who think like they do, for good and bad. The same goes for machines, for software and for algorithms.

Properly informed members of the public might more easily recognize that effective algorithm-driven machine learning requires enormous amounts of user data and that, for that reason, the hungry algorithms result in aggressive surveillance of users. This is true beyond social media. User surveillance is now driving the business models of companies across industries, including the journalism industry, such as it is. Rather than relying on ad buys, media companies sell user data. Is it a surprise that, as our research demonstrates, news organizations have not done a particularly good job covering issues related to surveillance and receding privacy norms or, in other words, surfacing the downsides of an economy that sees personal data as the new oil — an immensely valuable asset, the extraction of which comes with enormous downsides?

Tech company spokespeople simply are not the best sources to consult on the wider implications of algorithm design. Frank Pasquale, Professor of Law at the University of Maryland and author of The Black Box Society, has argued that algorithmic accountability is “a big tent project” that “requires the skills of theorists and practitioners, lawyers, social scientists, journalists and others.” Pasquale’s recent testimony before the U.S. House of Representatives combined the expertise of all of these fields to identify examples of troubling collection, bad or biased analysis, and discriminatory uses of algorithm-based technology — all of which work to exacerbate existing social and economic vulnerabilities. To cover the information crisis, journalists must deepen their sources on technology.

Exacerbated vulnerabilities

Just as is the case with values in design, episodic coverage that emphasizes the perspectives of Silicon Valley executives does not tend to address issues of inequality. Revelatory incidents tend to be reported in isolation, which masks the way they are structural in nature. If the FCC rollback of net neutrality is not successfully challenged, people who can’t afford to pay for faster internet speeds will find themselves at an even greater disadvantage than they experience today. They may be priced out of the innovation game, blocked from entering the startup society. Digital-age free expression and activist organizing may also be limited. Carmen Scurato, director of policy and legal affairs for the National Hispanic Media Coalition, put it this way: “Dismantling net neutrality opens the door for corporations to … impose a new tool to access information online. […] For Latinos and other people of color, who have long been misrepresented or underrepresented by traditional media outlets, an open internet is the primary destination for our communities to share our stories in our own words — without being blocked by powerful gatekeepers.”

There is little news coverage of how algorithms and other forms of artificial intelligence can be discriminatory, especially toward groups that are already vulnerable. While investigative journalists at outlets like ProPublica and the New York Times have exposed such discrimination, it is easy to miss coverage of, for instance, the way credit card companies collect data on mental health to predict future financial distress, which means a visit to a marriage counselor might translate as a downgraded credit rating; the way Facebook algorithms programmed to deliver housing ads screen out people of color; the way Google algorithms deliver ads for services like background checks for criminal records alongside searches that include a typically African-American name; or the way prices for products or enticements to buy change depending on whether you make a purchase from a low-income zip code.

The same kind of user data fuels misinformation campaigns. Algorithms have been used to tap into people’s intellectual and psychological vulnerabilities. Online evidence of affiliation with racist groups or resentment toward certain ethnic groups signals that a user will be receptive to made-up stories that fuel such bias. The same stories are directed away from audiences likely to call them out as false. Young people of color demonstrate heightened awareness of the ways data can be used against them, according to our research. This is perhaps what makes them, as a general group, the most savvy but also the most self-censoring users of digital media. It is clear that increased circulation of hate speech pushes already marginalized groups further to the margins. The well-documented lack of diversity among tech creators, journalists and social media influencers threatens to reinforce the viewpoints and experiences of those in positions of privilege at the expense of everyone else.

An expanded journalism

There remains a world of information beyond news, real and fake. There is a growing movement to hold the taskmasters of artificial intelligence accountable. Research collectives like NYU’s AI Now Institute, MIT’s Data for Black Lives, Cardiff University’s Data Justice Lab, and the cross-institutional Public Data Lab all do work on the real-world implications of algorithms and related technologies. A report by AI Now highlights cases where these systems are being used without first being tested or assessed for bias and inaccuracy. The report calls for an end to so-called black box predictive systems at public institutions — such as the criminal justice and education systems. A report by the Public Data Lab provides a useful guide to tracing the production, circulation and reception of “fake news” online.

Sites like The Conversation, Public Books, and Medium, and the opinion pages of newspapers like the New York Times and the Guardian, host essays by scholars studying dubious data production and AI practices. Jenny Stromer-Galley, Professor and Director of the Center for Computational and Data Sciences at Syracuse University, has written on simple ways Facebook could reduce fake news; Siva Vaidhyanathan on how Facebook benefits from fake news; Zeynep Tufekci on Zuckerberg’s “preposterous defense” of Facebook after the election.

The point is, academic and journalistic investigations, policy recommendations, informed criticism and thoughtful commentary on the information crisis are rumbling under the cable news noise. Mainstream journalists have to spend too much energy working a system littered with obstacles that keep them from meeting the challenges posed by the information crisis. The task is falling to outside thinkers, to academics, activists, technologists and others to shed light on the true contours and nature of the fake news story.


By Regina Marchi

We hear a lot about fake news today, but sadly the publication of false news stories for political or economic profit has been around for centuries, often resulting in devastating consequences. From as early as the 1400s in Italy and other parts of Europe, fake news stories claiming that Jewish people murdered Christian babies and drank their blood were published to incite and justify anti-Semitism. This fake news phenomenon reached frenzied proportions in post-WWI Germany, helping to pave the way for the Holocaust.

Here in the US, newspaper industry mogul William Randolph Hearst published an onslaught of fake news stories about Spain in the 1890s to build public support and justification for the Spanish-American War. Hearst recognized that a war would sell lots of his newspapers and would catapult him to national prominence. (For more on this, see the PBS documentary Crucible of Empire.)

Though he did not have a correspondent on the ground in Cuba as the Cuban rebellion against Spain heated up in 1895, Hearst shamelessly attached Havana datelines to bylined stories manufactured in New York. Today, Hearst’s fake news stories, which, among other things, blamed the Spanish government for the February 15, 1898 sinking of the USS Maine without any evidence, are widely acknowledged to have helped cause the Spanish-American War, considered to be the first “media war.”

In the early 1950s, fake news stories accusing Guatemala’s democratically elected president, Jacobo Árbenz, of being a Communist were published in major US newspapers at the behest of the Boston-based United Fruit Company. Operating banana plantations on the majority of Guatemala’s arable land at the time, United Fruit’s top leaders (who had powerful friends in the Eisenhower Administration and among US newspaper publishers) wanted to end President Árbenz’s plans for agrarian reform. False news stories about Árbenz being a “Communist” convinced the US Congress and the public to support a violent CIA-led military coup d’état in Guatemala in 1954, which ousted President Árbenz and ushered in decades of brutal military regimes from which Guatemala has never politically recovered. (For more on this, see Bitter Fruit: The Untold Story of the American Coup in Guatemala, by Stephen Schlesinger and Stephen Kinzer.)

In a less deadly example of fake news, Orson Welles’ 1938 radio theater broadcast “War of the Worlds” was subsequently reported by many newspapers to have caused numerous heart attacks and suicide attempts by listeners who were allegedly terrified of an alien invasion (these claims were later found to be mostly false). As Adrian Chen notes in his New Yorker article “The Fake News Fallacy” (Sept. 4, 2017), newspapers exaggerated the War of the Worlds panic in an attempt to discredit the upstart medium of radio, which was becoming the dominant source of breaking news in the 1930s. Chen writes: “Newspapers wanted to show that radio was irresponsible and needed guidance from its older, more respectable siblings in the print media, such ‘guidance’ mostly taking the form of lucrative licensing deals and increased ownership of local radio stations.”

Today, fake news proliferates on the Internet, with social media being deployed as weapons of information warfare. In the weeks after the 2016 presidential election, Facebook and Twitter faced widespread criticism for their failure to halt the spread of fake news about Hillary Clinton on their platforms. The problem was not simply that people were able to spread lies but that the digital platforms were designed in ways that made lies especially transmissible, spreading faster than fact checkers could debunk them.

Supposedly “neutral” social media platforms use personalized algorithms to feed users news based on past personal preferences. But this makes it less likely that people will see news that contradicts their worldviews. Today, there is a growing sense among observers that the championing of openness by Facebook and other social media platforms is at odds with the public interest, something we write about in Young People and the Future of News and in the book chapter “Storytelling the Self into Citizenship: How Social Media Practices Facilitate Adolescent and Emerging Adult Political Life,” forthcoming in the edited volume A Networked Self: Birth, Life, Death (Ed. Zizi Papacharissi).

Cambridge University Press has released the book! Here is a news release:



Lynn Schofield Clark



Young people trust friends on social media over news organizations, according to new book

Young People and the Future of News finds that in a time of declining news industries, young people are informing one another through social media—for better or worse

DENVER, September 30, 2017 — With social media, young people are finding out about news events from friends. When they are outraged or drawn into what is happening in current events, they are not only reading and viewing, but also sharing, immersing themselves in, and sometimes even creating news. And it is changing the way young people define news. That’s according to the authors of Young People and the Future of News, Lynn Schofield Clark and Regina Marchi, two journalism professors who studied diverse U.S. young people and their news habits for 10 years.

“Rather than relying on news organizations to tell them what is newsworthy, they’re deciding for themselves—and usually that decision is influenced not by where the news came from, but who told them about it,” says Clark, professor and chair in the media, film and journalism studies department at the University of Denver.

Facebook, Instagram, and YouTube have skyrocketed as locations for news among young people during the past decade, according to numerous national and international studies. Meanwhile, local newspapers and television news have been laying off staff—and struggling to survive.

The shift from trusted journalistic sources to social media as young people’s main source of news isn’t good news for the organizations of traditional journalism, the book acknowledges. But members of immigrant communities and communities of color have long felt unheard by and underrepresented within those organizations, according to those interviewed in the book. The book also traces ways that young people in these communities are sharing the news that matters to them outside of formal news channels and away from public scrutiny.

“We used to think of newspapers as playing a central role in informing people of the news needed to participate in a democracy,” Clark says. “But what we found is that now young people are socializing one another into politics. This happens as they draw on the connective capacities of social media as well as on practices we once associated with journalism, like holding people in power accountable.”

“This means that we will need to rethink journalism industries as well as the ways that we educate people both for journalism and for democratic engagement more generally,” says Marchi, professor in the Department of Journalism and Media Studies at Rutgers University. “Across the country, funding for high school journalism clubs and civics classes has been slashed, but students need opportunities to explore the relationship between journalism and democracy now more than ever,” she says.

Clark and Marchi interviewed more than 200 young people and several dozen parents, educators, and other adults working with youth in four U.S. urban areas and compared their findings with national studies on related topics from several research centers. They also considered the platforms that are delivering the news to most young people.

“News is different today,” Marchi said. “And our news distributors—namely Facebook, YouTube and Instagram—need to be held accountable for shaping what’s available and for helping young people and all of us to evaluate it.” The book offers a number of suggestions on this.

The authors praise recent efforts to alert communities to untruths circulating in social media, such as Facebook’s “disputed” tag, which indicates that fact-checkers such as Snopes and PolitiFact have vetted certain information and found it questionable. But they also note that the public needs greater transparency from the social media platforms to counteract the tendency for such platforms to make decisions that benefit them commercially but that may be harmful to public interests. Young people need greater awareness both of the ways that the algorithms of social media platforms work, and of how to be actively involved in shaping the emerging media and political environment, they argue.

“Young people don’t consume news the way adults did (or do), but this doesn’t mean that they’re uninterested in what’s going on,” Clark says. “In fact, research is suggesting that overall, this is a very engaged generation.” The book’s findings confound prior theories that presumed that casual online sharing by youth might be dismissed as mere ‘slacktivism.’ “In contrast,” says Marchi, “we suggest that such casual sharing and even observation-only participation can serve an important role as a form of early civic engagement.”

Young People and the Future of News: Social Media and the Rise of Connective Journalism is set for publication on September 15 by Cambridge University Press. For more information, visit https://www.cambridge.org/core/books/young-people-and-the-future-of-news/E73A053188B9C194ADF02FEEA8F94574

About the Authors

Bio of Lynn Schofield Clark

Clark is Professor and Chair of the Department of Media, Film and Journalism Studies and Director of the Estlow Center for Journalism and New Media at the University of Denver. In addition to Young People and the Future of News, she is author of The Parent App: Understanding Families in a Digital Age (Oxford UP, 2013) and coauthor of Media, Home and Family (Routledge, 2004). Her first book, From Angels to Aliens: Teenagers, the Media, and the Supernatural (Oxford UP, 2005), was named Best Scholarly Book by the Ethnography Division of the National Communication Association. She teaches courses on diversity in news and media studies.

Bio of Regina Marchi

Marchi is Associate Professor in the Department of Journalism and Media Studies at Rutgers University. Her first book, Day of the Dead in the USA: The Migration and Transformation of a Cultural Phenomenon, won the 2010 James W. Carey Award for Media Research and an International Latino Book Award in the category of “Best History/Political Book.”


A new study from the Pew Research Center’s Journalism & Media group finds that 74% of people of color get news on social media sites, up from 64% in 2016.

The study also finds that 67% of US adults get news via social media, and for the first time, more than half of Americans over age 50 (55%) say that they get their news on social media sites.

Facebook dominates as the site for news: about 45% of all U.S. adults get news there, and 18% now get news from YouTube.

A quick shout-out to those journalists who are placing themselves in harm’s way to cover the devastation in Puerto Rico, as Frances Robles and colleagues at the New York Times have done; to those who earlier covered the evacuation of the western coast of Florida, as Tampa’s Fox 13 journalists did; and to the Houston Chronicle’s journalists, who continue to cover the devastation in and around south Texas and Louisiana. Journalists are creating helpful resources such as CNN’s storm trackers and the Orlando Sentinel’s infographic explaining what a hurricane is. And then of course there are the journalists doggedly following the daily machinations of the Trump administration at The Washington Post, those finding innovative ways to use data at USA Today to reveal how power and influence operate on Trump’s golf courses, and those fact-checking rumors at the New York Times, among other places. There are also interesting innovations such as Buzzfeed’s Outside Your Bubble feature, which directs your attention to the ways that various audiences across the internet are responding to the same stories. Let’s hope that stories like these both inform and encourage all of us to act as we can to address overwhelming problems and call our leaders to account.


It’s hard to talk about journalism’s positive contributions today without recognizing the imbalances that are shaping our experiences of news. What do we do about the fact that there are large swaths of the U.S. population for whom facts, and even fact-checking, are met with resistance because those facts contradict their worldview, causing cognitive dissonance and a backfire effect? What about the reality of a hybrid news system in which there are so many purported sources for news today that the young, and indeed many of all ages, have difficulty separating fact from fiction, as one study by the Stanford History Education Group suggests? Today it’s easier than ever for us to practice what social scientists term selective exposure. And after all, we engage in selective exposure because, as anthropologist Pascal Boyer is quoted as explaining in Julie Beck’s article on facts in The Atlantic, “Having social support, from an evolutionary standpoint, is far more important than knowing the truth about some facts that do not directly impinge on your life.” This means that we are predisposed to seek out facts that support the worldview that we and those most important to us hold – and it’s a threat to our very identity to try to deal with facts that challenge that worldview.

All of this leaves me with more questions than answers when it comes to the topic of how today’s young people are participating in changing the definition of “news” through their uses of social media. Yes, it’s a problem that they aren’t relying on traditional sources of evidence and verification and are instead tending to trust what their friends say rather than what is said by (fill in the blank with your own preferred source of news, recognizing that adults don’t agree on this, either). But for me, it’s important to begin with the recognition that this reliance on friends and those we trust for information verification is not a new situation, since social scientists say that as humans we’ve always been inclined to seek confirmation for the way we see the world.

The implications, then, are twofold: first, we need to help create more support for cultures that value an evidence-based reality. This entails teaching young people how to evaluate information as well as how to grapple with ambiguities, such as when facts contradict their worldviews. Because dealing with contrary information can threaten deeply held identities, it involves doing the kind of teaching and dialogue that is sensitive and empathic and that seeks to widen circles of identification rather than harshly reinforcing identity barriers. Also, based on my experiences in working with young people, such conversations involve trying to link new information with what young people know to be true from their own life experiences – especially when that lived knowledge is itself related to injustices that they and their family members and friends know about firsthand. We as educators and leaders have to learn to find points of agreement and commonality so that we can support some aspects of what another person takes as truth even as we present information that challenges it. But we have to do this while also recognizing and acknowledging past and present shared pain.

The second implication is related to the first, and it has to begin with a mind-bender: part of the reason that we think there is so much support for an “alternative facts” universe is that it makes for a great news story. So many of us are craving information about what is going on that journalists and commentators are obliging our curiosity – as they should – by researching the topics of fake news and alternative facts. Some of this is highly entertaining, like The Guardian’s Top Ten Alternative Facts for a Post-Truth World, which highlights films and books such as 1984 and Wag the Dog that seemed to presage today’s strangeness. But all of this attention feeds the sense that this is a large and influential trend that says something about what people actually believe.

Thus, the second implication actually grows out of the slim margin of good news related to journalism. The fact that we are experiencing a dramatic resurgence of interest in and support for high-quality journalism, as well as surging interest in emergent voices speaking from the evidence of their lived realities, suggests that many, and probably even the majority of people in the U.S., prefer an evidence-based reality to a reality that’s manufactured in support of a worldview. This then reframes the question of the future of news from one focused on human failings to one of system failure. How is it that our systems enable the views of a minority to threaten those of the majority?

And this is where we have to turn to questioning things like the rise of Facebook as a news distributor and propagator of fake news, as well as an electoral college system and a media system that inescapably favor coverage of the latest political crisis over coverage of boring topics like the laws and policies that for decades have been quietly yet effectively restricting voters from exercising their right to vote, or of how various groups and people are working outside of the spotlight to address those discrepancies in access to democracy. And those are topics for future blog posts.