The real story behind the FCC’s study of newsrooms

(This opinion piece by Lewis Friedland, Vilas Distinguished Achievement Professor at the University of Wisconsin–Madison, originally appeared in the Washington Post on Friday, February 28, 2014.  We repost it in full here with the permission of the author.)

Sometimes research takes on a life of its own and becomes more like a Rorschach test for a national policy controversy. That’s what’s happened to a review of the literature on the critical information needs of American communities that colleagues from around the country and I conducted for the Federal Communications Commission in July 2012. The recommendations of the review informed a proposed pilot study in Columbia, S.C., of what critical information needs, if any, citizens have and whether those needs are being met in our rapidly changing media environment.

To conservative media from Fox News to Rush Limbaugh, this was an attempt to reintroduce the now-lapsed Fairness Doctrine and a bid by President Obama to take control of America’s newsrooms. Some former journalists and media critics apparently agreed. Still others took a more nuanced view: this may not have been a government plot, but it would be a waste of money, because either we already know what these needs are, or there aren’t any, or, if there are, we can’t know what they are.

In the end, the underlying theme was: we already know the answers. Americans either have no critical information needs, or none that the market is not already meeting or cannot meet. Don’t do research. Don’t ask these questions.

Almost none of the critics (save one) appeared to have actually read the original review or the proposed study. The FCC called for the literature review because, in a rapidly changing information environment, it wanted (and was mandated) to understand whether Americans have critical information needs; if so, what those needs are; and how policymakers and the public would know whether they are being met.

As most Monkey Cage readers know, the literature review is one of the most basic procedures in the social sciences. If you want to understand a problem (or even whether there is a problem), you gather all of the existing evidence, review it, identify the most important issues, and then, if warranted, suggest further research. And that’s what we did. We identified about 1,000 peer-reviewed articles in political science, communication, economics, sociology, urban studies, health, education, and other fields that might bear on the concept of critical information needs, winnowed these to 500, reviewed each, and reported on them. Our review built on previous studies conducted by the FCC and the Knight Commission on the information needs of communities.

We also outlined a plan for additional research, including studies of whole communities to see whether the needs we found in the areas of risk communication, health, education, the environment, economic development, civic, and local political information were being met. Our report was presented to and peer-reviewed by scholars at the FCC in July 2012. The full review and bibliography were published on the FCC website for anyone to see. The FCC then funded Social Solutions Inc. (SSI) to pull together scholars from multiple disciplines to discuss how to conduct a limited study of whether critical information needs were being met in local media ecologies. (I participated in that research design meeting.) SSI proposed a research design based on that meeting and the criteria set out by the FCC. Largely because of limited funding, the FCC reduced that research to a small pilot study in Columbia, S.C., to see if such a study was viable.

The proposed pilot had three parts. To find out whether community information needs existed and to what degree, surveys, interviews, and focus groups would be conducted with a broad cross-section of the public. A content analysis of newspaper, broadcast, and Internet outlets would determine whether, and how well, the information being provided matched people’s expressed needs. Finally, a third component, a “media market census,” would “determine whether and how FCC-regulated and related media construct news and public affairs to determine” critical information needs. One aspect of this was a voluntary questionnaire asking newsroom decision-makers about their own perceptions of those needs.

This last component became the spark that set off the firestorm. When the National Association of Broadcasters came out in opposition to the proposed pilot, it focused on the voluntary questions for newsroom decision-makers. Republican members of the House of Representatives used much of the same language as the NAB in writing to the FCC, and much of it was repeated by FCC Commissioner Ajit Pai (a Republican appointed by President Obama) in his op-ed in the Wall Street Journal.

Examining the relation of news standards to news content is a staple of communication research, going back at least 60 years; there have been dozens, if not hundreds, of such studies. That said, it was probably a mistake to include one in this study, if only because FCC sponsorship could raise the appearance of a conflict of interest. Accordingly, the FCC recently dropped that portion of the study while deciding how to proceed. This was a good and responsible decision, because it clears away the red herring of government control of newsrooms and allows us to focus on the real question: whether the information needs of Americans are being met.

For much of the 20th century, Americans received the information they depended on through newspapers. The decline of newspapers as economic institutions is now a truism. But whether the information they provided is no longer needed, or is being supplied by some alternative source (usually asserted to be the Internet), is not clear. In a 2011 quantitative study of local news provision in 100 markets, Matthew Hindman found that there is only a trickle of local news on the Web, and most of it is simply repackaged from newspapers or broadcast. He concluded that while there may be some consumer substitution between online and traditional news sources for national or commodity news, this is not true for local news (contrary to an assumption made by Joe Uscinski in his earlier post in this space).

Why does this matter? Because more and more of the basic institutional needs of Americans depend on local information markets. For example, local school systems are rapidly expanding school choice and charter schools. When newspapers had robust education beats, they would regularly (or at least annually) report on the quality of specific local schools, giving parents at least some chance to get good information about where to send their children. But as education reporting declines, there is no evidence that the Internet is taking its place. For several years, one of the highest-achieving charter schools in Washington, D.C., had trouble meeting its enrollment quotas, suggesting that a robust information market does not exist even in the capital.

Of course, this is an anecdote, and that’s precisely the point. There is much about community information needs that we just don’t know, and the only way to know more is through high-quality research. That is exactly what the FCC is trying to do before making critical decisions on newspaper-broadcast cross-ownership that could further reduce the production of local community information, or on allowing the expansion of national cable concentration and greater control of local broadband markets that, for most Americans, are poorly performing, overpriced duopolies. To fail even to ask the questions would be an abdication of its responsibility to the public interest.

