SUMMARY: With reports of “fake news” during the 2016 election and the President of the United States referring to the media as “the enemy of the people,” journalists are facing new questions about public trust in news organizations. For instance, in today’s highly charged political climate, which news sources are trusted and which ones are not? And to what degree does media trust explain individual decisions to financially support news organizations? This report—commissioned on behalf of the Trusting News project by the Reynolds Journalism Institute (RJI) at the University of Missouri—sheds light on this topic. The goal of the Trusting News project is to better understand elements of trust and distrust in the relationship between journalists and nonjournalists. Toward this end, the Trusting News project worked with 28 newsrooms to collect data from media audiences across the United States. This report describes the data and summarizes the results of the statistical analyses.


Data collection

Data were collected in February and March 2017 using an online survey made available to users (N = 8,728) of the digital media platforms of twenty-eight different newsrooms across the United States. Newsrooms included were Annenberg Media, Ball State Daily News, Casper Star-Tribune, Cincinnati Enquirer, Coloradoan, Columbia Missourian, Dallas Morning News, Denver Post, Evergrey, Fort Worth Star-Telegram, Fresno Bee, Jacksboro Herald-Gazette, Kansas City Star, KUT, Lima News, Minneapolis Star Tribune, NBC, Ogden Standard-Examiner, Rains County Leader, San Angelo Standard-Times, Skagit Publishing, Springfield News-Leader, St. Louis Magazine, St. Louis Public Radio, Steamboat Pilot & Today, USA TODAY, WCPO, and WDET. Participation was strictly voluntary—no compensation was provided. Most newsrooms mentioned the survey on their websites and social media accounts; some also mentioned it in print and on air. For the most part, newsrooms made the survey available around the same time. However, the duration and level of participation did vary, as will be addressed below.


Due to unbalanced participation rates across newsrooms, it is possible that a single newsroom with a high response rate could systematically bias statistical analyses. Several steps were taken to address this concern. First, in addition to having the names of the newsrooms associated with each observation, zip codes were reported by nearly all respondents (99.6%) in the sample. Although not perfect, a spatial representation of the zip codes like the one presented in the figure below provides evidence of heterogeneous geographical coverage above and beyond what one might otherwise expect after only observing the number of responses for each of the newsrooms. Second, weights were calculated assuming it would be more desirable to have an equal number of responses from each newsroom. Group-level means were examined for a number of different cross-sections, and no discernible pattern distinguishing the weighted and unweighted samples emerged. Finally, in addition to the linear regression models reported in the following section, multilevel models were also estimated to directly model variability explained by differences between newsrooms rather than differences between individuals. As was the case with the survey weights, the results appeared consistent across all statistical solutions.
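The report does not publish its weighting formula; a minimal sketch of one standard approach consistent with the description above—giving every newsroom an equal effective share of the pooled sample—is shown below. The data frame, column names, and toy response counts are hypothetical, not the project’s actual data.

```python
import pandas as pd

def newsroom_weights(df, group_col="newsroom"):
    """Post-stratification-style weights that equalize each newsroom's
    effective contribution: target share per newsroom = N / K, so the
    weight for a respondent from newsroom j is (N / K) / n_j."""
    n = len(df)                      # total respondents, N
    k = df[group_col].nunique()      # number of newsrooms, K
    counts = df[group_col].map(df[group_col].value_counts())  # n_j per row
    return (n / k) / counts

# Hypothetical toy sample: three newsrooms with unequal response counts
df = pd.DataFrame({
    "newsroom": ["A"] * 6 + ["B"] * 3 + ["C"] * 1,
    "trust":    [4, 5, 3, 4, 5, 4, 2, 3, 3, 5],
})
df["w"] = newsroom_weights(df)

# After weighting, each newsroom's weights sum to N / K, so each
# contributes equally to weighted group-level means.
print(df.groupby("newsroom")["w"].sum())
```

Weighted means computed with `w` can then be compared against unweighted means across cross-sections, mirroring the check described in the text.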

Figure: Scatter plot of zip codes colored by newsroom