Introduction
What is the Social Media Election Policy Tracker?
The Social Media Election Policy Tracker is a tool created to chronicle a timeline of selected social media companies’ evolving policies that have shaped the information environment around U.S. elections and campaigns from 2016 to the present.
For the sake of this project, we define election policies as the rules, guidelines, and other actions social media platforms have established to manage the use of their services during election cycles and to govern election-related information. This encompasses a wide range of issues related to the electoral process. Examples include efforts to promote voter registration, provide accurate election information, combat foreign election interference, and mitigate the spread of false or misleading information about elections, as well as policies governing political advertising. Our scope also extends beyond formal policies to include internal changes made by social media companies that directly affect election information environments, such as adjustments to election integrity team staffing and the introduction of specialized tools like AI-generation labels for synthetic election-related content.
The Tracker allows you to explore, compare, and filter different categories of election policies over time and by social media platform. We believe this will be especially helpful for journalists and researchers seeking to understand trends and patterns in how social media companies have adapted their policies and actions in response to evolving pressures and challenges.
Context
Why Did We Make the Social Media Election Policy Tracker?
Tracking the evolution of social media election policies over time is a valuable way to understand how these platforms' actions align with major news, social shifts, and political events. The foreign influence attempts surrounding the 2016 U.S. presidential election served as a wake-up call regarding the power social media companies have over the information, news, and advertisements people see or don’t see, all of which can affect voting behavior. Our Tracker begins at this moment, when social media companies first started facing widespread pressure from Congress and their users to address these issues. Companies responded with efforts to self-regulate via a series of content moderation policies and platform changes during the 2018 midterms and 2020 elections, which are documented in this Tracker.
Despite these changes, social media platforms were hotbeds of false information about the 2020 election, including the “Stop the Steal” conspiracy theories of voter fraud. That discord spilled into real-life violence during the U.S. Capitol riot on January 6, 2021. Some platforms decided to limit content in the immediate aftermath of the attack; others did not.
Social media platforms have faced a backlash for their decisions, caught between accusations of censorship, bias, and political motivations for policies on the one hand, and concerns that they are stepping back from earlier useful—if imperfect—efforts on the other. For instance, critics worry that companies are understaffing the teams intended to combat election interference and mis/disinformation. Our work seeks to track the ebbs and flows of these decisions over time.
As we approach the 2024 U.S. election cycle, we anticipate further, potentially significant, changes in how platforms handle election-related content. The Tracker documents these shifts by capturing specific policy changes, strategic platform design choices, and key decisions, alongside other newsworthy events that shape the broader information landscape. While some policies are U.S.-focused, many reflect broader platform strategies with global implications, which is especially important in a year when nearly half the world's nations will hold national elections. We created the Tracker to support those interested in understanding information landscapes, elections, and the complex relationship between technology companies, platforms, and democratic processes.
Methods
How Did We Create the Social Media Election Policy Tracker?
The data presented in this Tracker starts in the wake of the 2016 U.S. presidential election and is drawn from a review of social media companies' blog posts; corporate reports and presentations made available to the public; Congressional testimony; date-stamped Terms and Conditions; and media coverage. Where possible, we traced the evolution of company policies through archived websites (using the Internet Archive's Wayback Machine), although this was complicated by changing URLs. We anticipate URLs will continue to change, underscoring the importance of maintaining this archive of policies.
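For readers who want to retrieve archived versions of platform policy pages themselves, the sketch below shows one way to query the Internet Archive's Wayback Machine availability API from Python. It is an illustrative example only, not part of the Tracker's own tooling; the policy-page URL and timestamp shown are hypothetical placeholders.

```python
# Minimal sketch (not the Tracker's own tooling): look up an archived copy of a
# policy page via the Wayback Machine "availability" API. The page URL and
# timestamp below are hypothetical placeholders.
import json
import urllib.parse
import urllib.request
from typing import Optional

def closest_snapshot(page_url: str, timestamp: str) -> Optional[str]:
    """Return the archive URL of the snapshot closest to `timestamp` (YYYYMMDD),
    or None if the page has never been archived."""
    query = urllib.parse.urlencode({"url": page_url, "timestamp": timestamp})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None

if __name__ == "__main__":
    # Example: what did a (hypothetical) policy page look like around the 2020 election?
    print(closest_snapshot("https://transparency.fb.com/policies/", "20201101"))
```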
We chose to include the following social media platforms for several reasons:
- Number of Users: YouTube and Facebook are the most widely used online platforms, followed by Instagram. We group these by parent company: Google owns YouTube, and Meta owns Facebook, Instagram, and Threads.
- Notoriety and Impact: Twitter (now X) and TikTok have fewer users but have been particularly newsworthy given concerns about foreign influence and demonstrable impacts on elections. Some other platforms, such as Pinterest and LinkedIn, have more users but less impact in this context. We note, however, that they are also less studied.
- Diversity in Sample: We also include examples of significantly smaller apps, such as Gab and Parler, that are marketed and used as alternatives to the larger platforms and are widely known channels for circulating election-related information. They often display a different orientation toward policy than the more widely adopted platforms and offer helpful points of comparison.
Mechanics
How Do I Use the Social Media Election Policy Tracker?
The Tracker offers two viewing options (a full timeline and a platform comparison timeline) as well as filters by policy type. For each policy record you will find a title describing the policy, its source(s), and expandable details summarizing the policy change as described in the source materials.
Full timeline: This view shows all selected categories on a single timeline. Here, you can filter by platform as well as by category. Please note that some records fall under more than one category and are tagged accordingly. The categories include:
- Content Moderation: Policies and tools for addressing misinformation, hate speech, and other forms of harmful content.
- Staffing: Changes in election integrity teams.
- Tools: New tools or assessment techniques developed to address election information, election integrity, or political advertising.
- Election Interference: Policies and actions aimed at combating election interference or voter suppression.
- Voting: Actions aimed at promoting voting, along with other general voting matters not related to election interference.
- Political Advertising: Policies and tools governing political ads.
- Major Platform Actions: Platform-wide changes or decisions regardless of the specific categories they might affect; meant to capture high-level or wide-scope platform actions.
Comparison timeline: Here you can view side-by-side comparisons of up to three platforms. To change the platforms, click the down arrow next to the platform name. If you wish to add a third platform for comparison, click the +Add button in the upper right.
Limitations and Implications
What Else Should I Consider When Using the Tracker?
We faced many constraints when compiling the data displayed in this Tracker, including limits on what information we could find, difficult decisions about what to include and exclude, and the quality of the information available to us regarding these policies. Indeed, it is important to understand that many of the decisions in the Tracker were reported and framed by the companies that run the social media platforms, meaning there was a specific rhetorical bent to the release of information.
Content moderation policies cover a wide range of concerns, often touching on spam, hate speech, monetization, sexual exploitation, bullying, and more. Some of these policies, although critical, are outside the direct domain of elections, though they can certainly be applied to election-related posts. Others, such as policies pertaining to fraud, spam, misinformation, and inauthentic behavior, apply to election content to varying degrees. We used our judgment as to what falls within our stated parameters but recognize that the inclusion or exclusion of specific policies and events is subject to debate.
One challenge in documentation is the quiet retirement of policies and tools. Social media companies make a practice of publicly rolling out new tools and functions aimed at election integrity. However, they are not consistently transparent about the discontinuation of these tools (there are exceptions, such as Twitter's postmortem of the 2020 election). Unless we found documentation that a tool had been discontinued, we assume it will continue to be used in future election cycles. Sometimes, this assumption will prove false.
Given these constraints and limitations, we recognize that this Tracker is not exhaustive. Nonetheless, we have endeavored to create a resource that is both thorough and focused on how social media giants have evolved their policies toward election-relevant content and integrity.
The Tracker is still in development as we continue to research and add social media election policy records. We intend to update it monthly.
Questions or things we’ve missed? We invite you to reach out to us at cyber@pitt.edu.