By the authority granted to me as President under the Constitution and U.S. law, the following is ordered:
Section 1: Policy
Free speech is a fundamental pillar of American democracy. The Founding Fathers safeguarded this essential right through the First Amendment. The ability to express and debate ideas is the basis for all other freedoms enjoyed by a free society.
In a nation that has long valued open expression, it is unacceptable for a small number of online platforms to control what speech Americans can access and share on the internet. Such actions contradict American democratic principles. When major social media companies suppress viewpoints they oppose, they wield an alarming amount of influence. At that point, they no longer serve as neutral platforms but should instead be considered active content publishers.
The rapid expansion of online platforms in recent years raises critical concerns about how First Amendment principles apply to modern communication technologies. Today, millions of Americans rely on social media and digital platforms to stay informed, connect with others, and express opinions. In many ways, these platforms now serve as the digital equivalent of the public square in the 21st century.
Twitter, Facebook, Instagram, and YouTube wield significant influence: they can shape public perception, restrict or remove content, and control the flow of information. Their decisions determine what users can and cannot see, shaping public discussion of major events.
As President, I have emphasized the importance of free and open debate in the digital space. Just as discussions thrive in classrooms, town halls, and homes, the same principles should apply online. Open dialogue is essential to preserving democracy.
Many online platforms have engaged in selective censorship that harms public discourse. Tens of thousands of Americans have raised concerns about platforms flagging content as inappropriate even though it did not violate any stated terms of service. Others have reported sudden, unexplained policy changes that appear designed to suppress certain viewpoints. In many instances, content and accounts have been removed without notice, explanation, or any avenue of appeal.
Twitter has taken a selective approach to labeling tweets, displaying clear political bias in its decisions. Reports indicate that no other politician’s tweets have been flagged in the same manner. Just last week, Representative Adam Schiff continued to mislead followers by pushing the long-debunked Russian Collusion narrative, yet Twitter chose not to label those posts. Meanwhile, the individual overseeing "Site Integrity" has openly expressed political bias in personal tweets.
While these platforms apply inconsistent and unfounded rules to restrict free speech domestically, many continue to profit from and amplify propaganda from foreign governments, including China. One U.S. company, for instance, developed a search engine for the Chinese Communist Party that censored terms like "human rights," concealed unfavorable data, and enabled user tracking for surveillance. Additionally, other companies have accepted paid advertisements from the Chinese government that spread misinformation about the mass detention of religious minorities, facilitating human rights violations. These platforms have also allowed Chinese officials to spread false claims about the origins of COVID-19 and suppress voices supporting pro-democracy protests in Hong Kong.
As a nation, we must encourage and safeguard a variety of perspectives in today's digital space, ensuring that every American has a platform to be heard. Online platforms must be held to standards of accountability and transparency, with standards and tools in place to uphold the integrity and openness of public discussion and the right to free expression.
Section 2: Safeguards Against Online Censorship
(a) The United States is committed to establishing clear rules that support open and free discussion on the Internet. Central to those rules is the legal immunity granted by Section 230(c) of the Communications Decency Act (47 U.S.C. 230(c)). The government aims to clarify the scope of this immunity so that it is not misused. The protection should not extend to platforms that claim to promote free speech but instead use their control over digital communication channels to suppress particular viewpoints, thereby restricting open debate.
Section 230(c) was originally enacted in response to early legal rulings holding that if an online platform removed some content, it could be classified as a "publisher" of all the content on its site, making it liable for issues such as defamation. The purpose of the provision is evident in its title: it grants limited liability protection to interactive computer service providers, such as online platforms, when they engage in "Good Samaritan" efforts to block harmful content.
Congress aimed to protect these platforms when they removed material deemed harmful, particularly content that could be dangerous to minors, so that they would not hesitate to act against harmful posts. The provision was also meant to serve the broader goal of maintaining the Internet as, in the words of 47 U.S.C. 230(a)(3), "a forum for a true diversity of political discourse." The protections granted under this law should always be interpreted with these original intentions in mind.
Subparagraph (c)(2) specifically outlines protections against civil liability. It states that interactive computer service providers cannot be held liable for restricting access to content they consider obscene, excessively violent, harassing, or otherwise objectionable, as long as they act in good faith. U.S. policy aims to prevent the misuse of this provision, ensuring that legal protections are not extended to platforms that suppress viewpoints under false pretenses rather than genuinely removing harmful content.
Section 230 was not designed to allow a small number of corporations to dominate online conversations under the claim of fostering open discussion, only to then shield themselves from liability when they selectively silence opinions they oppose. If a platform removes or limits content in a way that does not align with subparagraph (c)(2)(A), it is engaging in editorial decision-making rather than neutral moderation. As a result, such platforms should no longer benefit from the limited liability protections of subparagraph (c)(2)(A) and should instead be treated like any traditional publisher or editor, making them accountable for their actions just as non-digital publishers are.
(b) To implement the policy outlined in subsection (a), all executive departments and agencies must ensure their interpretation and enforcement of section 230(c) align with its original, limited intent. They should take all necessary steps to uphold this interpretation. Additionally, within 60 days of this order, the Secretary of Commerce, in coordination with the Attorney General and acting through the National Telecommunications and Information Administration (NTIA), must submit a petition for rulemaking to the Federal Communications Commission (FCC). This petition should urge the FCC to promptly introduce regulations that clarify:
(i) How subparagraphs (c)(1) and (c)(2) interact, specifically defining the conditions under which an interactive computer service provider that restricts content in ways not explicitly covered under subparagraph (c)(2)(A) may also lose protection under subparagraph (c)(1). The latter ensures only that providers are not treated as publishers or speakers of third-party content they host; it does not shield them from accountability for their own editorial choices.
(ii) The specific circumstances under which restricting access to content is not considered "in good faith" under subparagraph (c)(2)(A) of section 230. This includes determining whether actions fail to meet the "good faith" standard if they are:
(A) Misleading, unjustified, or inconsistent with a provider’s stated terms of service; or
(B) Taken without offering sufficient notice, a clear explanation, or a fair chance for affected users to respond;
(iii) Any additional regulatory measures that the NTIA determines would help fulfill the objectives outlined in subsection (a) of this section.
Section 3: Preventing Federal Funding for Platforms That Limit Free Speech
(a) The head of each executive department and agency must assess that agency’s spending on advertising and marketing directed toward online platforms. This evaluation should include total expenditures, a list of the platforms receiving federal funds, and the legal authorities available to restrict such funding where necessary.
(b) Within 30 days of this order, each agency head must submit a report on these findings to the Director of the Office of Management and Budget.
(c) The Department of Justice will analyze the speech restrictions enforced by the online platforms identified in the report described in subsection (b). It will evaluate whether these platforms practice viewpoint discrimination, mislead consumers, or engage in other questionable practices that could make them unsuitable channels for government communication.
Section 4: Federal Review of Unfair or Deceptive Practices
(a) The United States upholds the principle that major online platforms, including Twitter and Facebook, should not limit lawful speech. As key facilitators of modern communication, these platforms play a vital role in the exchange of ideas and information.
The Supreme Court has recognized that social media sites serve as a digital public square, offering individuals one of the most effective ways to share their voices. These platforms have become essential tools for civic engagement, including communication with elected officials. They serve as critical spaces for public discourse and the free exchange of opinions.
(b) In May 2019, the White House introduced the Tech Bias Reporting tool, allowing Americans to report instances of online censorship. Within weeks, over 16,000 complaints were submitted, alleging that online platforms took action against users due to their political beliefs. These complaints will be forwarded to the Department of Justice and the Federal Trade Commission (FTC) for review.
(c) The FTC will evaluate whether any actions violate laws against unfair or deceptive business practices under section 45 of Title 15, United States Code. This includes determining if platforms covered under Section 230 limit speech in ways that contradict their public statements about content moderation policies.
(d) For large online platforms that serve as key spaces for public discussions, including Twitter, the FTC will assess whether complaints reveal potential legal violations related to the policies outlined in section 4(a) of this order. Additionally, the FTC will explore creating a report summarizing these complaints and making it publicly available in accordance with relevant legal guidelines.
Section 5: State Oversight of Unfair or Deceptive Practices and Anti-Discrimination Laws
(a) The Attorney General will create a working group to explore how state laws against deceptive or unfair practices can be enforced against online platforms. The group will also draft model legislation for states that lack legal protections against such practices. State Attorneys General will be invited, to the extent permitted by law, to participate in discussions and offer input.
(b) Complaints referenced in Section 4(b) will be shared with this group as permitted by law. Additionally, the group will gather publicly accessible information regarding:
(i) Users facing increased scrutiny based on their interactions or the accounts they follow.
(ii) Algorithms designed to suppress content or limit visibility based on political views or affiliations.
(iii) Unequal enforcement of platform policies, such as permitting accounts tied to the Chinese Communist Party or other anti-democratic entities to engage in behavior that would typically be restricted.
(iv) Use of third-party organizations, including contractors, media companies, and individuals with known biases, to review content.
(v) Restrictions on the ability of users with particular viewpoints to monetize content compared to others in similar situations.
Section 6: Legislative Proposal
The Attorney General will draft a legislative proposal aimed at furthering the policy objectives outlined in this order.
Section 7: Definition
For the purposes of this order, "online platform" refers to any website or application that enables users to create and share content, participate in social networking, or function as a general search engine.
Section 8: General Provisions
(a) This order does not override or interfere with:
(i) Any authority granted by law to federal agencies or their leadership.
(ii) The responsibilities of the Director of the Office of Management and Budget concerning budgetary, administrative, or legislative matters.
(b) The implementation of this order must comply with applicable laws and depend on available funding.
(c) This order does not establish any legal or enforceable rights or benefits for any individual or entity against the United States government, its agencies, officials, employees, or any other person.