AI Bill of Rights
One of the greatest challenges to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools limit opportunities and restrict access to essential resources and services. These issues are well-documented. In the U.S. and globally, systems designed to improve patient care have sometimes proven unsafe, ineffective, or biased. Algorithms used in hiring and credit assessments have been found to reinforce existing inequalities or introduce new forms of bias and discrimination. Additionally, unchecked data collection on social media has been misused to limit opportunities, invade privacy, and monitor individuals' activities, often without their awareness or consent.
These negative impacts are serious, but they are not unavoidable. Automated systems have led to significant advancements, from improving agricultural productivity and predicting storm patterns to diagnosing diseases with greater accuracy. These technologies influence decisions across industries, while data-driven innovations continue to transform the global economy. With American ingenuity at the forefront, these tools have the potential to reshape society and improve lives.
However, this progress must not come at the cost of civil rights or democratic values – core American principles that President Biden has emphasized as fundamental to his Administration. On his first day in office, he directed the entire Federal government to work toward eliminating inequality, ensuring fairness in decision-making, and actively promoting civil rights, equal opportunity, and racial justice. The President has spoken out on the pressing threats to democracy and has consistently urged action to protect civil rights, including privacy rights, which he has described as the foundation for many other rights that people often take for granted.
To support President Biden’s vision, the White House Office of Science and Technology Policy has outlined five principles to guide the development, application, and oversight of automated systems to safeguard the American public in the era of artificial intelligence. The five major principles below offer essential guidance whenever automated systems significantly impact people's rights, access to opportunities, or essential services.
Safe and Effective Systems
You should be protected from systems that are unsafe or ineffective. Automated systems must be developed with input from diverse communities, experts, and stakeholders to identify risks, concerns, and potential impacts. Before deployment, these systems should undergo thorough testing, risk assessment, and ongoing monitoring to ensure they are both safe and effective for their intended purpose. They should also be designed to prevent harmful outcomes, including unintended consequences, and to comply with industry-specific standards. If a system fails to meet these safety measures, it should not be deployed or should be removed from use.
Automated systems should not be created in a way that could knowingly or predictably put individuals or communities at risk. Instead, they should be designed to proactively prevent harm, even from unintended but foreseeable uses or consequences. Additionally, protections should be in place to prevent the misuse of irrelevant or inappropriate data during development, implementation, and continued use. Reusing such data should not compound potential harm.
Independent assessments should be conducted to confirm that a system is both safe and effective. These evaluations should document steps taken to minimize risks. Whenever possible, the results of these evaluations should be made publicly available to ensure transparency and accountability.
Why this Principle is Important
While technology is being developed to address a wide range of challenges, its increasing use can sometimes lead to deployment in areas where its effectiveness has not been fully tested or proven. In some cases, these systems do not perform as intended, resulting in significant and unjustified harm. Automated systems often pull data from other sources, including historical records, which can introduce irrelevant information into decision-making processes. This can lead to unintended consequences, particularly when past biases carry over into new applications.
There are also instances where technology is deliberately designed to cause harm, such as tools created to enable stalking. In other situations, intended or unintended uses result in unexpected dangers. Many of these harms are preventable, and efforts are being made to address them. Some companies have taken proactive steps by integrating ethical reviews into key development decisions, while others have identified and reduced risks through thorough pre-deployment testing and continuous monitoring. Government agencies at various levels have public consultation processes that can be used to evaluate new automated systems, and existing product development and safety standards already help prevent some risks to the public. However, these protective measures are not consistently applied, and their use remains limited. Expanding proactive safeguards could build on these existing efforts, strengthen trust in automated systems, and improve public safety. Innovators need clear guidelines that encourage technological growth, while the public should be protected from unsafe outcomes. Everyone benefits when automated systems are designed, tested, and consistently reviewed to ensure they function correctly and do not cause predictable harm.
- A proprietary model designed to predict sepsis in hospitalized patients was implemented in numerous hospitals across the country. However, an independent study found that the model performed worse than its creators claimed. Additionally, it triggered too many unnecessary alerts, causing "alert fatigue" among medical staff.
- On social media, Black users who responded to racist messages with criticism or counter-speech had their own posts removed when automated moderation systems failed to differentiate between hate speech and responses aimed at exposing or challenging it.
- A tracking device originally created to help people find lost items has been misused by stalkers to track individuals without their consent. After release, the manufacturer introduced alerts to notify users if a device was following them and added periodic sounds to the device. However, not all phones receive the notification, and the device remains a safety concern.
- An algorithm designed to guide police deployment was found to repeatedly send officers to the same neighborhoods, even when those areas did not have the highest crime rates. This resulted from a feedback loop where previous arrest data and past algorithmic predictions influenced future decisions.
- AI-powered software capable of generating non-consensual, digitally altered nude images has rapidly spread. These tools, which require no technical expertise, have become a growing form of image-based abuse, disproportionately affecting women. As technology advances, the manipulated images are becoming more realistic and harder to detect. Whether or not the images are genuine, the harm experienced by victims is significant, affecting their personal lives, careers, and mental health.
- A company installed AI-powered cameras in delivery vans to monitor driver safety. However, the system wrongly penalized drivers when other vehicles cut them off or when incidents outside their control occurred. As a result, some drivers were unfairly denied bonuses due to inaccurate assessments.
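The police-deployment example above describes a feedback loop: patrols follow past arrest records, and arrest records follow patrols. A minimal toy simulation can make the mechanism concrete. This is a hypothetical sketch, not the actual system; it assumes two areas with identical true crime rates and patrols allocated in proportion to historical arrest counts.

```python
import random

random.seed(0)

# Hypothetical two-area model: the true crime rates are identical,
# but patrols are allocated in proportion to *past recorded arrests*,
# so the areas patrolled more heavily generate more arrest records.
true_crime_rate = [0.10, 0.10]   # identical underlying rates
arrests = [5, 4]                 # a small initial imbalance in the data

for day in range(200):
    total = sum(arrests)
    # Allocate 10 patrols per day in proportion to historical arrests.
    patrols = [round(10 * a / total) for a in arrests]
    for area in range(2):
        # Each patrol records an arrest with probability equal to the
        # (identical) true crime rate: more patrols, more records.
        arrests[area] += sum(
            1 for _ in range(patrols[area])
            if random.random() < true_crime_rate[area]
        )

# The arrest record ends up reflecting patrol history, not any real
# difference in crime between the two areas.
print(arrests)
```

Because the allocation rule reinforces whichever area the data already favors, the initial imbalance tends to persist in the record even though the underlying rates never differed.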
Algorithmic Discrimination Protections
You should not experience discrimination from automated systems, which must be designed and used fairly. Algorithmic discrimination happens when these systems cause unfair treatment or negative effects based on factors such as race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other legally protected classification. In some cases, this type of discrimination may violate existing laws.
To prevent algorithmic discrimination, those involved in designing, developing, and implementing automated systems must take proactive and ongoing steps to ensure fairness. These efforts should include conducting equity assessments during system design, using diverse and representative data, and preventing the use of demographic proxies that can reinforce bias. Accessibility for people with disabilities must also be prioritized at every stage, from design to deployment. Additionally, pre-launch and ongoing testing should be conducted to detect and address disparities, with clear oversight to ensure fairness.
Independent evaluations and transparent reports, such as algorithmic impact assessments, should be completed to assess bias and document mitigation efforts. Whenever possible, these findings should be publicly shared to confirm that protections are in place and that the system operates equitably.
Why this Principle is Important
There is substantial evidence that automated systems can reinforce existing inequalities and create unfair outcomes. When data does not account for systemic biases in society, it can lead to unintended but serious consequences. For example, facial recognition technology has been linked to wrongful and discriminatory arrests, hiring algorithms have contributed to biased hiring decisions, and healthcare algorithms have downplayed the severity of certain conditions in Black patients. Discriminatory practices embedded in artificial intelligence and other automated systems affect multiple industries and areas of daily life.
While automation has the potential to drive innovation and progress, it is crucial to integrate protections against algorithmic discrimination into every stage of design, deployment, and use. Some companies, non-profits, and government agencies are already taking steps to reduce these risks. Certain companies have implemented bias testing as part of their product development, and in some cases, these tests have led to product adjustments or prevented the launch of harmful systems. Federal agencies have been working on standards and guidelines to ensure fairness in the use of automated systems. Additionally, non-profits and private organizations have developed best practices for auditing and assessing impacts to detect bias and enhance transparency. However, more action is needed to safeguard the public from algorithmic discrimination and to ensure that automated systems are designed and used fairly.
The protections in place against discrimination in everyday life must also extend to digital environments. Basic safeguards should prevent bias, discrimination, and misuse of technology to ensure fair treatment in critical areas such as hiring, loan approvals, healthcare, and interactions with the criminal justice system. Addressing these concerns should also go beyond existing regulations to evaluate the broader impact of automated systems on historically underserved communities and create proactive protections that benefit them.
- A loan underwriting system that used nontraditional factors such as education and work history was found to disproportionately charge higher refinancing rates to applicants who attended Historically Black Colleges and Universities (HBCUs), even when controlling for other financial factors.
- A hiring algorithm trained on data from a company where most employees were men rejected female candidates based on unrelated indicators. Resumes that included words like "women’s," such as “women’s chess club captain,” were ranked lower in the selection process.
- A predictive model used by over 500 universities to assess student dropout risk factored in race, leading to racial disparities. Black students were up to four times more likely than their white peers to be flagged as high risk, influencing how advisors guided students toward certain majors.
- A risk assessment tool used to predict recidivism among individuals in federal custody showed racial disparities. The tool overestimated the likelihood of general recidivism for some racial groups while underestimating the likelihood of violent recidivism for others. The Department of Justice has been working to reduce these disparities and has released a report outlining its review of the tool.
- An AI-powered sentiment analysis tool used by tech platforms showed bias against certain groups. For instance, the system flagged the phrase "I’m a Jew" as negative while labeling "I’m a Christian" as positive. This bias could lead to wrongful content moderation, such as blocking posts like "I’m gay." One company has since made its data public to encourage researchers to help correct the issue.
- Search engines have been found to return overly sexualized content when searching for terms such as “Black girls,” “Asian girls,” or “Latina girls” instead of educational resources, toys, or role models. Some platforms are working to address this issue, but it remains a problem.
- Ad delivery algorithms that predict who is most likely to click on certain job ads have reinforced stereotypes, directing cashier job postings mostly to women and taxi-driving positions primarily to Black users.
- TSA body scanners require operators to choose a "male" or "female" scanning mode based on their perception of a passenger’s gender identity, which has led to transgender travelers being disproportionately flagged for additional screenings. Many have reported experiencing distressing and invasive screenings. TSA has announced plans to develop a gender-neutral scanning algorithm while improving overall security measures.
- The National Disabled Law Students Association has raised concerns that AI-powered remote proctoring software unfairly flags students with disabilities as suspicious due to accessibility needs, such as requiring longer breaks or using assistive technology like screen readers or dictation software.
- An algorithm designed to identify patients in need of high-priority medical care assigned lower urgency scores to Black patients than white patients with similar chronic conditions. Additionally, some clinical algorithms adjust medical recommendations based on race or ethnicity, which can contribute to disparities in healthcare treatment.
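The resume-screening example above (resumes mentioning "women's" ranked lower) comes down to a model learning a proxy for a protected attribute from skewed historical labels. The following is a deliberately simplified, entirely hypothetical sketch: a naive per-token scorer trained on a history where most hires were men, so the token "women's" appears mainly in rejected resumes.

```python
from collections import Counter

# Hypothetical training set mirroring the resume-screening example:
# the historical labels reflect a mostly male workforce, so the token
# "women's" appears mainly in resumes that were rejected.
history = [
    ("captain chess club", 1),
    ("captain women's chess club", 0),
    ("lead robotics team", 1),
    ("founder women's coding circle", 0),
    ("lead debate team", 1),
]

# Naive per-token weight: +1 for each hire containing the token,
# -1 for each rejection containing it.
weights = Counter()
for text, hired in history:
    for token in set(text.split()):
        weights[token] += 1 if hired else -1

def score(resume):
    # Counter returns 0 for unseen tokens, so unknown words are neutral.
    return sum(weights[t] for t in set(resume.split()))

# Two resumes identical except for one word inherit the historical bias.
print(score("captain chess club"))          # -> 0
print(score("captain women's chess club"))  # -> -2, solely from "women's"
```

The word "women's" carries no information about job performance, yet the model penalizes it because it correlates with rejection in the biased training data; this is why the principle calls for representative data and testing for demographic proxies.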
Data Privacy
You should be safeguarded from harmful data practices through built-in protections and should have control over how your data is used. Privacy must be protected by default, ensuring that data collection aligns with reasonable expectations and is limited to what is necessary for a specific purpose.
Those involved in designing, developing, and implementing automated systems must seek user permission and respect choices regarding data collection, access, use, transfer, and deletion whenever possible. When direct user control is not feasible, privacy-focused design measures should be in place. Systems should not use confusing design elements that make it difficult for users to make informed choices or set invasive data collection as the default option.
Consent should only be relied upon for data collection when it is meaningful and informed. Requests for consent must be clear and easy to understand and provide users with real control over how their data is used. The current complex and vague notice-and-choice model for broad data use should be improved to ensure transparency.
Stronger protections and stricter limitations should apply to data related to sensitive areas such as health, employment, education, criminal justice, and finance, as well as information concerning minors. In these areas, data should only be used for essential functions and should be subject to ethical review and strict limitations.
Communities should not be subjected to unchecked surveillance. Surveillance technologies should undergo thorough assessments before deployment to evaluate potential risks and establish clear limits that protect privacy and civil liberties. Ongoing monitoring should not be used in workplaces, schools, housing, or any settings where such practices could restrict rights, opportunities, or access. Whenever possible, users should be provided with reports confirming that their data preferences have been honored and detailing how surveillance technologies may impact their rights and opportunities.
Why this Principle is Important
Data privacy is a fundamental principle that supports all other protections in this framework. Surveillance, data collection, sharing, and reuse have become central to business models across industries, with more companies tracking consumer behavior, compiling detailed personal profiles, and feeding this data into automated systems. These systems further monitor, categorize, and impact individuals' lives.
Government agencies, including law enforcement, also develop and use various surveillance technologies that collect data, which is then used in automated decision-making. However, federal laws have not kept pace with the growing scale of private data collection or the ability of government entities to access and utilize that data. Meanwhile, many people are unable to view or control their personal information, as data brokers gather consumer data from multiple sources without permission or awareness.
The use of faulty or outdated data in decision-making can lead to unfair outcomes, such as denying someone a loan or a job. Surveillance in workplaces and schools has increased, and when combined with decision-making processes, it can contribute to anxiety and depression and erode critical thinking. Reports show that data brokers are compiling information to profile entire communities in harmful ways.
Widespread data collection has damaging consequences, fostering distrust, anxiety, and mental health struggles. It also discourages free speech, protest, and worker organizing while posing risks to democratic processes. Stronger protections are needed to address these growing threats. Some companies have started integrating consumer privacy protections into their products by default, including reducing data collection, improving transparency, and strengthening security measures.
Federal government data collection is subject to legal protections that help safeguard civil liberties and, in some cases, limit how long data can be stored. Some states have also introduced consumer privacy laws to mitigate these risks. However, there is still no comprehensive national legal framework governing personal data rights. While existing laws regulate data use in certain areas such as healthcare, employment, education, and credit, there is often uncertainty about how these regulations apply in other settings, particularly as automation becomes more prevalent.
Additional protections are necessary to reassure the public that automated systems are not tracking their activities, gathering personal information, or conducting surveillance without proper legal authority or context-specific consent.
- An insurance provider may use data from a person’s social media activity to determine their life insurance rates.
- A data broker collected vast amounts of personal data but suffered a security breach, putting hundreds of thousands of people at risk of identity theft.
- A local housing authority installed facial recognition cameras at public housing entrances to assist law enforcement in identifying individuals. Video footage of both residents and non-residents was sent to local police for facial recognition analysis.
- Companies have used surveillance software to monitor employee discussions about union activities, gathering data to track individual employees and secretly interfering in those conversations.
Notice and Explanation
You should be aware when an automated system is being used and understand how and why it affects decisions that impact you. Those responsible for designing, developing, and deploying automated systems must provide clear, easily accessible documentation that explains how the system works, the role automation plays in decision-making, and who is accountable for its operation. Users should be notified when these systems are in use and receive clear, timely explanations of how outcomes are determined. Any major updates to the system’s functionality or use should be communicated to those affected.
It is important to understand how an automated system contributes to decisions, even when it is not the only factor influencing the outcome. Explanations provided by these systems must be accurate, useful, and appropriate for the level of risk involved in a given situation. They should also be clear enough for both users and system operators to interpret.
Whenever possible, public reports should be made available in plain language. These reports should summarize the function of automated systems and evaluate whether notifications and explanations meet standards for clarity and transparency.
Why this Principle is Important
Automated systems now influence key life opportunities, from job hiring to credit approvals, and shape everyday experiences in areas such as courtrooms and online education. These systems have a significant impact, yet their role is not always visible. Job applicants may not know whether a hiring decision was made by a person or an algorithm that ranked them lower. A defendant in court may be unaware that a judge’s bail decision was influenced by an automated system labeling them as high risk.
Without clear notice or explanations, people lack the ability to challenge decisions or correct errors. Transparency is also essential for ensuring accuracy and fairness, allowing experts to assess whether an automated system’s recommendations are reasonable. The public must be informed when automated systems are being used. Clear, simple, and understandable notification is necessary to support other protections in this framework.
Many automated systems operate in ways that are complex and difficult to understand, either by design or oversight. This lack of transparency makes it harder to hold systems accountable, but it should not be an excuse for failing to provide meaningful explanations to those affected. Explanations should be a basic requirement when these systems make decisions that impact people’s rights, access, and opportunities.
Providing notice has long been a common and sometimes legally required practice, such as when video recordings are made in public spaces. In some areas, like credit lending, companies are required to notify consumers and explain decisions. Research is actively improving methods to make automated decision-making more understandable. Companies and researchers are working on new approaches to provide clearer explanations, helping the public understand how these systems influence their lives. While some sectors already have rules in place for transparency, people should have consistent access to information when an automated system affects their rights or opportunities. Ensuring clear communication builds confidence and trust in these systems.
- A lawyer representing an elderly client with disabilities struggled to understand why Medicaid-funded home health care assistance was suddenly cut off. During a court hearing, a witness revealed that the state had recently implemented a new algorithm to determine eligibility, but the lack of prior explanation made it difficult to challenge the decision.
- A child welfare investigation was launched against a parent based on an algorithmic risk assessment, yet the parent was never informed that their data was being collected and used in this way. Without notice or an explanation, those assessing child maltreatment risk had no way to verify the system’s accuracy, and the parent was denied the ability to contest the decision.
- A predictive policing system claimed to identify individuals most at risk of committing or becoming victims of gun violence by analyzing social ties, criminal history, past incidents of gun violence, and other data. However, individuals were added to a watchlist without any explanation or transparency regarding how the system reached its conclusions. Both the public and law enforcement need to understand how such decisions are made.
- A benefits system changed its eligibility criteria without notice. Some individuals were wrongly denied benefits due to data entry errors and system flaws, which were only discovered after an explanation was requested. Without transparency, these errors remained uncorrected for longer than necessary.
Human Alternatives, Consideration, and Fallback
You should have the option to opt out when appropriate and be able to reach a person who can quickly address and resolve any issues you face. When possible, you should be given the choice to opt out of automated systems in favor of human assistance. The availability of this option should be based on reasonable expectations in different situations, ensuring accessibility and preventing significant harm. In some instances, laws may require a human or alternative process.
If an automated system fails, produces an error, or if you wish to challenge its impact on you, there should be a timely process for human review and resolution. This backup system should be fair, effective, easy to access, and maintained with proper training for those responsible for handling concerns. It should not create unnecessary difficulties for the public.
Automated systems used in critical areas such as criminal justice, employment, education, and healthcare should be designed to serve their intended purpose while allowing oversight and human review for major or high-risk decisions. Those interacting with these systems should receive proper training. Public reports should be provided whenever possible, detailing how these human review processes work, their effectiveness, and how accessible they are to those affected.
Why this Principle is Important
There are many reasons why people may prefer not to rely on automated systems. These systems can be flawed, leading to unexpected outcomes, reinforcing bias, or being difficult to access. They may also be inconvenient or replace manual processes that people are accustomed to using. However, individuals often have no choice but to use these systems or must go through a frustrating process to reach a human decision-maker when they no longer wish to rely on automation.
Because of the lack of human oversight, many people face delays or even lose access to rights, benefits, and essential services. When important opportunities or access to services are on the line, individuals should have the ability to opt out of automation and choose a human alternative without being penalized. In some situations, legal requirements mandate a human option, such as reasonable accommodations for people with disabilities.
In addition to the ability to opt out, there should also be a human fallback system to intervene when an automated system fails or causes harm. No matter how well a system is tested, errors will still occur. In time-sensitive situations, individuals should not have to wait for assistance – human intervention must be available immediately. In many critical situations, such safeguards already exist. For example, a building manager can open a door if an automated access system malfunctions.
Automated systems play a role in sensitive areas such as criminal justice, employment, education, and healthcare, where they are used for everything from pre-trial risk assessments to diagnosing illnesses. Without proper safeguards, these systems can lead to unfair, inaccurate, or even dangerous outcomes. These high-stakes situations require strong human oversight to prevent harm.
Some protections have already been implemented in certain areas. When automation is introduced into government services, manual processes like paper and phone-based applications are often still available as alternatives. Many automated customer service systems still allow users to press zero to speak with an operator. At airports and border crossings, human staff assist travelers who encounter issues with automated identity verification. These measures reassure the public that they can access necessary services and protections.
While human decision-makers can also make mistakes, having a fallback option provides an essential check on automated systems and ensures that errors do not unfairly impact people.
- In many parts of the country, an automated signature verification system is used during elections to confirm mail-in ballots. These systems are less accurate for some voters, including those with disabilities, hyphenated names, or name changes. A manual signature verification process allows voters and election officials to review and correct mistakes to ensure every vote is counted.
- In Colorado, the unemployment benefits system required applicants to verify their identity using a smartphone, but no human alternative was available. This policy prevented many individuals from accessing benefits.
- A fraud detection system for unemployment insurance mistakenly flagged claims as fraudulent, leading to wage withholding and seized tax returns. Affected individuals had no opportunity to explain or appeal the decision.
- A hospital patient was wrongly denied pain medication because the hospital's software mistakenly merged her medical records with those of her dog. Even after she identified the issue, doctors were hesitant to override the system, leaving her without the necessary treatment.
- A major corporation automated employee performance evaluations and HR functions, resulting in workers being terminated by an algorithm without any opportunity for human review, appeal, or dispute resolution.