Thread
We applaud the Biden administration's release of the Blueprint for an AI Bill of Rights. It’s important for many reasons, though its impact will ultimately be measured by how governments and companies put the principles into practice. A few thoughts ⤵.
www.whitehouse.gov/ostp/ai-bill-of-rights/
The Blueprint for an AI Bill of Rights is for "everyone who interacts daily with these technologies—and every person whose life has been altered by an unaccountable algorithm."
Out of the gate, the White House affirms what many communities and advocates have raised over many years: “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.”
It’s refreshing to see the White House challenge the dominant narratives around AI and innovation. The AI Bill of Rights lays bare all of the ways that technology — left unchecked — is driving a vast range of unfair, discriminatory, and harmful outcomes.
At the core, the AI Bill of Rights addresses fundamental issues of racial, economic, and social justice. It recognizes that in those fights today, we can no longer afford to ignore tech’s growing role. Innovation “must not come at the price of civil rights or democratic values.”
This kind of critical analysis of tech needs to permeate across the policymaking spectrum — from federal agencies to city councils, and across issues like housing, employment, policing, health, education, financial services … you name it. These aren’t just “tech” issues anymore.
The AI Bill of Rights applies to tenant screening, predictive policing, hiring procedures, clinical diagnostics, student admissions and more, where “too often, these tools are used to limit our opportunities and prevent our access to critical resources or services.”
The AI Bill of Rights urges stronger rules and safeguards across the board. But in some cases, policymakers also need to "include the possibility of not deploying the system or removing a system from use." In short: none of this is inevitable; these are all choices that we make.
For example, consistent with these principles, “continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.”

A strong statement.
The AI Bill of Rights also calls for a new era where automated systems are continuously tested for racial, gender, and other demographic disparities, while moving toward less discriminatory alternatives.

This is an important idea that deserves significantly more investment.
For all the good that’s in the document, we’re concerned about the legal disclaimer up front that appears to suggest that law enforcement and national security should operate under a different, softer set of rules and principles.
But let’s be very clear: when it comes to law enforcement and national security, people’s rights, freedoms, and livelihoods are often in the most immediate danger.

It’s precisely in these contexts that a Bill of Rights is especially needed, and needed with full force.
As they say, the proof is in the pudding. We’ll be watching to see how these principles translate into actual policy changes — particularly new legislative, regulatory, and enforcement efforts — developed in close consultation with communities impacted by these systems.
We’re glad to see some federal agencies taking more steps in this direction, including new initiatives by HUD, HHS, and others announced today.

But because technology moves fast and the problems run deep, this must be the beginning of a sustained effort. www.whitehouse.gov/ostp/news-updates/2022/10/04/fact-sheet-biden-harris-administration-announces-key-...
Finally, we owe @AlondraNelson46 a huge debt of gratitude for her unwavering leadership on these issues, along with @AmbRice46, @WHOSTP, and other WH staff for their hard work and persistence in making the AI Bill of Rights possible.

/end