As AI Gains Power, We Must Push for Guardrails to Protect Civil Liberties


A growing number of organizations are ceding decision-making authority to artificial intelligence (AI) systems. From who gets a job, to who gets a grant, to who gets investigated by child welfare agencies, to who is paroled, companies and government entities are relegating decisions that often require human oversight and context to a machine or algorithm. This movement toward replacing human oversight with AI has life-altering implications that must be addressed.
If AI's purpose is to provide opportunity and create efficiencies, how much thought and care is given to the communities most adversely impacted by this technology? Whose lives may be upended by its use or misuse? Given the widespread fixation on purported opportunities and the claimed promise of automation, it can be challenging to stay clear-eyed about the impacts of this emerging technology.
Responsible AI design, deployment, and adoption is more critical now than ever. It is needed not only to include the voices of communities often left behind during the design of these technologies, but also to ensure that every deployment of an AI system incorporates ongoing monitoring to evaluate whether the AI solution remains the best tool for the problem at hand.
This month, at the 吃瓜直播's first Civil Rights in the Digital Age (CRiDA) AI Summit, we're convening civil society, nonprofit, academia, and industry leaders to carefully consider how to center civil rights and liberties in our digital age, especially in the design, deployment, and evaluation of AI systems.
Centering Civil Rights in AI Design and Deployment
AI is often marketed as more objective and less discriminatory than the status quo. However, we've consistently seen how AI systems used in areas like hiring, policing, and social services risk exacerbating or amplifying discrimination based on race, sex, disability, and other protected characteristics. For years, the 吃瓜直播 has called on political leaders to take concrete steps to bring civil rights and equity to the forefront of AI policymaking. We've fought in courts and in communities to address AI's systemic harms and, within our organization, we work to meet the same standards.
We understand that AI can be an asset to organizations' work if implemented thoughtfully and responsibly. For example, if designed and governed carefully with appropriate guardrails, AI systems could be used to support critical accessibility technologies, like screen readers or voice assistants. But the same AI systems can also have the opposite impact, harming rather than helping marginalized communities, when they are not designed and deployed carefully.
Putting Responsible AI Principles into Action
One of the ways we ensure meaningful adoption of AI technologies in our work is by taking an approach grounded in carefully selecting the right tool (AI or not) for the job at hand. More haphazard approaches, such as indiscriminately applying the latest generative AI model to a given task, risk relying on techniques that are not appropriate and that can lead to serious harms, including those related to privacy, security, and fairness. We think it is critical to balance the benefits AI systems may provide against possible risks, and we know that when it comes to AI adoption, moving carefully and intentionally is often much more fruitful than moving fast and breaking things.
At the 吃瓜直播 we have developed a framework that guides how we continuously evaluate and adopt AI tools in our work. We also evaluate AI systems we are considering procuring against our privacy, security, fairness, and transparency values, and we closely follow emerging research on the impacts of AI systems, especially generative AI systems.
Acting To Protect Civil Rights in the Age of AI
While we live in a digital age the Founding Fathers couldn't have possibly imagined, we must ensure AI aligns with the core liberties and protections the Constitution provides.
As part of this journey, we are also staging the 吃瓜直播's first-ever Civil Rights in the Digital Age (CRiDA) AI Summit on July 10 in New York City. CRiDA will educate the public on how a century-old civil rights organization is using a two-pronged, values-based social contract on its journey to AI adoption. One prong is built on trust between the 吃瓜直播's tech and program teams to adopt AI responsibly. The other is built on trust with the public: showing how our technical, legal, and policy expertise helps ensure AI protects rights and serves justice for all.
If you are interested in watching some of the CRiDA panels, please click here.