As AI redefines power and possibility in the 21st century - and authoritarianism ascends worldwide - civil society must harness AI as one of its own tools to protect pluralism and democracy.
Artificial intelligence (AI) and machine learning represent an epochal change to human society - akin to the advent of the printing press, electricity and the internet.
Yet the people and movements dedicated to preserving open, democratic societies - and to making the future accessible and safe for all - have little grasp of how this technology works, and little to no access to those who do.
AI holds great promise for strengthening democracy - enabling safer, freer elections, helping citizens discern truth and sound policy, exposing corruption, deepening civil and human rights, and more. Yet we are not supporting these applications.
Pluro Labs was created to jumpstart this work: addressing critical threats to democracy while building lasting, AI-fluent capabilities across our field.
Pluro Labs harnesses AI to deepen and protect democracy through three activities:
We support technologists in developing and releasing AI-enabled tools that deepen democracy. These tools protect elections and democratic institutions, root out corruption, counter misinformation, elevate truth, foster civic engagement, combat political extremism, protect marginalized groups, and more.
We partner with civil society practitioners - both in the US and abroad - to apply new AI-enabled tools and datasets to critical challenges facing open societies. These partners include human and civil rights advocates, journalists, litigators, organizers, and policymakers working on social media, data, AI, and democratic governance.
We share what we learn - through briefings, reports, and datasets - to help civil society safely apply AI to preserve democratic norms in the automation age. Our projects are also selected to generate research, data, and evidence that help advocacy and policy actors working on the governance of AI, data, and social platforms.
Sofia is an open-source investigator and data analyst with experience across human rights, democracy protection, and digital investigations for legal accountability. She is skilled in mass social media discovery, deepfake identification, and advanced visual analysis techniques.
Will is a social entrepreneur, applied technology executive, and democracy advocate. He served as founding CEO of Groundswell, a pioneering social enterprise that makes clean energy accessible to disadvantaged communities in the US and continues to thrive today. He led product development at a Bay Area AI and emerging-tech software firm before building tech and innovation teams at CARE and Human Rights First.
He has written and spoken on AI, tech, and democracy at FastCompany, Stanford University, and UC Berkeley. He has been honored as an Ashoka Fellow, World Economic Forum Global Shaper, Forbes 30 Under 30 entrepreneur, White House Champion of Change, and Stanford d.school Fellow.
Mackenzie is an operations and program management specialist with a background in evidentiary research. She supports Pluro Labs' engagement with policy, legal, and advocacy stakeholders.
Mackenzie is an experienced open-source investigator in the human rights and democracy field. Over five years, she has built expertise in the digital verification and documentation of gross human rights abuses around the world. She has conducted investigations at Amnesty International USA and Amnesty International's research arm, the International Secretariat. Mackenzie is passionate about using technology and data to drive accountability and inform ethical AI governance.
Janine Graham is an investigative researcher specializing in open-source intelligence (OSINT) techniques applied to subjects of public interest. Her work has ranged from war crimes and illicit supply chains to tracking persons of interest, for organizations such as UC Berkeley's Human Rights Center, The Associated Press, and The Wall Street Journal. As a journalist, she previously worked for CNBC and CNN International.