Artificial intelligence can make our societies more equal. Here’s how
Unconscious biases built into algorithms introduce risks of discrimination in decision-making. How can we manage these risks?
Brandie Nonnecke, PhD, is Founding Director of the CITRIS Policy Lab (https://citrispolicylab.org/) and Co-Director of the CITRIS Tech for Social Good Program at UC Berkeley (https://berkeley.techsocialgood.org/). She is a Fellow at the World Economic Forum, where she serves on the Council on the Future of the Digital Economy and Society. Her research has been featured in BBC News, MIT Technology Review, PC Mag, BuzzFeed News, Mashable, and Stanford Social Innovation Review.
Brandie has expertise in information and communication technology (ICT) policy and internet governance. She studies human rights at the intersection of law, policy, and emerging technologies, with her current work focusing on fairness and accountability in AI. She has published research on algorithmic decision-making for public service provision in urban contexts and on how AI can enhance and augment human labor, especially for aging populations and individuals with disabilities, and has outlined recommendations for better ensuring that applications of AI support equity and fairness.
She received the 2015 IEEE Global Humanitarian Tech Best Paper Award for developing an open-source platform that applies statistical models and collaborative filtering to streamline public feedback on complex societal issues. Brandie was named a 2018 RightsCon Young Leader in Human Rights in Tech and received the 2019 Emerging Scholar Award at the 15th International Conference on Technology, Knowledge, and Society. Her op-eds and research publications are available at https://nonnecke.com/