Findings from our Roundtable on Responsible Data in Product Equity
Earlier this year, Aspen Digital hosted a community roundtable on the urgent need for more responsible data collection practices in product development. This dynamic conversation, which brought together leading technologists and civil society experts, came on the heels of our Primer on Responsible Data Practices in Product Equity. The session opened with a conversation—available below—between Dr. Madihah Akhter and Miranda Bogen, moderated by Aspen Digital’s Shreya Singh Hernandez, on best practices for responsible data collection. Attendees then exchanged insights about operationalizing responsible data use in product development. Below are some key takeaways from the roundtable discussion.
If you find this resource valuable or share it with your team, we would love to hear from you. Your feedback helps us measure and expand the impact of this work. Reach out to us at TechAccountabilityCoalition@aspeninstitute.org.
Achieving fairness in data collection and use while protecting privacy is as complex as it is important. This is especially true for systemically marginalized groups. As Miranda Bogen noted, “Equity and fairness measurements don’t, per se, require suddenly collecting an enormous amount of sensitive data.” At the same time, it is essential to balance the protection of individual identities with the need for comparative measurements across populations.
“Equity and fairness measurements don’t, per se, require suddenly collecting an enormous amount of sensitive data.”
– Miranda Bogen
As Bogen said, “People often get blocked, either by colleagues or by concern over violations of privacy, from engaging in this work. I think that’s where we have to push through and tackle this with nuance.”
Others echoed recommendations from the Primer on Responsible Data Practices in Product Equity, like ensuring secure data environments, protecting anonymity through data dissociation practices, and setting clear intentions and levels of granularity to get the right scale of information to ensure both fair and effective use of data for product development. Participants also exchanged relevant use cases and best practices in aligning goals around responsible data collection and use across legal, engineering, product, design, marketing, and other teams.
Dr. Madihah Akhter emphasized, “Not every practice is at the same stage of maturity. We might have to start by thinking about evaluating and mitigating harm and risk to the user as well … Data collection might not actually be the right approach for your team, your product, or your company at this point in time for various reasons.” Attendees agreed that defining the purpose of data collection and aligning team goals and metrics of success are essential. Mitigating user harm early and intentionally, while exploring alternative data-gathering methods, is a necessary step toward aligning teams with varied practices and objectives.
“Data collection might not actually be the right approach for your team, your product, or your company at this point in time for various reasons.”
– Dr. Madihah Akhter
Cross-functional alignment on terminology is crucial. Dr. Madihah Akhter provided a valuable example: “The word ‘bias’ has multiple meanings (statistical, legal, etcetera) that can lead to confusion about what is being measured, how risky that is from an organizational perspective, or what the implications are of a particular piece of research. Keeping your ear out for those mismatches in understanding is absolutely critical even once you have definitions, because nipping those in the bud can help smooth things over, whereas having them fester can lead to days and hours of [additional work and realignment across teams]. Understanding assumptions is important.”
Misunderstandings stem from differing definitions and goals. Solutions require clear governance, common metrics, and policy-aligned language. Miranda Bogen added insight from her experience across teams: “Fostering continuous education with your stakeholders is not just the first thing to do; it’s the thing that you will continue to do throughout your work. Understanding your audience together and making sure that [partnered teams] understand what the implications of this kind of work are for them, what the clear responsibilities of their roles are, and how it is related to their jobs. Continuous education is something that needs to happen within all of the teams mentioned: legal, product, engineering, AI teams, design, research, etc. Everybody has a stake in the work.”
“Fostering continuous education with your stakeholders is not just the first thing to do; it’s the thing that you will continue to do throughout your work.”
– Miranda Bogen
While shifts in the current US administration’s approach to diversity, equity, and inclusion have led to flags on hundreds of terms central to this work, such as “equity” and “inclusion,” some states are offering an alternative approach to safeguard it. For instance, California is considering legislation that would regulate the development, testing, and deployment of “automated decision systems” and put more protections in place. This shifting risk landscape poses a challenge to practitioners who have worked to use data equitably, building products that previously excluded people can not only use but fully enjoy, which often benefits both a company’s bottom line and innovation in society at large.
Evolving and diverging regulations also shape responsible data collection and use. Industries with longer histories of regulating sensitive data (e.g., health, finance, housing) offer instructive case studies and, at times, best practices around processes and protections. Multinational companies often adopt the most protective practices from countries with stricter regulations as a way to simplify their data processes; as a result, the EU can have disproportionate influence over tech practices. Lessons from more regulated sectors, combined with the direction set by influential frameworks like the EU AI Act, will be instrumental in forging a path toward more responsible data use across industries.
Implementing privacy and responsibility by design, limiting data collection to only what is necessary, and ensuring responsible data retention are crucial. Product teams must align on data processes and shared goals, adapt language from policy to practice, enhance data infrastructure, and establish governance to achieve their objectives of responsible data use. Lastly, in an evolving legal and labor landscape, keeping abreast of developments in data policy, from local to international, is increasingly important.
We invite you to collaborate with the Product Equity Working Group to help shape industry-wide best practices. Through our collective efforts, we can accelerate change and work toward a future where more people can access and fully benefit from all that tech has to offer.
If you are interested in joining the Product Equity Working Group, please email us at TechAccountabilityCoalition@AspenInstitute.org.
Dr. Madihah Akhter
Co-Lead of the Product Equity Working Group
Dr. Madihah Akhter is a product inclusion researcher and strategist. She spends her days finding creative ways to embed equity into product development, center under-served customer needs, and translate data-driven insights into action.
Madihah holds a PhD from Stanford University and an MA from Tufts University. She has a background in academic research, diversity program management in higher education, and product equity consulting with tech clients, where she focused on building emerging product equity practices and conducting foundational research. When she’s not helping teams build equitable product experiences, Madihah can be found hiking in the Santa Cruz mountains.
Miranda Bogen
Director of AI Governance Lab, Center for Democracy & Technology
Miranda Bogen is the founding Director of CDT’s AI Governance Lab, where she works to develop and promote adoption of robust, technically-informed solutions for the effective regulation and governance of AI systems.
An AI policy expert and responsible AI practitioner, Miranda has led advocacy and applied work around AI accountability across both industry and civil society. She most recently guided strategy and implementation of responsible AI practices at Meta, including driving large-scale efforts to measure and mitigate bias in AI-powered products and building out company-wide governance practices. Miranda previously worked as a senior policy analyst at Upturn, where she conducted foundational research at the intersection of machine learning and civil rights, and served as co-chair of the Fairness, Transparency, and Accountability Working Group at the Partnership on AI. Her writing, analysis, and work have been featured in media including the Harvard Business Review, NPR, The Atlantic, Wired, Last Week Tonight, and more.
Miranda holds a master’s degree from The Fletcher School at Tufts University with a focus on international technology policy, and graduated summa cum laude and Phi Beta Kappa from UCLA with degrees in Political Science and Middle Eastern & North African Studies.