Connecting the dots between the cybersecurity challenges of today and the topics that matter to you.
The 9th annual Aspen Cyber Summit made its debut in Washington, DC, on September 18. Watch the recording.
Last month, Aspen Digital brought together over 30 leaders from a diverse range of backgrounds, including tech, government, elections administration, and civil rights, for a frank discussion about the impact of technology on historically marginalized communities and their access to the vote. Entitled "Algorithmic Suppression: Democracy, AI, and the Future of Civil Rights," our roundtable examined new forms of democratic suppression enabled by AI and other technologies.
The stakes for this year’s elections couldn’t be higher. Trust in basic democratic institutions, particularly among communities of color, has been systematically eroded by malicious actors intent on sowing discord and confusion. Rising hateful rhetoric online increasingly translates directly into real-world violence. While attacks on such communities are not new, 2024 has ushered in a unique set of challenges. Since the last election, social media companies have reduced the ranks of content moderators, some academic research institutions that study malicious online behaviors have retreated under pressure, and AI-driven applications make it easier and cheaper for bad actors to fan the flames of division and distrust.
In order to future-proof our democracy, tech leaders will need to engage more directly with the communities their products impact. The roundtable reflected that spirit, with some sessions focused on AI mythbusting and others centering the communities’ perspectives on threats that often go overlooked.
Participants worked through a variety of potential scenarios that might play out between now and Election Day and the subsequent certification process. This exercise and the discussions throughout the day surfaced a number of recommendations and observations about the upcoming 2024 elections and how communities can achieve long-term resilience.
As we plan for AI and the future of civil rights, several key insights from the group will prove valuable:
Advances in generative AI since the last election mean that manufacturing false content at scale and targeting it with precision is faster, cheaper, and more accessible to bad actors than ever before. The risk remains high in the lead-up to November and in the immediate aftermath, when time and resources for both tech companies and government will be stretched thin.
Participants predicted that AI might be used to target certain communities with disinformation about candidates, or the means and manner of voting. This could come in the form of micro-targeted AI-generated images, videos, in-language message groups, and voice calls.
Based on past experiences with malicious actors targeting communities of color, leaders also voiced concern that deepfakes may be used to spread false but alarming content (e.g., riots at a polling place), or to show election workers appearing to engage in voter fraud.
Participants expressed deep concern about the rapid decline of local journalism in many parts of the country, and the rise in fake local news sites. Without “watchdogs” like journalists working to report on and fact-check community-specific matters such as local and state races, mis- and disinformation could spread more rapidly.
There are particular implications for communities where non-English communication is prevalent, as they are often less connected to the broader media ecosystem and are, therefore, prime targets for bad actors looking to spread false information. AI-generated translation makes this form of manipulation faster and cheaper.
Challenges driven by AI have the potential to be worse at the local and state level because of decreased public engagement in down-ballot races. This may be exacerbated during non-presidential years, when public awareness of elections and candidates is more limited.
Leaders in every sector asked for more coordination among communities, election officials, and tech companies to help them mitigate potential threats to their democratic and civil rights.
Community leaders expressed a strong belief that tech companies have a responsibility to create clear and useful processes for engagement with community leaders and election officials. They noted that the process of engaging with tech companies is currently too opaque and lacks consistency both within individual companies and across the industry.
Community leaders are also calling on technology companies to better prepare their communities with information about how to recognize targeted AI-driven mis- and disinformation and how best to report it.
Tech executives agreed with community and election leaders on the need to streamline communication. Oftentimes, tech leaders will give community representatives their direct contact information to cut through the ambiguous process, but they acknowledge that this is not a scalable solution.
For the long-term, participants emphasized the need for innovation to be “done by us” not “done to us,” especially when it comes to designing and testing new products.
Leaders across the board emphasized how targeted engagement with communities can help build trust in the voting process, especially when prefaced by an acknowledgment of past harms. Local members of the community are best positioned to educate technology companies on the cultural specificities of under-represented groups in order to better equip platforms to address hyper-targeted attacks.
Participants stressed that ethical guardrails for AI products are not enough to predict or mitigate harm to a community if they are not also represented in the design or testing stages. Given the recent deep cuts to DEI commitments across the industry, this issue has become particularly acute. Community leaders discussed the idea of “bridging” roles within technology companies – product development experts from underserved communities who have first-hand knowledge about how their specific communities interact with tech products.
There was also commentary that the suppression of democratic rights is not limited solely to elections. The group shared that without ongoing engagement by tech companies, AI-driven harms against communities will only continue to snowball.
Throughout the roundtable on AI and the future of civil rights, the need for better communication, collaboration, and centering of directly impacted communities emerged as a critical throughline. While this roundtable took place in the context of the 2024 election, this group of technology, community, government, and elections leaders universally understood that these issues will not be unique to this November. Technology will only become more sophisticated, and the potential for bad actors to use AI-driven technology to influence elections and modernize suppression will only increase in 2026, 2028, and beyond.
To best serve their own communities and ensure that everyone has access to their democratic rights, social justice and civil rights leaders need a consistent flow of information and engagement on new and emerging technology and how rising trends may impact their democratic participation. Likewise, tech companies must recognize the importance of community feedback and co-design at every stage of the product design and development process. By building a lasting bridge of trust, we can ensure that every community enjoys their full democratic and civil rights.