2023 Conference on International Cyber Security | 7-8 November 2023


Panel 4 | Big Tech and Democracy

Isabella Wilkinson

Isabella Wilkinson is a Research Fellow in the Digital Society Programme at Chatham House. Formerly, she was a Research Associate in the International Security Programme and part of the Editorial Team for the Journal of Cyber Policy. Wilkinson specialises in the geopolitics of international cyber and technology governance, the online information environment, and advancing equity, diversity and inclusivity in technology and cyberspace. Wilkinson co-leads Women in International Security UK.


Abstract

Keynote

Defending democracy: understanding technology company action and coordination in support of democratic integrity, from promises to pledges

In 2024, facing the world’s biggest electoral megacycle, technology companies have deepened existing roles and forged new ones in safeguarding democratic processes. Many major technology companies have published or updated their approaches to mitigating and addressing mis- and disinformation spread on their platforms or generated and amplified by their technology. These approaches range from partnerships to the promotion of principles and adherence to existing agreements or codes.

While industry-led agreements and principles on cyberspace and technology are not a new phenomenon, never before have technology companies taken such proactive and coordinated action directly in support of electoral integrity.

This paper seeks to understand what motivates technology companies to pioneer, support and coordinate voluntary, non-binding commitments to uphold democracy and, specifically, to protect electoral integrity. What are the incentives and pressures underlying these actions, and how are they operationalised through independent and coordinated action?

To explore these two questions, this paper focuses primarily on the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, a voluntary pledge signed by 21 companies in February 2024. It considers four companies positioned in different parts of the ecosystem for creating and disseminating deceptive content: Adobe, Anthropic, Microsoft and TikTok. It draws on textual analysis of company public statements and previous joint statements and declarations.

Designed for both a policy and an academic audience, one of the paper’s main contributions is an adaptive typology of inter-linked incentives and pressures driving independent and coordinated action: regulatory and political; commercial and reputational; security and operational; and normative. It argues that variable alignments in incentives and pressures, as opposed to full compatibility, drive strategic coordination among market competitors, some of whom can be classified as proactive coordinators seeking to strategically self-regulate.

The paper concludes by considering what technology companies’ public positioning on protecting electoral integrity implies for the shifting roles of, and expectations for, private actors in mitigating technology-based and technology-enabled threats.