Blog post
The Atlantic Declaration Action Plan (ADAPT): Artificial Intelligence and data bridge
Published 22 June 2023
Key Points
- The UK Government has been striving to navigate a distinct digital regulatory regime away from the EU’s precautionary thinking, and in some areas the UK already has regulatory convergence with the US.
- The UK Government has signalled a ‘light-touch and pro-innovation’ regulatory regime for the digital economy while declaring support for the highest levels of privacy, patient safety and responsible use of data in the health system; these objectives are contradictory and difficult to reconcile.
- The Declaration stressed a ‘comprehensive approach’ to Artificial Intelligence (AI) risks and opportunities, and that comprehensive engagement with stakeholders including companies, research, civil society, and allies and partners, is essential for responsible AI innovation. However, so far, the inclusion of civil society, users of technologies and consumers in the setting of laws and standards has not been satisfactory.
- The Declaration includes establishing a US-UK Data Bridge to facilitate data flows, but it is unclear how the UK will deal with the tensions between privacy, state surveillance and the economic need for data flows.
- Opening a ‘data bridge’ with the US without addressing the issues that block EU transfers could jeopardise data flows with the EU.
- If the Data Protection and Digital Information Bill (No. 2) goes ahead, decisions on whether data may be transferred to another country will be made by ministers through secondary legislation, and the lower criteria could be used to create ‘data bridges’ to jurisdictions that would not be permitted under current requirements.
- The Declaration also promises that the UK and US will coordinate to further promote trust in the digital economy, including through support for the Global Cross-Border Privacy Rules. However, this certification provides a lower level of consumer guarantees than the current GDPR-based UK framework, and is expensive and time-consuming compared to adequacy, particularly for Small and Medium-sized Enterprises.
In recent years, in pursuit of market competitiveness, risk management, and national security, countries have been adopting national strategies for ‘critical technologies’ such as artificial intelligence (AI), semiconductor chips, and quantum computing. Asserting digital sovereignty has become the primary goal of national AI regulation and governance agendas, and states have been developing AI strategies to give their industries a competitive advantage in the global supply chain. However, divergence in national or regional AI regulations leads to ‘digital fragmentation’, which poses challenges to interoperability (the ability of computer systems or software to exchange and make use of information) and to the free data flows that are critical for digital innovation.
To redress the fragmentation problem, governments have been working with like-minded countries and entering into regional or bilateral trade agreements to build technological alliances. For example, the Atlantic Declaration Action Plan (ADAPT): A twenty-first century US-UK Economic Partnership, jointly announced by the UK and US governments on 8 June 2023, strengthens the two countries’ partnership in leading critical technology innovation. This post focuses on two aspects of digital collaboration in the Atlantic Declaration: Artificial Intelligence (AI) and the data bridge.
Artificial Intelligence (AI)
After leaving the EU, the UK Government has been striving to navigate a distinct regulatory regime, moving away from the EU’s precautionary thinking towards a ‘pro-innovation approach’ while aiming to provide the highest levels of privacy, patient safety and responsible use of data in the health system. However, many of these objectives are contradictory and difficult to reconcile. EU technology sovereignty concerns interests of fundamental importance, including the safety of the economy and society, public and national security, and the protection of European values and rights, as seen in the proposed EU AI Act. In diverging from the EU regulatory regime, the UK’s pro-innovation approach is gradually converging with the US model. Aspects of regulatory alignment with the US can already be seen in the case of AI and Machine Learning-enabled medical devices.
In March 2023, the UK Government published the AI White Paper, adopting a pro-innovation approach to regulating AI based on the proportionate interpretation of five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The Government hopes that this principles-based and proportionate approach will enable regulation to keep pace with a fast-evolving technology. As the five principles remain vague, technical standards and working guidelines will need to be developed to demonstrate how they are operationalised on the ground. The making of relevant laws, standards and guidelines will need to engage a wide range of actors across the AI life cycle in order to avoid bias and discrimination.
Notably, the Atlantic Declaration stressed a ‘comprehensive approach’ to AI risks and opportunities, and that comprehensive engagement with stakeholders including companies, research, civil society, and allies and partners, is essential for responsible AI innovation. However, so far, the inclusion of civil society, users of technologies and consumers in the setting of laws and standards has not been satisfactory. This mirrors reported efforts by the US and UK governments to exclude civil society from the development of the new Convention on Artificial Intelligence of the Council of Europe.
Data bridge
‘Data Bridge’ is a new term coined by the UK. The previous comparable legal arrangement, the Privacy Shield, which applied to the UK as an EU member, was declared unlawful by the Court of Justice of the EU. The underlying tensions between privacy, state surveillance and the economic need for data flows remain, and it is unclear how the UK will tackle them. The text of the Declaration itself is quite vague, committing only ‘in principle to establish a U.S.-UK Data Bridge to facilitate data flows between our countries while ensuring strong and effective privacy protection’.
The UK Parliament is currently debating the controversial Data Protection and Digital Information Bill (No. 2). The draft Bill proposes changing the criteria the UK will use to establish the adequacy of data protection in other jurisdictions to which UK data may be transferred (found in Article 45 UK GDPR) from requiring ‘essentially equivalent’ protections to the lower bar of requiring that protection standards are not materially lower than in the UK. The Bill is designed to promote the interests of businesses over consumer protections, and we can expect ministers to use this lower criterion to create ‘data bridges’ to jurisdictions that would not be permitted under the current requirements.
The Atlantic Declaration also promises that the UK and US will ‘coordinate to further promote trust in the digital economy, including through support for the Global Cross-Border Privacy Rules (CBPR) Forum’. The Global CBPR is a new US-led framework, based on the Asia-Pacific Economic Cooperation (APEC-CBPR) privacy rules, designed to provide an alternative to the GDPR; the UK became an associate member in April 2023. The CBPR regime rests on private business commitments and certification by commercial entities, and it provides a lower level of consumer guarantees than the UK GDPR framework while being expensive and time-consuming compared to adequacy, particularly for Small and Medium-sized Enterprises.
We should situate these commitments in the wider context of global digital governance, where divergence in approaches (EU, US, China) leads to fragmentation, which hampers data sharing and open innovation. The UK strategy of simply creating ‘bridges’ across regimes with different regulatory approaches may not be enough to build a global data hub, and it could even undermine societal trust in the uptake of digital technologies in the UK.
Conclusion
The Atlantic Declaration confirms the UK's drift away from the EU regime as it positions itself as an AI power and a global data hub. The Declaration also provides a foundation for a technological alliance with the US regulatory regime; however, its vague terminology remains to be defined, and the proposed ‘comprehensive approach’ will need to be implemented if it is to embody the principles of AI regulation: inclusivity, responsibility, accountability, and transparency. Engaging with the underlying reasons why some governments and ordinary people wish to limit data transfers (privacy, disinformation, economic imbalances, and so on) may be a more sustainable approach than data bridges or private certification. This requires a coordinated multilateral effort beyond the Atlantic Declaration. While the UK has entered into new trade agreements with other economies, regulatory interoperability between jurisdictions remains a major issue. Moving ahead, regulatory coherence and collaboration will need to continue to be pursued with like-minded trade partners.
Author Profiles

Phoebe Li
