Artificial Intelligence and Trade: A Big Challenge for Inclusivity
Published 17 May 2022
The emergence and proliferation of technologies such as Artificial Intelligence (AI) underpin the transition towards a digital economy in the UK and elsewhere. This brave new digital world comes with big promises—growth, innovation and productivity—while also giving rise to novel challenges, from various kinds of online harm to anticompetitive practices.
A ubiquitous technology that favours concentration
AI is a general-purpose technology that is deployed across the economy in activities as diverse as marketing, legal advice and robotic process automation, in sectors ranging from healthcare and fintech to precision agriculture. What ties AI to trade policy is the fact that it is used in many (digital) services that are easily traded cross-border,[1] and that AI development is highly concentrated in firms of unprecedented size and global reach.[2] Digital trade provisions in the latest wave of trade agreements—including those signed by the UK—also have implications for how AI is developed and used.
The very nature of AI and the market structure it fosters mean that AI is not the great equaliser in the global economy. Rather, the most capable players are often able to benefit the most, and AI can thus widen the gulf between firms and countries at the technological frontier and all others. Trade negotiators will need to work hard to ensure that trade policy and commitments in trade agreements mitigate rather than exacerbate the exclusionary tendencies of AI.
How might AI adversely affect inclusivity?
Essentially, AI combines machine learning techniques (hardware and software) with Big Data, i.e. the vast datasets from which machines learn. Both aspects of AI—the algorithms of machine learning and the reliance on data—can affect inclusivity in the digital economy. This may happen through the way in which AI affects economic outcomes, i.e. who can purchase which kinds of goods or services at what price.
The algorithmic decision-making of AI fundamentally affects citizens as consumers and data subjects. The predictive power of AI, deployed in areas such as speech and image recognition, demand and price estimation or credit scoring, implies that AI is increasingly involved in determining who can participate in economic activities and under what conditions.[3]
In particular, AI algorithms enable price discrimination at a level of sophistication never seen before. Price discrimination typically increases producer surplus at the expense of consumer welfare. More worryingly, AI decisions can discriminate by gender, ethnicity or age, often reflecting biases ingrained in the datasets used to train the respective model. AI is prone to such biases as a result of “algorithmic amplification”; for instance, Twitter’s own analysis found that its image-cropping algorithm deviated from demographic parity by 7% between black and white women, in favour of white women.
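The demographic-parity gap behind that 7% figure is a simple fairness metric: the difference in the rate at which a model selects (here, crops in favour of) members of two groups. A minimal sketch follows; the 54% and 47% selection rates are hypothetical stand-ins chosen only so the gap comes out at 7%, and are not Twitter’s actual data.

```python
# Demographic parity difference: the gap in selection rates between two groups.
# A value of 0.0 means parity; larger values indicate greater disparity.

def selection_rate(outcomes):
    """Share of positive outcomes (1 = favoured by the model) in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical per-image outcomes for 100 images per group (illustrative only).
white_women = [1] * 54 + [0] * 46   # 54% selection rate
black_women = [1] * 47 + [0] * 53   # 47% selection rate

gap = demographic_parity_difference(white_women, black_women)
print(f"Demographic parity difference: {gap:.0%}")  # prints "Demographic parity difference: 7%"
```

Auditing toolkits such as Fairlearn expose this same metric under the name `demographic_parity_difference`; the point of the sketch is only that the statistic is a plain difference of group-level rates, not anything exotic.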
Firms, like consumers, are affected unevenly. AI technology is developed and propagated by only a handful of extremely large companies, but its use permeates the entire economy. In the UK, a study from January 2022 shows that 68% of large companies, 34% of medium-sized companies and 15% of small companies have adopted at least one AI technology. At the same time, 20% of large firms and 8% of medium-sized firms are currently using four or five AI technologies to assist in their business activities. This uptake profile is hardly surprising, but it does show that AI tilts business opportunities further towards firms with large computing capabilities and a specialised workforce. In a similar vein, the UK Government’s policy of promoting open data access wherever possible—a commendable stance in principle—in practice benefits large firms with existing data troves the most.
What can policymakers do about it?
There are various policy levers available to address the inclusivity challenges posed by AI although some are not part of the conventional trade toolkit. The complicated truth is that areas of domestic regulation will increasingly assume trade policy salience. For instance, the UK National Data Strategy, the UK National AI Strategy, laws and regulations governing intellectual property protection and copyright, or investment screening policies for foreign investment in data-rich firms all affect how inclusive future digital and AI-enabled trade will be.
Perhaps the most important of these public policy areas is competition. The influence of companies such as Google, Amazon Web Services or Alibaba at the intersection of AI development, data collection, infrastructure and digital market penetration has become very strong. The problems associated with the resultant market structure cannot be solved satisfactorily by individual countries. Yet effective international cooperation to enforce competition policy and consumer protection in the digital economy has not yet progressed beyond the aspirational stage.
Efforts at international policy coordination are not helped by the fact that the world is divided into ‘digital realms’ led by the United States and China; as a case in point, Figure 1 shows just how geographically skewed global private investment is.[4] Correspondingly, the US, China and the EU pursue their own regulatory approaches, notably with regard to personal data protection, whereas countries such as the UK attempt to negotiate this tripolar world with a cobweb of free trade agreements and—the new kid on the block—digital economy agreements (DEAs). In light of the above, the DEA provisions of greatest relevance for trade and inclusivity are not the ones entitled ‘Artificial Intelligence’ (e.g. Art. 8.61-R UK-Singapore DEA) but rather articles on the free flow of data and source code protection.
Another route for policy influence is ongoing work at international standard-setting bodies to define AI standards on concepts and terminology, data, bias and governance. These efforts matter because such standards could prove highly consequential, yet they go largely unnoticed and it is far from clear that all stakeholders can have a say in these processes.
It would be a huge achievement if the benefits of AI technologies—enabled by and disseminated through international trade—could be reaped without adverse consequences in terms of inclusivity. Key enabling factors to achieve this are:
- An unprecedented level of coordination across subject areas and government bodies, within and across countries.
- Safeguarding contestable markets, which is vitally important when technology favours—if not forces—'winner-takes-all' outcomes. Inclusivity entails that small enterprises and innovators too can thrive.
- Careful work by trade negotiators to ensure commitments in trade agreements mitigate rather than exacerbate the exclusionary tendencies of AI.
- A transparent and inclusive approach to international standard-setting, for this will define the shared set of rules for the digital economies of the future.
[1] Outside services trade, advances in AI as measured by patents or trademarks are most prominent in highly tradable goods sectors such as “Computer and electronics” and “Machinery”; see OECD 2022, p. 10.
[2] This echoes the general tendency of digitisation to abet a platform-type economy with a few large intermediaries, in both cases the result of exorbitant fixed costs that few can shoulder.
[3] The economic applications discussed here are only the tip of the iceberg, considering that AI is also deployed in areas ranging from social security and policing to autonomous weapons.
[4] In 2021, the United States led the world in both total private investment in AI and the number of newly funded AI companies, three and two times higher, respectively, than China, the next country in the ranking (cf. AI Index Report 2022, Chapter 4.2).