Convergence or Coincidence? Why National AI Strategies Sound the Same
London School of Economics and Political Science
If every country is independently developing its own national AI strategy, why do they all sound so similar?
A striking feature of the AI Folio corpus is the degree to which national AI strategies converge on the same themes, the same structure, and often the same language. "Human-centric." "Ethics-based." "Innovation-friendly." "Trustworthy AI." These phrases appear across strategies from jurisdictions as different as Singapore, Uruguay, Finland, and Egypt. The convergence is not accidental — it is the product of identifiable mechanisms of policy transfer and normative diffusion.
The clearest evidence comes from framework citations. ISO standards appear in 78 of 104 strategies; OECD Principles in 59; UNESCO's Recommendation in 35. When governments cite the same frameworks, they are not merely acknowledging shared references — they are importing shared definitions, shared problem framings, and shared vocabularies. The result is a corpus in which the surface diversity of national approaches conceals a deeper structural homogeneity.
The 15 jurisdictions that have published multiple strategies over time show a consistent pattern: later strategies converge more strongly toward international norms than earlier ones. Germany's four strategies, China's three, and the European Union's three all show increasing alignment with dominant international frameworks over time — even where the substantive policy choices diverge. The language of AI governance is globalising faster than AI governance itself.
The convergence has a geography. European strategies form the most coherent similarity cluster in the corpus, which is expected given shared regulatory context. More surprising is the degree of textual similarity among strategies from countries with no formal institutional relationship — suggesting that policy diffusion is operating through informal channels: international conferences, consultancy networks, seconded officials, and the circulation of a relatively small number of influential policy documents.
The convergence finding has an uncomfortable implication. A corpus of documents that all agree AI requires human-centric, ethics-based, innovation-friendly governance may be producing the appearance of global consensus while leaving the hardest governance questions systematically underaddressed. Surveillance, autonomous weapons, labour displacement, and the concentration of AI capabilities in a small number of private actors appear in many strategies — but rarely as the primary frame. They are acknowledged, then set aside in favour of the optimistic consensus language that international frameworks reward.
Convergence on language does not mean convergence on commitment. The same words can mean different things in different legal and political contexts. The risk is that the appearance of global consensus on AI governance obscures the reality of highly divergent approaches to its most consequential applications.
Chart: Heatmap — top 20 most-cited terms across the corpus, grouped by theme cluster (rights/safety, innovation/growth, international frameworks, implementation). Countries on one axis, term clusters on the other. Colour intensity = frequency normalised by document length.
Data: AI Folio Corpus Metrics, text analysis of 104 national AI strategy documents.
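The length normalisation behind the heatmap can be sketched in a few lines. This is an illustrative sketch only: the AI Folio pipeline is not reproduced here, so the term list, the tokenisation, and the per-1,000-word normalisation are all assumptions for demonstration.

```python
# Illustrative sketch only. The term list, tokenisation, and
# per-1,000-word normalisation are assumptions, not the AI Folio method.
import re

TERMS = ["human-centric", "trustworthy", "ethics", "innovation"]  # hypothetical term cluster


def normalised_frequency(text, terms=TERMS, per=1_000):
    """Occurrences of each term per `per` words, so long and short
    strategies are comparable. Uses simple substring matching; a real
    pipeline would lemmatise and respect word boundaries."""
    words = re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())
    n_words = len(words) or 1  # avoid division by zero on empty input
    joined = " ".join(words)
    return {t: joined.count(t) * per / n_words for t in terms}


sample = "A trustworthy, human-centric approach balances ethics and innovation."
freqs = normalised_frequency(sample)  # each term appears once among 8 words
```

With one such row per country and one column per term cluster, the heatmap's colour intensity maps directly onto these normalised values rather than raw counts, which would otherwise favour longer strategy documents.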
Table 1 — Framework citation overview
Shared citation of these frameworks contributes to convergent language across strategies. Counts reflect the name strings as they appear in the documents, so "GPAI" and "Global Partnership on AI" are tallied as separate rows.
| Framework / forum | Strategies citing (of 104) |
|---|---|
| ISO | 78 |
| OECD | 59 |
| NATO | 42 |
| UNESCO | 35 |
| United Nations | 30 |
| G20 | 22 |
| GPAI | 18 |
| G7 | 13 |
| Council of Europe | 13 |
| Global Partnership on AI | 9 |
| EU AI Act | 8 |
| Bletchley Declaration | 3 |
| Hiroshima AI Process | 2 |