Promises or Plans? Commitment Language in National AI Strategies
University of Edinburgh
National AI strategies are political documents as much as technical ones. They signal ambition, reassure publics, and set expectations for industry. But how much of what governments write actually commits them to anything?
Using keyword frequency analysis across the full text of 104 AI strategy documents, we measured the ratio of strong commitment language ("will", "shall", "must", "mandate", "require") against hedging language ("may", "consider", "explore", "aim to", "seek to"). The result is a Commitment Score for each document, ranging from 0 (entirely aspirational) to 100 (entirely directive).
The findings reveal striking variation — but the headline result is more encouraging than expected. Seventy-three of the 104 strategies analysed score in the "Committed" band, with a mean commitment score of 73.8 across the corpus. Only two strategies score in the "Aspirational" band, and 29 are classified as "Moderate."
The most committed strategies — Dominican Republic, Hungary, Indonesia, and Vietnam all score 100, with Finland close behind at 96.7 — are characterised by directive language and named implementation responsibilities. At the other end, Romania and Colombia score 0, producing documents that read as vision statements rather than governance plans. South Africa (44.4), Uzbekistan (50), and Uruguay (50) cluster in the moderate range.
The most committed documents share a common feature: they name specific institutions responsible for implementation and attach deadlines to commitments. The least committed share a different feature: they are rich in aspiration but thin on accountability. A strategy that "aims to explore the possibility of establishing a framework" creates no obligation and sets no baseline against which progress can be measured.
The regional pattern is suggestive. The top performers span Latin America, Southeast Asia, and Eastern Europe — a distribution that cuts across conventional assumptions about which regions lead on AI governance. High commitment scores do not necessarily correlate with implementation capacity; they measure what governments promise, not what they deliver. That gap is a question for future research.
This matters because commitment language predicts regulatory trajectory. As AI governance moves from strategy to regulation, the documents governments published in 2019 and 2020 will be the baseline against which their later choices are judged. Countries that made strong commitments will find it harder to walk them back. Countries that hedged everything have left themselves maximum flexibility — which may or may not be used.
Chart: Horizontal bar chart — top 5 and bottom 5 countries by commitment score, with the corpus mean (73.8) shown as a reference line.
Data: AI Folio Corpus Metrics, NLP analysis of 104 national AI strategy documents. Commitment score = strong commitment tokens / (strong + hedge tokens) × 100.
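The scoring formula above can be sketched in a few lines. This is a simplified illustration, not the original NLP pipeline: it uses the token lists named in the article and counts whole-word (or whole-phrase) matches, case-insensitively, which glosses over stemming and context handling a production analysis would need.

```python
import re

# Token lists as named in the article; matching is a simplified
# illustration (whole word/phrase, case-insensitive).
STRONG = ["will", "shall", "must", "mandate", "require"]
HEDGE = ["may", "consider", "explore", "aim to", "seek to"]

def count_tokens(text: str, tokens: list[str]) -> int:
    """Count whole-word or whole-phrase occurrences, case-insensitively."""
    return sum(
        len(re.findall(r"\b" + re.escape(t) + r"\b", text, flags=re.IGNORECASE))
        for t in tokens
    )

def commitment_score(text: str) -> float:
    """Strong tokens / (strong + hedge tokens) * 100, per the article's formula."""
    strong = count_tokens(text, STRONG)
    hedge = count_tokens(text, HEDGE)
    if strong + hedge == 0:
        return 0.0  # no commitment language at all; treated as fully aspirational
    return strong / (strong + hedge) * 100

# Hypothetical excerpt: 3 strong tokens (will, shall, require),
# 2 hedge tokens (may, explore) -> 3 / 5 * 100 = 60.0
example = (
    "The ministry will establish a regulator by 2025 and shall "
    "require annual audits. The government may also explore "
    "international cooperation."
)
print(round(commitment_score(example), 1))  # 60.0
```

Note the zero-denominator case: a document containing none of the listed tokens has no commitment language to score, so it bottoms out at 0 rather than raising an error.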
Figure 1 — Top and bottom 5 countries by commitment score
Corpus mean = 73.8. Top 5 (blue), bottom 5 (grey).
Table 1 — Commitment label distribution
| Label | Count |
|---|---|
| Committed | 73 |
| Moderate | 29 |
| Aspirational | 2 |