India’s AI Summit showed no clear route to balanced, people-centred global governance
A defining feature of last week’s AI Summit in India was how crowded it felt, with people, platforms and conflicting agendas. It showcased India’s prowess in the AI race, but did little to advance meaningful international governance.
The biggest constraint was not the numbers in attendance but the lack of buy-in. The two dominant AI powers – the US and China, which together account for the vast majority of AI patents and frontier capability – have not signed up to a shared global governance approach that would impose binding obligations on their firms and national strategies.
So where does this leave the world’s “middle powers”? India, for example, is understandably focused on AI sovereignty: reducing dependence on US or Chinese Big Tech, building resilience in its own tech sector, and protecting its manufacturing and services base from disruption.
India is also positioning itself as a credible alternative to both China and the US. The summit showcased domestic models running on domestic cloud infrastructure – a potentially more equitable pathway if pursued in cooperation with other countries.
But as often happens, innovative new alternatives get walled in by incumbent monopolists. In New Delhi, major firms including Microsoft, Google and OpenAI outlined expanded commitments in India. Microsoft announced it was on pace to invest $50 billion in AI in the Global South by 2030, while domestic giants Reliance Industries and Adani announced plans to inject $100 billion into data centres.
These “Big Tech partnerships” with governments bear an uncomfortable resemblance to climate summit dynamics. The language is one of public benefit, but the underlying distribution of power changes little. In climate policy, the criticism is “greenwashing”: deals between governments and fossil fuel giants that leave the system of lock-in intact. In AI, these deals smack of “sovereign washing”: agreements presented as building national capability, but structured so that Big Tech captures value from digital public infrastructure, data ecosystems and the surrounding markets.
They also create a foothold from which Big Tech can consolidate power and influence. And yet the structural risks of market concentration were not a central theme of India’s AI Summit, and still lack serious debate among policymakers internationally.
We now need to think about what forms of regulation are best suited to govern this nascent technology. Yet there was little attention to AI safety compared with previous summits, despite the evidence base set out in the international AI safety report published earlier this year by a group of leading scientists and researchers.
A priority for future summits should be placing meaningful work on AI governance in the hands of scientists, policymakers and civil society actors who — unlike today’s established powers, including the US — recognise the need for global governance of AI that works for the common good. This is summed up effectively by Anita Gurumurthy and Nandini Chami at IT For Change:
“We are of the view that an internationally binding framework that centres a commons-based approach to the governance of training data – covering the entire lifecycle of AI, including innovation capability… As an innovation built on the collective resource of social data, AI cannot be allowed to be monopolised by a few.”