‘Putin said some years ago that whoever controls AI controls the world. So I imagine that they’re working very hard.’
Professor Hinton
The headline from BBC Economics Editor Faisal Islam’s discussion with Professor Hinton, widely regarded as the ‘godfather of Artificial Intelligence’, focused on his support for Universal Basic Income (UBI) in view of AI’s potentially significant effect on the labour market, which could see many people lose their current jobs. UBI is a concept with which we disagree: it would consign so many people to heavyweight welfare subservience, while leaving the wealth generated by AI to be enjoyed, or controlled, by the few.
However, Professor Hinton’s most serious warnings referred to the threat to international stability, as the above quotation shows. It’s therefore necessary to address not only the potential economic outcomes but also the strategic consequences of this swiftly advancing technology. Last week’s publication of the interim report, ‘International Scientific Report on the Safety of Advanced AI’, prepared for a conference opening in South Korea on Tuesday 21 May, will no doubt receive more attention over the coming days; its executive summary concludes that ‘a wide range of general-purpose AI trajectories are possible, and much will depend on how societies and governments act’.
The recent European tour undertaken by Xi Jinping was clear evidence of the real sense of autocratic purpose as he visited France, Serbia and Hungary before returning to meet Vladimir Putin in Beijing. The programme was clearly designed to boost China's influence in both economic and strategic respects, and the American decision to raise tariffs on Chinese imports coincided with it, accentuating the rising tensions between the two superpowers. I recall presentations given by Capital Economics fifteen years ago warning of the danger of huge trade imbalances between the United States and China, and now those chickens are coming home to roost.
Seen in this context, Professor Hinton's warnings call for a radically new approach to Artificial Intelligence, one in which both the polarisation of wealth and excess intermediation are addressed.
It’s now nearly four years since we set out clear proposals for the democratisation of technological wealth creation in our commentary ‘Dividends please, not Space Rockets’. Our focus was then purely economic, since the momentous significance of Artificial Intelligence development had yet to emerge. For example, the conversational language translation offered by the latest version of ChatGPT closely mirrors the Pentecost feats of the apostles described in the second chapter of the Acts of the Apostles, when their listeners asked, ‘how is it that each of us hears them in our native languages?’
Our more recent commentary ‘Copyright, & Ownership for All’ started to address the need for a wider application as the new technology becomes increasingly embedded in everyday life. It’s interesting to note that the potential reasons to overhaul 300-year-old copyright legislation are also touched on in the International Scientific Report: ‘An unclear copyright regime disincentivises general-purpose AI developers from declaring what data they use and makes it unclear what protections are afforded to creators whose work is used without their consent to train general-purpose AI models’.
‘Stock for Data’ is one of the central pillars of our drive to achieve a more egalitarian form of capitalism: the other is inter-generational rebalancing. This element of our Cambridge-based research is led by Dr. Heloise Greeff, who has developed a clear plan of approach intended to lead to the establishment of pilot operations with at least one of the tech giants.
Disintermediation is at the heart of this initiative: it not only seeks to provide a share in wealth creation in terms of capital gain and dividends (a much more participative approach than UBI), but also to involve individual stockholders in a share of the governance of these businesses. It therefore sees concepts like anti-trust regulation as more of a backstop than a first course of action, with that regulatory oversight vested in the United Nations.
However, it is the new dimension of strategic risk being heightened by Artificial Intelligence which introduces a real urgency.
One of the key aspects of ‘Stock for Data’ is that it should be introduced globally in order to deliver a fair outcome for all whose creativity is harvested by technology. This global approach would not be limited to western democracies, but would apply to the individual citizens of all nations, including China. While such governments may resist the impact of western-style democracy, they are unlikely to deny their citizens the opportunity to share in wealth creation.
Therefore, as the momentum for disintermediated participation grows, it will provide a platform for convergence for all people, in contrast to the retreat from globalisation currently under way across developed nations, including the United States. Over time this will lead to a safer world in which Artificial Intelligence develops for the good of humanity rather than becoming one of its major threats.
I am hoping to have a conversation with Professor Hinton to seek his views on this alternative: he clearly has a real determination to address the risks of Artificial Intelligence, and his detailed knowledge of the giant tech industry would help greatly in moving the ‘Stock for Data’ proposal forward.
In particular, it’s worth noting that the issue of stock dilution would pale into insignificance when compared with the threats of international instability and widespread poverty to which Professor Hinton refers. Disintermediated participation may sound complicated, but it can be delivered if the resolve is there to address these risks.
Gavin Oldham OBE
Share Radio
