An attendee looks at an artificial intelligence service character displayed on a screen in Tokyo, Japan. April 4, 2018. (Kiyoshi Ota/Bloomberg)

Anthony Giddens is the former director of the London School of Economics and a member of the House of Lords Select Committee on Artificial Intelligence.

LONDON — In 1215, England adopted the Magna Carta to stop kings from abusing their power. Today, the new kings are big tech companies, and just like centuries ago, we need a charter to govern them.

The digital revolution is the greatest dynamic force in the world today. It affects everything from the intimacies of everyday life to geopolitical struggles, and it has unified the world in a way that was never possible before. Yet at the same time, it is fracturing and dividing. Artificial intelligence and the Internet are the twin driving forces of these changes.

The evolution of AI has already gone through two distinct stages and is today moving into a third. The first — from the pioneering computing efforts of Alan Turing during World War II until the late 1980s — was dominated by governments and academia. The second phase was the emergence of Silicon Valley — “move fast and break things.”

The third phase we are now entering must bring the state and the wider public domain back into the picture. For a while, the positive breakthroughs of digital technologies — greater connectivity among like-minded peers or distant scholars, big data analysis of the genetic code, the convenience of online shopping — took center stage. But the negative aspects have proven to be profound, even though they took time to surface. They include threats to the very tissue of democracy itself — online movements have come to challenge or even displace mainstream political parties. These are emerging at the same time as what look like dramatic advances in machine learning.

I have been pondering these transformations over the past six months while working as a member of the House of Lords Select Committee on Artificial Intelligence in the U.K. We have interviewed some 60 experts from different backgrounds in industry, academia and the think-tank world. We aimed to distinguish, as far as possible, the hype and the more remote, apocalyptic visions of digital transformation from the real dangers.

Two overlapping tasks — each complex and difficult — now face our governments and public agencies. We have to seek to repair the mistakes of the past while preserving dynamism and innovation — no easy task. But at the same time, we must ensure that the new wave of AI-driven innovation is handled in a more proactive fashion, not allowed to rush willy-nilly through our lives.

The committee’s report proposes a far-reaching series of reforms that seeks to find a new balance between innovation and corporate responsibility. It echoes and draws upon legislation already pioneered by the European Union and some national governments, much of which is being incorporated into British law.

We lay out an overall charter for AI that can frame practical interventions by governments and other public agencies. The main elements of that charter are that AI should:

  • Be developed for the common good.
  • Operate on principles of intelligibility and fairness: users must be able to easily understand the terms under which their personal data will be used.
  • Respect rights to privacy.
  • Be grounded in far-reaching changes to education. Teaching needs reform to utilize digital resources, and students must learn not only digital skills but also how to develop a critical perspective online.
  • Never be given the autonomous power to hurt, destroy or deceive human beings.

These principles form the basis of a cross-sector AI code that should be developed both nationally and internationally. The committee calls for radical intervention to help break down digital corporations’ data monopoly and allow individuals greater personal control over their data and how it is deployed.

A range of policies is suggested as to how such aims can be achieved in a manageable and practical way. For example, the British government has already accepted that data trusts should be set up to share data ethically. A key issue here is how to restructure the National Health Service (NHS). Patient privacy must be reconciled, for example, with the use of NHS data for research purposes and the exchange of data between medical specialists. We emphasize that such trusts should incorporate direct citizen representation and consultation. Within the U.K. at least, these principles and proposals should secure a wide measure of cross-party support.

We also addressed concerns on the geopolitical front, where domestic regulations intersect with the practices of other nations. Fake news is not only a deep structural problem in the digital age; it has been directly weaponized by Russia and other countries. Forging international agreements over AI is likely to be difficult but extremely important. China, which uses digital tools and social media to further political aims, has the most powerful array of supercomputers in the world and is close to assuming the lead in developing AI. Our report concludes by proposing that a global summit of political leaders should be urgently organized to develop a common framework for the ethical development of AI at the global level.

The advantages of the digital revolution have been huge and have reshaped our lives, in many respects for the better. As in previous technological revolutions, societies must find a way to reap the benefits of innovation while containing the problems and hazards. A charter that protects the rights and liberties of citizens — a Magna Carta for the digital age — is the place to start.

This was produced by The WorldPost, a partnership of the Berggruen Institute and The Washington Post.