Balancing Aspirations and Practicality: AI Summit Expectations vs. Reality

Darshna Shah
8 min read · Sep 25, 2023

With just over a month to go until global leaders meet at Bletchley Park for the first AI Safety Summit (November 1st-2nd), pressure is mounting around the urgency to gain clarity over the governance of AI. Where better to host such an event than the home of the codebreakers who decrypted Enigma messages during the Second World War? It is great to see the UK spearheading developments, but this is no small feat: to develop a useful framework for mass adoption, interdisciplinary collaboration will be required to untangle the complexities of AI use.

AI-generated image of world leaders at Bletchley Park

Whilst generative AI models have been in development for a long time, the release of ChatGPT (powered by GPT-3.5) in November 2022 marked a pivotal moment when the world acknowledged this ground-breaking technology, unleashing an accelerated cascade of AI adoption and evolution. ChatGPT reached a million users in just five days, 15 times faster than Instagram, the second-fastest platform to reach one million users.

ChatGPT Statistics and User Numbers 2023 — OpenAI Chatbot (tooltester.com)

Recent research by McKinsey analysed 63 use cases and estimated that generative AI alone could add up to $4.4 trillion annually to the global economy, more than the UK's entire annual GDP. Customer operations, marketing and sales, software engineering and R&D account for approximately 75% of the value that generative AI use cases could deliver. Whilst all sectors will be impacted by generative AI, banking, high tech and life sciences are among the industries likely to be most significantly impacted as a percentage of revenue.

Why is the AI summit needed?

Microsoft, Google and other tech companies are rapidly working to build advanced AI systems, and whilst caution is exercised, there is also huge competitive pressure. Over the last few years, there has been impressive progress in AI's ability to do things that skilled, knowledgeable and educated humans would do. Relative to other species, higher cognitive power has enabled humans to shape the world to fit our own needs and desires. From diagnosing and curing disease efficiently, making more sustainable products by replacing environmentally unfriendly substances with synthetic proteins, onboarding and training new employees, educating students, summarising documents and drafting arguments for litigation, to generating personalised marketing email campaigns, creating new music and editing films, the possibilities with AI are vast, and if we get it right the benefits to humanity could be huge.

On the other hand, as AI capabilities improve and companies and governments pump billions of dollars into AI development, it is plausible that it could radically transform our society and displace humans as the most powerful beings on the planet. Cinema has been telling this narrative for decades through films like The Matrix, Blade Runner and The Terminator, and TV shows like Raised by Wolves, Westworld and Humans (this last one is brilliant if you haven't seen it). Whilst we may brush this off as fiction, more recently prominent AI research scientists, including the leaders of OpenAI, Google DeepMind and Anthropic, have come forward to proclaim that mitigating the risk of extinction from AI should be a global priority alongside other large-scale risks like pandemics and nuclear war. To delve further into how exactly human extinction could occur from AI development, 80,000 Hours has comprehensively covered AI risks and prevention in a 30,000-word article.

At the lower end of the AI risk spectrum, there is the need to consider safeguards against plagiarism, copyright violations and brand-recognition risks that infringe on intellectual property rights. At the higher end, there is the risk of bioweapons, terrorist attacks, displacement of millions of people, increased global inequality, exploitation of vulnerable people, destruction of democracy and loss of integrity of the critical infrastructure humanity relies on (e.g. the internet, financial systems, the electricity grid). Many of these concerns are not new, just amplified in the era of generative AI, where the combination of more data and more compute power accelerates and exacerbates their severity. For example, allegations that Apple's credit card algorithm discriminated against women led to an investigation by New York's Department of Financial Services. Similarly, Meta has been associated with gender discrimination in recruitment via its AI ad tooling. The COMPAS algorithm, which predicts the likelihood of criminals reoffending, has also shown racial bias. And the Cambridge Analytica scandal was a prominent example of the misuse of user data to influence the 2016 US election.

As AI grows more and more complex, trained on more and more data with billions of parameters, it becomes increasingly difficult to explain why AI models produce the results they do. There is a lot of uncertainty around how novel AI developments will affect our world as we know it today, but one thing that is certain is that AI is developing fast, and if we don't address risk management now, the consequences could be dangerous.

There is optimism in the growing number of safe-AI policies, regulations and AI safety research efforts. The EU, a pacesetter in tech regulation, is pushing ahead with the AI Act, while in the US the White House has published a blueprint for an AI Bill of Rights and the Senate majority leader, Chuck Schumer, has published a SAFE Innovation framework for developing AI regulations. Some researchers are attempting to improve the interpretability of models, but it is clear that, to be effective, multidisciplinary groups within society will need to come together to actively develop solutions. Similarly, Microsoft, Anthropic, Google and OpenAI have launched the Frontier Model Forum to advance AI safety research, identify best practices, collaborate with policymakers, academics, civil society and companies, and support efforts to develop applications that can help meet society's greatest challenges.

Who will be present at the AI summit?

The UK is a world leader in AI, ranking third behind the US and China. Our AI sector already contributes £3.7 billion to the UK economy and employs 50,000 people across the country, which has led some to suggest we could be well positioned to address accelerated changes and act as a bridge between Eastern and Western countries. With that said, the G7's Hiroshima AI Process marks a commitment to co-ordinate global regulation.

World leaders invited include French president Emmanuel Macron, Canadian prime minister Justin Trudeau and Ursula von der Leyen, president of the European Commission. US vice-president Kamala Harris will be representing President Joe Biden. China has also been invited to the summit, though some have raised concerns based on suspicions of spying activity earlier this year. Alongside world leaders, prominent tech leaders and researchers from companies such as Google DeepMind, OpenAI, Anthropic, Microsoft and Meta have also been invited.

Some academics have suggested that the summit's participants are not diverse enough and that most of the advice on regulation will come from the big tech companies themselves, who profit the most from the use of AI technology. Tech companies have criticised the EU's Artificial Intelligence Act for being too strict. Additionally, it is important to ensure different nations' regulations are interoperable, with the same standards of safeguarding in place. There are many complexities to consider, and a crucial balancing act to be achieved between safeguarding against the risks of AI and promoting innovation. Current political instruments might be too rudimentary in design for the complexity AI requires, and may themselves need reform and innovation.

What will be discussed and achieved?

Broadly, the AI summit will cover AI safety with consideration of the potential for misinformation, AI ethics, transparency and guardrails, the use of AI in warfare and more. For comparison, the EU AI Act will apply a five-tiered risk label to AI systems, where the higher the risk, the stricter the governance imposed. It also imposes a total ban on the use of facial recognition and biometrics, technologies which could be useful for counter-terrorism and policing. The Act will also mandate explicit reporting of training data based on the work of scientists and creatives, to mitigate the risk of copyright infringement. Sanctions for non-compliance could reach up to 7% of revenue.

These are all very pertinent issues to focus on initially, but other considerations should include how to create legislation that doesn't stifle creativity in AI. Also, 37% of the global population (around 3 billion people) have never used the internet, and far fewer countries have a national AI strategy. Consequently, these countries do not have adequate knowledge and resources to prioritise AI risks that may unfairly affect them, further growing inequalities. These are not simple issues to solve, as is evident from the equally challenging scenarios surrounding a just transition in sustainability, where changes implemented inadvertently have negative impacts for some people and benefits for others.

Historically, getting effective policy and regulation in place involves a long due process before it becomes active. It has been estimated that it could take up to three years to have AI safety regulation in place, which could be problematic given the rate at which the technology and mass adoption of AI are progressing. Whilst an ideal outcome of the summit would be an effective global AI safety framework that can be tested and iterated on as developments progress, a more likely outcome is some initial ideation with a roadmap of additional collaborative efforts to iron out the details.

Creating effective AI safety regulation whilst promoting innovation and equality is a challenging and complicated task, but we can have faith in an optimistic outlook given the global and multidisciplinary efforts already being propelled forward. On that note, I'll leave you with some useful references that explore how we can govern AI safety effectively and what we should consider.

References

Economic potential of generative AI | McKinsey

Preventing an AI-related catastrophe — 80,000 Hours (80000hours.org)

Governance of superintelligence (openai.com)

Our approach to AI safety (openai.com)

How do we best govern AI? — Microsoft On the Issues

Microsoft, Anthropic, Google, and OpenAI launch Frontier Model Forum — Microsoft On the Issues

Principle on robustness, security and safety (OECD AI Principle) — OECD.AI

Regulating AI in the UK | Ada Lovelace Institute

Ensuring-Safe-Secure-and-Trustworthy-AI.pdf (whitehouse.gov)

Majority Leader Schumer Delivers Remarks… | The Senate Democratic Caucus

EU moves closer to passing one of world’s first laws governing AI | Artificial intelligence (AI) | The Guardian

Bletchley Park to Host AI Safety Summit | Bletchley Park

UK to host AI safety summit at start of November | Financial Times (ft.com)

U.K. Announces AI Safety Summit (forbes.com)

Rishi Sunak considers banning Chinese officials from half of AI summit | Artificial intelligence (AI) | The Guardian

Britain sets priorities for November global AI safety summit | Reuters

A Short History Of ChatGPT: How We Got To Where We Are Today (forbes.com)

ChatGPT Statistics and User Numbers 2023 — OpenAI Chatbot (tooltester.com)

Thanks for reading! For more AI-related content, follow me on LinkedIn or Twitter (X).

