AI regulation: What does the future hold?

As one of the most widely debated and contentious topics of our time, the question of whether Artificial Intelligence (AI) can be regulated will remain open for many years to come.

Since the field's inception at a Dartmouth College conference back in 1956 [1], and with ChatGPT (Chat Generative Pre-trained Transformer) dominating the headlines since its unveiling just over a year ago, a key question emerges: can AI really be regulated, or is making the use of AI more transparent a more realistic and sustainable option?

The AI trustworthiness landscape: public perception and economics

When looking at the wider picture of trust, there isn't solely intercontinental variance in trust in AI; there are also variances linked to the likes of generational and educational background. The Mitre Harris poll, for example, surveyed US adults on their level of trust in AI whilst also examining the gap between US adults and technology experts.

Alarmingly, concern over the lack of transparency in AI among US adults increased from 69% in 2022 to 77% in 2023 [2]. And when US adults' trust levels were compared with those of tech experts on whether they'd be comfortable with the likes of government agencies using AI for decision-making, only 37% of US adults were content, compared to 65% of tech experts [2].

Aside from the larger question of whether regulating AI is achievable, the Mitre Harris trust gap raises a thought-provoking sub-question: is the creation of privacy legislation resonating with consumers on the ground? Although some thoughts were covered in the article AI's influence and the future of privacy, in summary: explaining in a prompt, before encouraging users to hit the accept button, why particular pieces of data are being collected may be one of numerous steps to help consumers better understand the destination of their data.
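
As a loose illustration of that idea (the field names and purposes below are entirely hypothetical, not drawn from any real product), a minimal sketch of a consent prompt that states the purpose of each data item before the user accepts might look like this:

```python
# Hypothetical sketch: surface the purpose of each collected data item
# before the user is asked to hit "Accept". Field names and purposes
# are illustrative only.

CONSENT_REQUESTS = [
    {"field": "email_address", "purpose": "account recovery and receipts"},
    {"field": "approximate_location", "purpose": "regional content and pricing"},
    {"field": "usage_analytics", "purpose": "improving responses over time"},
]

def render_consent_prompt(requests: list[dict]) -> str:
    """Build a plain-language consent prompt listing why each item is collected."""
    lines = ["We'd like to collect the following, and here's why:"]
    for item in requests:
        lines.append(f"  - {item['field']}: {item['purpose']}")
    lines.append("Press Accept to continue, or Manage to choose per item.")
    return "\n".join(lines)

print(render_consent_prompt(CONSENT_REQUESTS))
```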

More broadly, from a country-wide perspective, among the emerging BICS economies (Brazil, India, China, and South Africa), India has the highest proportion of individuals willing to both trust (75%) and accept (67%) AI technologies [3].

The higher uptake within India and the likes of China (67% willing to trust and 66% willing to accept) [3] isn't surprising: China – through its New Generation Artificial Intelligence Development Plan, launched back in 2017 – is racing against the likes of the US to become the next AI superpower. Furthermore, when it comes to the adoption and uptake of emerging technologies as a whole, the BICS economies lead compared to their continental counterparts.

Comparatively, and somewhat concerningly, within the United Kingdom (UK) only 34% of respondents are willing to trust and 20% willing to accept AI technologies [3]. These figures may present the UK as significantly lagging behind other countries in uptake, but the reality is anything but.

The National AI Strategy – a governmental framework published back in September 2021 to showcase and develop the UK's prominence in AI on the world stage – presented the UK as an AI tour de force, ranking 3rd in the world for AI publication citations per capita [8] [9], as well as second only to the USA in attracting mobile AI research talent [10].

In addition to these, the UK enjoys some of the strongest privacy laws in the world through the General Data Protection Regulation (GDPR). Why the sparkling statistics of the UK's AI prominence aren't being reflected in consumer perspectives is a separate question that'll be covered in another article. Next up: exploring the catalysts behind regulation.

What’s fuelling the regulatory push?

It's a tangle of factors, but here are a couple. One is ChatGPT: a Large Language Model (LLM) chatbot introduced by OpenAI back in November 2022.

This particular chatbot has undoubtedly transformed society's perception of AI, both in its current capabilities and in its potential, and as of November 6th, 2023 it has surpassed 100 million weekly active users [4]. It has also opened the treasure chest of Generative AI – the ability for users to type an input prompt and receive output in the form of text, imagery, or sound – an area bringing with it sizeable economic potential.
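
To make that prompt-in, content-out loop concrete, here's a minimal sketch using OpenAI's Python SDK. It assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set; the model name is illustrative and may need swapping for whichever model is currently available.

```python
# Minimal sketch of the Generative AI loop: a text prompt in, generated text out.
# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarise the EU AI Act in two sentences."}],
)

print(response.choices[0].message.content)  # the generated text
```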

On the economic side, McKinsey, for example, forecasts an estimated yearly contribution of between $2.6 and $4.4 trillion [5]. For comparison, the United Kingdom's entire GDP in 2021 was $3.1 trillion [5]. In other words, there's clear potential for those who can jump on, stay on, and ride the Generative AI wave.

However, on a more holistic level, another, larger area fuelling the regulatory drive is simply the breakneck speed of development. AI is bringing plenty of innovation to the table, but the supersonic engines bolted on aren't leaving contrails for many to follow and understand.

In addition, to take just one of many insight reports and surveys, acceptance of AI varies widely. A recent KPMG survey, for example, found that 67% of respondents have a low to moderate acceptance of AI [3] and that only half believe the benefits of AI outweigh the risks [3].

When it comes to Generative AI, there are also heavily contrasting views, with a recent Capgemini study finding that 67% of consumers globally could benefit from receiving Generative AI-based medical diagnosis and advice [6]. In addition, 53% of respondents warmed to trusting AI with financial planning [6].

Despite solid uptake in these areas, when it comes to awareness of potential risks the numbers are revealing: 49% of respondents aren't concerned by Generative AI creating fake news, and only 33% are worried about copyright [6]. How these numbers shift as companies work to clarify the risks will be interesting to watch.

Regulating AI: The broader context

With the motivation to regulate AI explored in the section above, it's time to look at what regulating AI really means. Taking control, managing risk, and balancing innovation against privacy are all part of the story, and together they paint a picture of immense complexity.

The UK, for example, has already taken steps to allow for an innovation-based agenda within AI whilst balancing the undesirable, and often hidden, effects AI can leave behind. The UK National AI Strategy outlines numerous interdisciplinary actions and strategies to promote and understand AI's long-term effects, including, for example, the formation of cross-government standards for transparency in algorithms.

However, it's not only the UK that's been taking rigorous steps in the shift towards regulation: the European Union is proposing the AI Act, a first in AI law that aims to allow AI's social and economic benefits to shine through while putting the brakes on should AI bring harm [11].

From the steps taken above, determining who's best positioned to regulate AI will be an important part of the process. Realistically, regulation wouldn't come from a singular body but from collective efforts across industry, academia, and government.

Industry-academia relations have been – and continue to be – invaluable during educational reforms, but the link will become increasingly important as a new wave of legal and technical know-how is developed to ensure that AI isn't pursued purely for the potential pot of gold at the end of the rainbow.

Public opinion continues to lean towards collective efforts as opposed to a single entity: in the KPMG 2023 Insights Survey, respondents felt most confident in academia and the defence industry for the development, governance, and use of AI (sitting between 76% and 82% confidence) [3], whereas a third of respondents lack confidence in government and industry across the AI lifecycle, from development to use [3].

The latter finding is somewhat concerning, especially as industry-academia relations provide the linchpin of knowledge that underpins the advancement of AI technologies in society.

So, can AI be regulated?

Combining all of the above, this last part turns to the area where debate and controversy reach their peak: whether regulating AI is possible at all. Although the potential for AI regulation will be covered more deeply in another paper, a couple of considerations are highlighted below.

An important point to explore is whether AI is understood well enough for both standardisation and legal principles to be developed fairly and robustly. Quite apart from the sizeable complexity of AI technologies as a whole, there isn't a singular degree of risk with this tool; achieving tighter controls that adapt to the varying degrees of risk will therefore be a key step in determining the effectiveness of any future regulation.

For example, the European Parliament's AI Act breaks down rules by risk level: unacceptable risk, high risk, Generative AI, and limited risk [11]. From this, striking the balance of neither over- nor under-regulating will be critical to ensure innovative practices can be fostered.
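
As a rough illustration of that tiered approach, the sketch below encodes the four categories from the Parliament's summary of the Act; the example systems and their mapping to tiers are hypothetical, included only to show how risk-proportionate rules might be expressed.

```python
# Illustrative sketch of the AI Act's tiered approach. The tiers mirror the
# European Parliament's summary; the example systems are hypothetical.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    GENERATIVE_AI = "transparency requirements, e.g. disclosing AI-generated content"
    LIMITED = "minimal transparency obligations"

# Hypothetical mapping from system type to tier, for illustration only.
EXAMPLE_SYSTEMS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening for recruitment": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.GENERATIVE_AI,
    "spam filter": RiskTier.LIMITED,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```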

In addition, regulating requires understanding, and there's concern about whether AI is understood well enough to lay down such frameworks: an area recently explored at the UK Government's AI Safety Summit at Bletchley Park [12].

The elephant in the room is the debate on whether regulating AI is really possible. Although the European Union has taken steps to get the ball rolling, AI is moving at supersonic – yes, supersonic – speeds, so much so that it’s zooming past government, industry, and academic expertise.

In the time it takes to write a word of a white paper, AI is no longer visible in the distance. AI regulation will shift very much like tectonic plates, and the outcome at the other end is covered in dense fog.

Final thoughts

Creating AI regulation will be a long road with many yards of controversy and heated discussion ahead. It’ll also require more than collective efforts to ascertain whether it’s something that’s even possible.

Whether the technologies can be understood, quantifiably formulated, and managed at an exceptional pace will be critical going forward. Ensuring balance will be equally important, however: allowing companies to push the boundaries of AI to deliver groundbreaking cancer care and environmentally sustainable supply chains, and allowing teachers new and intuitive ways to deliver content to the next generation of learners, will remain an integral part of this new era of digital transformation for years to come.

Bibliography

[1] Founding of AI: Lewis, T. (2014). A Brief History of Artificial Intelligence. [online] Live Science. Available at: https://www.livescience.com/49007-history-of-artificial-intelligence.html.

[2] Mitre Harris AI Survey: MITRE-Harris Poll Survey on AI Trends: Exploring Public Perceptions and Trust of Artificial Intelligence. (2023). Available at: https://www.mitre.org/sites/default/files/2023-09/PR-23-2865-MITRE-Harris-Poll-Survey-on-AI-Trends.pdf.

[3] KPMG Insights Trust in AI Survey: Gillespie, N., Lockey, S., Curtis, C., Pool, J. and Ali Akbari (2023). Trust in Artificial Intelligence: A Global Study. doi:https://doi.org/10.14264/00d3c94.

[4] ChatGPT Number of Users: TechCrunch (2023). OpenAI's ChatGPT now has 100 million weekly active users. [online] Available at: https://techcrunch.com/2023/11/06/openais-chatgpt-now-has-100-million-weekly-active-users/.

[5] McKinsey Generative AI Economic Potential: McKinsey (2023). The economic potential of generative AI: The next productivity frontier. [online] www.mckinsey.com. Available at: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier.

[6] Capgemini Generative AI Trust: Capgemini. (n.d.). 73% of consumers globally say they trust content created by generative AI. [online] Available at: https://www.capgemini.com/news/press-releases/73-of-consumers-globally-say-they-trust-content-created-by-generative-ai/.

[7] ICO High Risk Processing: ico.org.uk. (2023). Examples of processing 'likely to result in high risk'. [online] Available at: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/accountability-and-governance/data-protection-impact-assessments-dpias/examples-of-processing-likely-to-result-in-high-risk/.

[8] National AI Strategy: Investment into AI Companies 2020: National AI Strategy. (n.d.). Available at: https://assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf (Page 18).

[9] National AI Strategy: AI Publications Per Capita: Artificial Intelligence Index Report 2021. (n.d.). Available at: https://aiindex.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf.

[10] Immigration Preference of AI Researchers: global.upenn.edu. (n.d.). The Immigration Preferences of Top AI Researchers: New Survey Evidence | Penn Global. [online] Available at: https://global.upenn.edu/perryworldhouse/news/immigration-preferences-top-ai-researchers-new-survey-evidence [Accessed 6 Dec. 2023].

[11] EU AI Act: European Parliament (2023). EU AI Act: First Regulation on Artificial Intelligence. [online] www.europarl.europa.eu. Available at: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.

[12] AI Safety Summit: Bletchley Park. (2023). Bletchley Park makes history again as host of the world's first AI Safety Summit. [online] Available at: https://bletchleypark.org.uk/bletchley-park-makes-history-again-as-host-of-the-worlds-first-ai-safety-summit/ [Accessed 7 Dec. 2023].