Artificial intelligence: The impact of hype, economics and law
Artificial Intelligence (AI) continues to be a subject dominated by hype across the globe. According to McKinsey’s Technology Trends Outlook 2024, 2023 saw $36 billion of equity investment in generative AI, while $86 billion was invested in applied AI [1]. The UK AI market is currently worth in excess of £16.8 billion and is forecast to exceed £801.6 billion by 2035 [2], reflecting the sizeable economic and technological traction AI is gaining across sectors.
Through the application of computer vision technology, for example, Marks and Spencer saw an 80% reduction in warehouse accidents within 10 weeks: just one of many ways in which AI is making a difference [3]. It remains to be seen, however, whether coordinated governance can allow innovation to thrive whilst maintaining cross-sector compliance.
Whilst the United Kingdom’s wider ambition is to be an AI superpower, there has been continued debate and scrutiny over what constitutes effective AI regulation and how successive iterations of such regulation can remain aligned with key principles of law.
The United Kingdom’s vision for AI
Back in 2023, the then Conservative government, now in opposition, published its white paper, A Pro-Innovation Approach to AI Regulation. The plans outlined a principles-based approach to governance, with enforcement delegated to individual regulators.
While at the time the UK’s approach and existing success in AI were attributed to effective regulator-led enforcement combined with technology-neutral legislation and regulation, the pace of AI development highlighted gaps – both opportunities and challenges – that would require addressing.
In the run-up to the 2024 UK General Election, regulation featured prominently in the Labour Party’s manifesto under the “Kickstart economic growth” section, with the now-incumbent government seeking to strengthen AI regulation in specific areas.
Keir Starmer – both prior to and post-election – emphasised the need for tougher approaches to AI regulation through, for example, the creation of a Regulatory Innovation Office (RIO) [4]. The RIO would, inter alia, set targets for technology regulators and monitor decision-making speed against core international benchmarks, while providing guidance in line with Labour’s higher-level industrial strategy.
The RIO, however, is not a new AI regulator: it will still be up to existing regulators to address AI within their specific fields. It also remains to be seen how the RIO would differ from the AI Safety Institute, the first state-backed organisation advancing AI safety, established by the Conservative government at the beginning of 2024 [5].
In addition to a new regulatory office, the planned National Data Library initiative aims to bring together existing research programmes and data-driven public services, with strong safeguards and public benefit at its heart [4].
Wider issues in regulating AI
Government plans and economic potential aside, there are increasing expectations that AI will solve the most pressing issues facing humanity. The pace of development, however, exposes a wider, endemic issue: digital technologies challenge the functioning of law faster than it can adapt. In the long run, a regulatory approach that is both proportionate and future-proof will be required, regardless of where in the world it is developed.
To start with, defining AI is not straightforward: there is no widely accepted definition, and because various strands of science are affected either directly or indirectly by AI, there is a risk of each field creating its own individualised definition. Moreover, differing conceptions of intelligence could result in varying definitions of AI, even before the technological scope is considered.
Add to the mixture the fields of computer science and informatics – neither of which is directly mentioned in the EU AI Act, for example – and the lack of a commonly agreed technical definition of what AI is or could be becomes apparent. From this follow general and theoretical questions about how such a definition could be moulded into a legal one.
If, for example, the principles of legal certainty and the protection of legitimate interests are taken as benchmarks, existing definitions of AI do not satisfy the key requirements for legal definitions. The result is definitions that are ambiguous and of debatable practicability, creating a bottleneck in formulating domestic, let alone international, AI regulation.
What is ultimately important is that any regulatory goal is aligned with the values of fundamental rights and the concrete protection of legal rights. Take the precautionary principle – an approach to risk management – which holds that if a policy or action risks causing harm to the public and there is no scientific consensus on the issue, the policy or action in question should not be carried out.
Applying this to AI becomes problematic, as the effects in many cases are either not yet assessable or, in some cases, not assessable at all. If a risk assessment is then carried out according to the proportionality principle – under which the legality of an action is determined by the balance between its objective, its means and methods, and its consequences – the limited factual knowledge available makes acting on such an assessment increasingly challenging.
Instead, it is at the intersection of technical functionality and the context of application that a risk profile of an AI system can be obtained; yet even then, from a regulatory perspective, these systems can differ vastly in risk profile.
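To illustrate this intersection, below is a minimal sketch of how a risk profile might be derived from both dimensions. Every category name, score and threshold here is a hypothetical illustration for the sake of the argument, not a reflection of any actual regulatory framework.

```python
from dataclasses import dataclass

# Hypothetical risk scores (0-3) for each dimension. These categories
# and weightings are illustrative only; no regulator prescribes them.
CAPABILITY_RISK = {
    "rule_based": 0,
    "predictive_model": 1,
    "computer_vision": 2,
    "generative": 3,
}
CONTEXT_RISK = {
    "internal_analytics": 0,
    "warehouse_safety": 1,
    "credit_scoring": 2,
    "medical_diagnosis": 3,
}


@dataclass
class AISystem:
    name: str
    capability: str  # technical functionality
    context: str     # domain of application


def risk_profile(system: AISystem) -> str:
    """Combine technical and contextual risk into a coarse profile.

    The same capability lands in a different tier depending on where
    it is deployed -- the 'intersection' described above.
    """
    score = CAPABILITY_RISK[system.capability] + CONTEXT_RISK[system.context]
    if score >= 5:
        return "high"
    if score >= 3:
        return "limited"
    return "minimal"


# Identical computer vision technology, vastly different risk profiles:
print(risk_profile(AISystem("safety-cam", "computer_vision", "warehouse_safety")))   # limited
print(risk_profile(AISystem("triage-aid", "computer_vision", "medical_diagnosis")))  # high
```

The point of the sketch is precisely its fragility: a single additive score cannot capture how context reshapes risk, which is why a one-size-fits-all statutory classification struggles in practice.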
Conclusion
The versatility of AI systems will present a range of opportunities and challenges depending on who uses them, what purposes they are used for and the resulting risk profiles. Attempting to regulate AI – frankly, an entire phenomenon with an ever-expanding range of use cases – through a single, generalised Artificial Intelligence Act will not work.
Instead, deep-diving into the characteristics and use cases of the differing algorithms and AI applications is more important, and is strategically more likely to result in effective, iterative policymaking that benefits both society and innovation.
Bibliography
[1] McKinsey (2024). McKinsey Technology Trends Outlook 2024. [online] Available at: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-top-trends-in-tech#/.
[2] Hooson, M. (2024). UK Artificial Intelligence (AI) Statistics And Trends In 2024. [online] Forbes Advisor UK. Available at: https://www.forbes.com/uk/advisor/business/software/uk-artificial-intelligence-ai-statistics-2024/.
[3] Protex.ai (2023). Marks and Spencer reduced incidents by 80% in their first 10 weeks of deployment. [online] Available at: https://www.protex.ai/case-studies/marks-and-spencer [Accessed 5 Sep. 2024].
[4] The Labour Party (2024). Kickstart economic growth. [online] Available at: https://labour.org.uk/change/kickstart-economic-growth/#innovation [Accessed 30 Aug. 2024].
[5] AI Safety Institute (2024). The AI Safety Institute (AISI). [online] Available at: https://www.aisi.gov.uk [Accessed 30 Aug. 2024].