ChatGPT and Artificial Intelligence (AI) have been dominating the news agenda recently, with significant moments such as OpenAI chief executive Sam Altman facing questions at a US Senate hearing in May. This came hot on the heels of the ‘Godfather of AI’, Geoffrey Hinton, leaving Google and telling the press of his regret about devoting his career to the technology.
When it comes to regulation, countries are taking different stances. While the UK plans a more proportionate approach to AI regulation in an attempt to encourage innovation, the EU has instead sought to implement more prescriptive, product-safety-focused legislation, which imposes a detailed set of technical and organisational requirements on developers and users of ‘high-risk’ AI. However, the EU’s AI legislation is unlikely to come into force before 2026 at the earliest, leaving blurred lines for businesses navigating and preparing for the variations between UK, EU and US regulation.
With each industry in the same boat, should data leaders be taking matters into their own hands?
What are the impending risks?
All eyes are on AI. Such explosive growth can be hard to fathom, and key figures in the AI space are now confronting privacy and security implications that were perhaps not front of mind before. AI systems rely on vast amounts of data, and if appropriate protections are not in place, that data is vulnerable to breaches and fraud.
Like most technological advancements, innovation in AI is developing at a much faster pace than regulation can match. It also often takes a period of real-world use to identify where the issues lie, and that is exactly the stage AI is at now.
Everyone wants to test, investigate and ‘risk-proof’ it, particularly given its vast potential.
The challenge will be finding the balance between regulation and technological evolution. Clamping prohibitive legislation around a still-developing technology will stifle its growth and inhibit its potential. On the other hand, without safety measures, you are essentially opening the door and welcoming in risk.
There is no doubt, though, that AI holds great potential for most sectors: from life-saving applications in healthcare to smaller businesses, where it can significantly reduce costs and save time, thereby boosting efficiency.
Why is regulation so crucial?
Headlines and media buzz – although important for awareness – are dangerous in that they can easily whip up fear. Regulation will aim to ensure that AI is developed, deployed, and used in an ethical manner, with appropriate consideration given to privacy and data protection. It will also help to set up parameters to prevent the misuse of AI that could lead to discrimination and other unethical practices. With AI’s transformative potential extending to the wider economy and the job market, regulation will serve to mitigate pitfalls and any negative impact on employment, ensuring fair labour practices.
The goal should be to find a way to reap the benefits while building data governance and safety policies into the very structure of AI development, rather than implementing them as an afterthought.
This requires collaboration between policymakers, industry experts and researchers to develop agile, adaptive and forward-thinking regulatory frameworks that keep pace with the rapidly evolving landscape. We should be sharing experiences to devise frameworks that work across the board; rather than waiting for official legislation to catch up, data leaders should take matters into their own hands and implement interim self-regulation.
For this to work, it is important to open up easy lines of communication between industry leaders, who too often operate in silos, sharing information only after an incident such as a data breach.
Put simply, the universal impact of AI requires a universal response.