Regulation of artificial intelligence (AI) has become increasingly important as the technology continues to advance and permeate various aspects of society. The purpose of AI regulations is to establish guidelines, frameworks, and standards that ensure the responsible development, deployment, and use of AI systems.
It is important to note that AI regulations can vary significantly between countries and regions, reflecting different legal frameworks, cultural values, and policy priorities. Keeping up with the evolving landscape of AI regulations requires monitoring developments at both national and international levels.
Such was the topic discussed at Asia Tech x Singapore (ATxSG), held at the Singapore Expo from 7 to 9 June 2023. The event serves as an unparalleled platform where visionaries, experts, and enthusiasts from diverse sectors come together to explore the latest tech trends, tackle pressing challenges, and unlock countless opportunities.
Breaking Barriers: A Global Dialogue on AI Risk Policy and Regulation
Across the globe, a wave of new AI models has surfaced, prompting nations, regions, and international entities to grapple with the crucial question of AI regulation. It is evident that the prevailing mechanisms are inadequate to address the complexities at hand. In this context, it becomes paramount to examine how various countries approach the regulation of AI.
Notably, Singapore has taken a pioneering step in this direction with AI Verify, an AI governance testing framework and toolkit developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC). Meanwhile, in the United States (US), the National Institute of Standards and Technology (NIST) has unveiled its highly anticipated Artificial Intelligence Risk Management Framework. Furthermore, the European Union (EU) is diligently working to finalise its inaugural legal framework on AI, which adopts a risk-based approach and prohibits certain AI systems outright.
As these developments unfold, it becomes crucial to examine the landscape of AI regulation, both through existing laws and recent enactments.
This was discussed at length during two panel discussions held on the last two days of ATxSG 2023. The first, titled "Global Comparative Perspectives on Regulating AI", brought together industry leaders including Jason Tamara Widjaja, Director of Artificial Intelligence at MSD; PeiChin Tay, Senior Policy Advisor at the Tony Blair Institute for Global Change; Jason Grant Allen, Director of the SMU Centre for AI & Data Governance; and Lian Jye Su, Chief Analyst, Applied Intelligence at Omdia. The session was moderated by Andrew Staples, Regional Head (APAC), Policy & Insights at Economist Impact.
The second panel discussion, titled "AI Risk Policy & Regulation ─ What to Look Out for in 2023", featured prominent experts including Irakli Beridze, Head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI); Jason Grant Allen, Director of the SMU Centre for AI & Data Governance; and Simon Chesterman, Vice Provost and Senior Director (AI Governance) at the National University of Singapore (NUS). The session was moderated by Neha Dadbhawala, Director of Martech at McAfee.
Current AI Regulatory Landscape has Garnered Significant Attention
In less than a decade, AI has transitioned from a niche interest to an integral part of our daily lives. Consequently, policymakers have displayed increasing interest in this domain. Unsurprisingly, major players in the realm of AI regulation include China, Canada, the EU, the United Kingdom (UK), and the US. "While many of these areas are still evolving, we can observe emerging clusters along the spectrum between risk minimisation and benefit maximisation," PeiChin Tay explained.
She continued that on one end of the spectrum, the US and the UK take a similar approach driven by their emphasis on economic growth and business innovation. On the other end of the spectrum, Canada and the EU prioritise safeguarding fundamental human rights and minimising harm, all while nurturing innovation. Situated in the middle, China introduces additional layers of focus on information control, surveillance capabilities, and societal safety and security, aiming to support its businesses within that framework.
To elaborate further, the US and UK adopt a contextual governance approach, with the UK's white paper outlining a pro-innovation stance and a cross-sector, context-specific, principles-based framework. Building upon this foundation, regulators will develop specific rules tailored to their respective domains. In contrast, the EU pursues a horizontal, risk-based approach in the tradition of the influential General Data Protection Regulation (GDPR), prioritising rights protection and harm mitigation without hindering business growth. Risks are categorised as unacceptable, high, limited, or minimal.
Canada shares a similar risk-based approach. Notably, India currently does not fall within this spectrum, choosing a light-touch approach to generative AI and abstaining from enacting specific legislation. However, it is important to acknowledge that these dynamics are still evolving and emerging.
Understanding the Various Actors, Their Motivations, and the Range of Regulatory Approaches is Crucial
Jason Grant Allen added that as we explore government regulation in the classical sense, it is essential to consider not only governmental actors but also intergovernmental actors, corporates, and the industry itself within the broader definition of governance.
One notable complement to governmental regulation is industry self-regulation, through voluntary standards and codes of ethics. Civil society bodies like the Tony Blair Institute, academia, and universities also play a significant role in shaping regulatory ideas. The motivations and drivers of these actors differ from those of governments. National security, geopolitics, and economics often serve as important backdrops in this realm. We observe a commercial climate and an arms race among big-tech players in the development and deployment of AI technologies. Additionally, the nuclear arms race, particularly in the Indo-Pacific region with its critical supply chains, adds further dimensions to these drivers and motivations.
It is crucial to acknowledge that many actors advocating for regulation are genuinely motivated by a desire for ethical, responsible, and trustworthy AI systems that serve the greater good of humanity. Corporations are driven by the need to gain and maintain trust from both governments and consumers. There is a strong desire to harness beneficial innovation through regulation, especially in technologies that have potential risks of harm. The idea of a well-regulated technology sector contributing to the overall betterment of humanity is prevalent.
As we zoom out and adopt an academic perspective, we can explore a typology of different regulatory approaches. This spectrum ranges from soft self-regulation at one end, through quasi-regulation in the middle, to enforced international standards and more rigid regulation at the other, as seen in the EU and China, for example. It's important to recognise that technology itself is not the sole focus of regulation; there is a socio-technical aspect to consider.
Different modes or types of regulation may be suitable for regulating specific elements, such as robustly regulating the data set, while employing alternative approaches for other aspects.
We are Still in the Early Stages of Developing a Comprehensive Governance and Regulatory Framework for AI
As for how close we are to achieving a practical global governance and regulatory framework, Lian Jye Su pointed out that this is a critical question that requires us to examine different phases. Currently, there are robust data regulations in place, with state actors emphasising the importance of data localisation and anonymisation to protect privacy and ensure jurisdictional control. However, there is still room for improvement, particularly in clarifying data ownership and creating explicit guidelines for data usage.
In terms of intellectual property (IP), there is ongoing discussion surrounding the need for robust safeguards and firewalls to protect valuable assets. While progress has been made, there is still work to be done in defining clear boundaries and addressing potential disputes. It is essential to strike a balance between encouraging innovation and safeguarding IP rights.
When it comes to the ethical considerations of AI, governments are grappling with various approaches. The focus often oscillates between assessing risks and maximising benefits. It is important to consider how these approaches shape policy decisions and the regulations that follow.
There are proponents of auditing AI systems, advocating for independent assessments to ensure compliance and mitigate risks. However, it is important to acknowledge the complexity of AI technology, which requires specialised knowledge and expertise. Keeping up with the rapid pace of AI advancements makes it challenging to have a team of auditors who possess comprehensive understanding and can provide reliable advice.
Furthermore, as AI evolves, the challenges expand beyond individual systems and encompass the integration of diverse data sources on a national and international scale. Ensuring compliance and maintaining a holistic view of AI applications becomes increasingly complex, considering potential data linkages and cross-functional implications.
Organisations Need to Actively Innovate and Operationalise These Initiatives
“Rather than solely discussing governance and compliance, I believe it is crucial to consider the importance of innovation and implementation. It is not just about having policies and responsibility statements; organisations need to actively innovate and operationalise these initiatives,” Jason Tamara Widjaja explained.
From his standpoint, two key considerations arise. Firstly, at an enterprise level, there is a risk of not fully realising the benefits of regulation. Striking a balance is essential; organisations must listen to voices advocating for governance and compliance, but not at the expense of innovation. It is crucial to find a middle ground that optimises both aspects.
Secondly, there is a narrative that often arises around the dichotomy of innovation versus regulation. However, this spectrum does not always apply, especially in highly regulated industries. Consider the perspective of someone working in a compliance-driven role. Their instinct might be to do nothing without explicit guidance. In such cases, regulation can serve as an enabler and accelerator, rather than merely acting as a gatekeeper. It challenges the notion that regulation is always an obstacle to progress.
Lastly, we must acknowledge the diverse interpretations of regulations. Conversations at the policy level may not always align with the experiences of non-native English speakers or of practitioners building on application programming interfaces (APIs). Therefore, it is important to provide detailed use cases and practical examples to ensure a comprehensive understanding of the implications. The industry is eagerly awaiting clarity and direction on contracting and compliance with the relevant directives.
Exploring the Ethical Considerations That Need to be Taken Into Account
According to Jason Grant Allen, traditional approaches to regulation typically aim for principles-, values-, and outcomes-based regulation. Rather than relying on highly prescriptive legislation that requires frequent updates through political deliberation and consensus building, we strive for a more flexible framework. Depending on the legal system involved, this may involve a high-level statute accompanied by regulations that can be amended periodically by the relevant authorities.
However, when it comes to fast-moving and emerging technologies like AI, there are ongoing debates about the adequacy of these well-established approaches. It is essential not to overlook the role of ambient law, which includes existing laws such as privacy and data protection regulations. Regulatory design must consider the unique challenges posed by new and disruptive technologies, particularly at critical junctures like the current uptake of generative AI tools.
When addressing these challenges, we have multiple avenues for regulatory intervention. We can explore bespoke regulatory approaches tailored specifically to AI, or we can leverage existing laws and regulations by adjusting certain parameters to address AI-related issues. Additionally, we should acknowledge the importance of different types of regulations, such as top-down state-based regulations, international standards developed by reputable bodies, and voluntary industry initiatives. While the latter may not be the ultimate solution, they can play a significant role in the interim, avoiding the pitfalls of stagnation over time.
Furthermore, it is crucial to recognise that AI is not a singular entity but a complex amalgamation of various components. It encompasses data, models, software, hardware dependencies, and even social aspects. The human element plays a critical role as users and decision-makers in organisational contexts. Therefore, regulatory considerations must extend beyond the technology itself and encompass the broader organisational structures and decision-making processes involved. Organisations can both benefit from regulation and be subject to it, ensuring responsible and ethical use of AI tools.
Overregulation can Constrain Innovation and Expose the Population to Harm
There is a genuine concern when regulation becomes excessive and stifles innovation, forcing it to move elsewhere. "This was highlighted in the previous panel where it was suggested that Singapore should adopt a regulatory regime that strikes a balance, protecting against the harms of AI misuse without hindering innovation," Simon Chesterman mentioned. Overregulation can constrain innovation and expose the population to harm, so the question arises: What should we do?
“Sometimes, the problem is misunderstood as simply determining which regulations to adopt. However, in my book, I emphasise that for most use cases of AI, the starting point should be applying existing laws and governance that already address similar issues, such as plagiarism. Whether it involves cheating by plagiarising someone's work or passing off machine-generated content as one's own, the normal rules should be applied as far as possible. Nevertheless, there will be cases where adjustments and specific regulations are necessary," he continued.
In the AI regulatory space, there are three levels of regulation: governmental, international, and internal to organisations. The government level holds the most power, as it has the authority to enforce regulations with criminal penalties, such as imprisonment for negligence resulting in harm. However, government regulation, while necessary, is insufficient on its own. Some level of international coordination and collaboration is required to avoid regulatory arbitrage and prevent a race to the bottom.
Yet, the most critical aspect lies within organisations themselves. The internal governance and compliance structures play a vital role. Most individuals comply with laws because they understand the potential consequences, both legally and reputationally. The danger lies in how organisations navigate the transition from risk assessment to responsible practices. It is crucial to shift the mindset from merely avoiding regulatory problems to actively preventing harm to consumers. The recent downsizing of responsibility teams by big tech companies and the expedited path to market pose significant risks. For instance, social media platforms like Twitter encounter challenges in understanding cultural norms and facilitating meaningful interactions.
When Discussing AI, it is Crucial to Understand What Makes it Unique
Irakli Beridze expressed that when discussing AI, it is crucial to understand what makes it unique: “On one hand, we have an enormous amount of data available, more than ever before, and the computational power to analyse and interpret it. On the other, we must develop sophisticated algorithms and frameworks to effectively process and derive insights from this data.”
“However, I believe it is essential to address the problems inherent in the data market,” he continued. “It is important to acknowledge these issues to avoid unacceptable practices. As participants in this panel, we all share a commitment to inclusivity and the diversity of opinions. Our discussions should not be exclusive or limited to a specific agenda. If you were to ask me about the role of the United Nations, I would emphasise that inclusivity and diverse perspectives are paramount.”
“We need to collaborate and work together to establish frameworks and policies that will shape the future of AI. This coalition of stakeholders needs to ensure that evidence-based decisions are made. Despite our disagreements, we must find common ground through extensive participation. The global landscape of AI is evolving rapidly, with countries and various sectors making significant investments. However, it is crucial to consider that a significant portion of the world's population lives in countries that may not have strong governmental structures or resources. We cannot leave these populations behind or allow them to suffer from poor living conditions. Therefore, redefining the framework for AI requires us to address these disparities and prioritise the well-being of all individuals globally.”
The Impact of AI, Especially on Education
Jason Grant Allen found it fascinating to explore different perspectives on education, especially considering the impact of AI in recent years. "AI has brought education closer to individuals who may not have had access to resources or traditional schooling. This raises questions about the current regulations in place to support primary education and how they might evolve with the emergence of AI and educational technology," he pondered.
While it is not solely AI, the internet and mobile connectivity have played a significant role. Some organisations have leveraged these technologies to provide educational opportunities to individuals who may never have the chance to travel or study in a physical classroom. The potential for educational transformation is immense, fuelled by the accessibility of information.
At a broader level, this shift is not only impacting mass education but also changing the nature of work. AI, as one aspect of this shift, is altering our relationship with information. Similar to the transition from oral tradition to writing, and from writing to the printing press, our connection with information is once again evolving. Imagine the possibilities of having a personalised tutor who understands your individual needs, tailors the educational experience, and guides you accordingly. This opportunity extends beyond subjects like mathematics and holds potential for various fields.
However, we must be cautious of becoming overly reliant on AI and technology. It is important to maintain certain cognitive skills and abilities. For instance, relying on AI to remember phone numbers or navigate using maps might make us more dependent and potentially diminish our overall skills. If we reach a point where we struggle to construct arguments or write essays without AI assistance, it goes beyond using AI as a tool and starts becoming a crutch.
Design Regulations to Empower Rather Than Limit
One of the crucial issues we face in education is determining where to begin and what aspects to prioritise: How can we ensure that we equip individuals with the necessary skills to remain competitive in the ever-evolving job market and integrate technology seamlessly into our lives?
The answer lies in a collective effort to enhance our knowledge and invest significant energy in preparing the next generation. By empowering them with relevant skills, we enable them to thrive in a competitive global landscape and adapt to the demands of the future. This is a challenge that extends across the majority of countries worldwide.
Irakli Beridze elaborated that one approach is to leverage technology and invest in educational initiatives that foster a comprehensive understanding of its applications. For instance, we often discuss the accessibility of tools like ChatGPT, which can be an invaluable resource for learning. However, it is equally important to go beyond simply using these technologies and to teach the next generation how to use them effectively, including how to write good prompts, which is itself a valuable aspect of education.
By equipping the younger generation with the knowledge and skills necessary to leverage technology wisely, we can empower them to shape a better future. This requires a collaborative effort from educators, policymakers, and society as a whole to ensure that our educational systems evolve and adapt to the changing landscape.
Finding the Right Balance in AI Regulation is Essential
Understanding the various actors, their motivations, and the range of regulatory approaches is crucial in navigating the complex landscape of AI regulation. By exploring these factors, we can strive to establish a regulatory framework that promotes ethical, responsible, and beneficial AI innovation while addressing the broader considerations of national security, geopolitics, and societal impact.
Currently, we are still in the early stages of developing a comprehensive governance and regulatory framework for AI. It is vital to nurture expertise, establish a strong foundation, and continue fostering dialogue and collaboration among stakeholders.
The design of AI-related law and regulation requires careful thought to maintain adaptability and flexibility. Balancing principles and outcomes-based approaches with the need for timely updates and ethical considerations is paramount. Finding the right balance in AI regulation is essential. We should start with existing laws and governance while recognising the need for specific regulations in certain cases.
By considering different regulatory approaches, leveraging existing laws, and accounting for the broader social and organisational aspects, we can establish a regulatory framework that effectively addresses the challenges posed by AI while fostering innovation and responsible AI deployment.