Artificial Intelligence (AI) is transforming industries across the globe and has become an important topic for policymakers worldwide. AI technologies have the potential to bring tremendous benefits to society, but they also raise significant challenges relating to privacy, fairness, accountability, and security.
Therefore, governments around the world are developing AI regulatory frameworks to ensure that AI development and deployment are conducted in a responsible and ethical manner.
This article provides a summary of the AI regulatory environments in both regions, with a focus on the proposed European framework and existing Asian frameworks.
European Framework
The European Union (EU) has been at the forefront of developing a comprehensive regulatory framework on AI. The European Commission presented a proposal for a regulation on AI on April 21, 2021. The proposal aims to establish a legal framework for AI that promotes innovation while ensuring that AI is safe, respects fundamental rights, and complies with EU values.
The proposed regulation categorizes AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. High-risk AI systems, such as those used in healthcare and transport, would be subject to strict requirements for transparency, accountability, and human oversight. AI systems posing unacceptable risk, such as those that manipulate human behaviour, would be prohibited outright.
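As a rough illustration of how a development team might represent these tiers internally, the Python sketch below encodes the four categories and maps a few use cases to them. The tier names follow the proposal; the example use cases and the one-line obligation summaries are simplifying assumptions, not the regulation's actual annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers named in the proposed EU AI regulation."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict transparency and oversight duties
    LIMITED = "limited"            # lighter transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping of example use cases to tiers; the real
# classification is set by the regulation's annexes, not by code.
EXAMPLE_TIERS = {
    "subliminal_behaviour_manipulation": RiskTier.UNACCEPTABLE,
    "medical_triage_support": RiskTier.HIGH,
    "transport_safety_control": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of the broad obligation attached to each tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment, transparency, human oversight",
        RiskTier.LIMITED: "transparency notice to users",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```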
In addition, the regulation mandates that AI developers conduct risk assessments and ensure that their systems satisfy the requirements of their risk category. Moreover, the proposal stipulates that AI systems must be verifiable and that the data used to train them must be of high quality and representative of the diversity of the EU population.
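To make the data-quality requirement concrete, here is a minimal, hypothetical sketch of one check a developer might run before training: comparing the composition of a dataset against a reference population distribution. All attribute names and numbers are invented for illustration.

```python
from collections import Counter

def representativeness_gaps(rows, reference, attribute):
    """Compare each group's share of the dataset against a reference
    population distribution; a positive gap means over-represented."""
    counts = Counter(row[attribute] for row in rows)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference.items()
    }

# Hypothetical data: a toy dataset and an assumed 50/50 population split.
training_rows = [
    {"region": "north"}, {"region": "north"},
    {"region": "north"}, {"region": "south"},
]
population_share = {"north": 0.5, "south": 0.5}

gaps = representativeness_gaps(training_rows, population_share, "region")
for group, gap in gaps.items():
    label = "over" if gap > 0 else "under"
    print(f"{group}: {label}-represented by {abs(gap):.0%}")
```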
Asian Framework
In Asia, several countries have developed or are developing regulatory frameworks on AI. China has been leading the way with its national AI development plan and guidelines for AI ethics. In 2017, China released its New Generation Artificial Intelligence Development Plan, which aims to make China the world's leading AI power by 2030, and followed it in 2019 with governance principles for AI ethics. The plan includes goals such as improving AI research and development, promoting the integration of AI into various industries, and developing a world-class AI talent pool.
Japan has also been active in developing AI regulations. In 2019, the Japanese government released its AI Utilization Strategy, which aims to promote the use of AI in various sectors while ensuring that AI is used safely and securely. The strategy includes guidelines for ethical AI development, such as ensuring transparency and accountability, protecting personal information, and promoting human-centric AI.
South Korea has also been developing its regulatory framework on AI. In 2019, the Korean government released its AI Ethics Charter, which sets out guidelines for the development and deployment of AI that respect human rights and values. The charter includes principles such as transparency, accountability, fairness, and security.
Scope of Regulation on AI
As AI technologies become more pervasive in society, regulatory frameworks are essential to ensure that they are developed and used responsibly and ethically.
The proposed European framework and the existing frameworks in Asia demonstrate that governments worldwide are taking this issue seriously and are working to strike a balance between promoting innovation and protecting human rights and values.
The scope of regulation on AI in Europe and Asia varies depending on the country and region. However, there is a growing consensus that regulation is necessary to ensure that AI is developed and used responsibly and ethically.
Regulation on AI typically includes requirements for transparency, accountability, fairness, and privacy protection. It also includes requirements for the development of AI systems that are robust, reliable, and free from bias.
Overall, the scope of regulation on AI in Europe and Asia reflects a growing recognition of the need to regulate the development and use of AI to ensure that it is used in a way that benefits society while minimizing potential risks. As AI continues to evolve, the scope of regulation on AI will likely expand to address new challenges and opportunities.
What Should AI Developers Do?
As AI technologies continue to develop, developers must take responsibility for ensuring that AI is built and used responsibly and ethically. Below is a look at what AI developers should do to make their AI systems responsible and ethical.
Understand the Risks and Benefits
AI developers must first understand the risks and benefits of AI systems. They should be aware of the potential risks associated with AI, including threats to privacy, security, fairness, and accountability. At the same time, developers must also understand the potential benefits of AI, such as improved efficiency, accuracy, and innovation. By understanding both, developers can build AI systems that maximize the benefits while minimizing the risks.
Ensure Ethical and Responsible AI Development
Developers should ensure that their AI systems are developed in an ethical and responsible way. This means that AI systems should be designed to respect fundamental human rights, such as privacy, autonomy, and dignity. Developers should also ensure that their AI systems are transparent, accountable, and auditable so that users can understand how the AI system works and can hold developers accountable for their actions.
Developers should also ensure that their AI systems are fair and unbiased. This means that AI systems should be developed using representative and diverse data sets and should not perpetuate existing biases and discrimination.
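As one hedged example of what a bias check can look like, the sketch below computes a demographic-parity gap: the difference in positive-prediction rates between groups. It is only one fairness metric among many, and the data is invented for illustration.

```python
def positive_rate_by_group(predictions, groups):
    """Rate of positive (1) predictions for each group label."""
    rates = {}
    for group in sorted(set(groups)):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = approved) with group labels.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
labels = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(preds, labels)
print(rates)                     # {'a': 0.75, 'b': 0.25}
print(f"parity gap: {gap:.2f}")  # 0.50 -- a gap this large warrants review
```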
Engage in Continuous Learning and Improvement
AI developers should engage in continuous learning and improvement. This means that developers should stay up to date with the latest developments in AI and should continuously monitor their AI systems to identify potential risks and areas for improvement.
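As a minimal sketch of what continuous monitoring might look like in practice, the code below compares a live input feature against its training-time baseline and flags a large shift. The threshold and all values are illustrative assumptions, not a standard.

```python
import statistics

def drift_score(baseline, live):
    """Shift of the live mean from the baseline mean, measured in
    baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma

# Hypothetical feature values observed at training time vs. in production.
baseline_values = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
live_values = [11.9, 12.1, 12.4, 11.8]

score = drift_score(baseline_values, live_values)
print(f"drift score: {score:.1f}")
if score > 3.0:  # illustrative threshold, not a standard
    print("input distribution has shifted; review and retraining advised")
```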
Developers should also be open to feedback and should engage with stakeholders, such as users, regulators, and civil society organizations, to ensure that their AI systems are meeting the needs of society and are aligned with societal values.
Collaborate with Experts in Other Fields
AI developers should collaborate with experts in other fields, such as ethics, law, and social sciences, to ensure that their AI systems are developed in a responsible and ethical way. Developers should seek input from experts on ethical and legal frameworks and should work to incorporate their feedback into the design of AI systems.
In general, AI developers have a critical role to play in ensuring that AI is developed and used in a responsible and ethical way. By understanding the risks and benefits of AI, ensuring ethical and responsible AI development, engaging in continuous learning and improvement, and collaborating with experts in other fields, developers can help ensure that AI systems are developed in a way that respects fundamental human rights and societal values.
By doing so, developers can help ensure that AI continues to bring benefits to society while minimizing its potential risks.
Conclusion
In conclusion, the development and deployment of artificial intelligence (AI) have become a critical area of focus for policymakers around the world, and Europe and Asia have taken a proactive approach to regulating AI. Both regions recognize the importance of ensuring that AI is developed and used in a way that respects fundamental human rights and societal values.
The proposed regulatory framework in Europe includes a risk-based approach to regulating AI, with certain high-risk AI applications subject to strict requirements. The regulation also includes specific requirements for AI systems used in critical infrastructure and requirements for AI providers to ensure that their AI systems are developed and used in a way that respects fundamental rights.
In Asia, countries such as China, Japan, and South Korea have released guidelines and frameworks for the ethical development and use of AI, built around principles such as transparency, accountability, fairness, and privacy protection.
The breadth of these efforts reflects a shared recognition that the development and use of AI must be governed so that it benefits society while minimizing potential risks. As the technology evolves, regulation will likely expand to meet new challenges and opportunities, and a responsible, ethical approach to AI development and deployment will remain essential to keeping AI's benefits ahead of its risks.
Marc-Roger Gagné MAPP