Artificial Intelligence (AI) has rapidly evolved in recent years, transforming industries and impacting various aspects of our daily lives. With this rapid advancement comes the need for robust standards and regulations to ensure the ethical and secure development, deployment, and use of AI technologies. Standardization plays a pivotal role in ensuring the compliance of AI systems, particularly in the context of cybersecurity. In this article, we will explore the key activities of major standards-developing organizations, focusing on the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) in supporting cybersecurity for AI.
Relevant Activities by Standards-Developing Organizations
Standardization in AI is essential to provide guidance and to ensure that AI technologies comply with cybersecurity and ethical requirements. Many standards-developing organizations (SDOs) are actively producing guides and standardization deliverables that address AI. The primary objective of these efforts is to assess whether existing provisions apply to AI and, where they do not, to develop new techniques or adapt existing ones. The focus here is on ISO and IEC, and on the European SDOs CEN, CENELEC, and ETSI, whose European standards can become harmonized standards in support of EU legislation.
CEN-CENELEC
CEN-CENELEC addresses AI and cybersecurity through two joint technical committees:
- JTC 13 ‘Cybersecurity and data protection’: This committee transposes relevant international standards into European standards (ENs) in the information technology domain and develops new ENs where gaps exist, in support of EU directives and regulations. JTC 13 has identified a list of ISO/IEC standards of interest for AI cybersecurity, the most prominent being the ISO/IEC 27000 series on information security management systems.
- JTC 21 ‘Artificial intelligence’: This committee is responsible for the development and adoption of standards for AI and related data, and provides guidance to other technical committees concerned with AI. JTC 21 addresses the extended scope of cybersecurity, including trustworthiness characteristics, data quality, AI governance, and management systems. It has identified a list of ISO/IEC standards with direct applicability to the draft AI Act and is considering their adoption or adaptation.
In addition, JTC 21 has identified two gaps related to AI systems risk management and AI trustworthiness characteristics and is preparing to develop new standards to address these gaps.
ETSI
ETSI has established an Operational Co-ordination Group on Artificial Intelligence to coordinate its AI-related standardization activities. Its Industry Specification Group on Securing Artificial Intelligence (ISG SAI) has been developing reports that address the security challenges AI poses to systems, while ETSI’s other technical bodies address the role of AI in sectors such as healthcare and transportation.
ISG SAI is a pre-standardization group focusing on specific security characteristics of AI. It has published several reports on AI threats, data supply-chain security, and the role of hardware in AI security, and is developing further reports covering aspects such as the explicability, privacy, and traceability of AI models, among others.
ISO/IEC
ISO and IEC conduct their AI-related standardization work in the joint subcommittee ISO/IEC JTC 1/SC 42, which has published, or is developing, standards covering AI concepts and terminology, an AI framework, AI management systems, AI risk management, and data quality for analytics and machine learning.
Other Organizations
Numerous horizontal and sector-specific standardization organizations are also active in AI standardization, including the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and SAE International, each contributing to the evolving AI landscape with its own focus and initiatives.
One noteworthy project is SAE AIR AS6983, dedicated to AI/ML in aeronautics, which aligns with the objectives of JTC 21 on AI trustworthiness.
Conclusion
Compliance with AI standards is vital to ensure the ethical and secure development and use of AI technologies. The activities of standards-developing organizations, such as ISO, IEC, CEN, CENELEC, and ETSI, play a crucial role in setting the groundwork for AI standardization. These organizations are actively working on defining standards, guidance, and best practices to address the unique challenges and opportunities that AI presents.
It’s important to note that this article is based on the information provided by the European Union Agency for Cybersecurity (ENISA), which has been at the forefront of efforts to ensure the responsible and secure use of AI technologies.
If you’re looking to navigate the complexities of AI compliance and to align your organization with the AI Act and cybersecurity standards, consider reaching out to ERS Consultancy. Our team of experts is well-versed in the ever-evolving world of AI regulation and can provide the guidance and support you need to meet compliance requirements. Don’t hesitate to get in touch with any questions about AI compliance – ERS Consultancy is here to help you navigate the AI landscape with confidence and peace of mind.
Contact us today – your journey to AI compliance starts here.