Standardization of Artificial Intelligence

Written by Daniel Tokody, Laszlo Ady, Dalibor Dobrilovic, Francesco Flammini, and Andrea Gaglione

Engineers, such as the authors, tend to approach the AI standardization problem from the technology side. Their first question is whether, for standardization purposes, artificial intelligence should be treated as software or as a tangible product.


Standards may be de facto, meaning they persist because they bring benefits, or de jure, meaning they are made legally binding through contracts and other documents. States themselves are required to comply with the standards adopted by their own official standardization bodies. Adopting and respecting such standards is a precondition for cooperation in certain markets and with certain companies or groups. Standards may also be either open or copyrighted.

In what follows, we cover different aspects of standardization in the context of Artificial Intelligence, through a series of questions and respective answers.

 

What is the standardization process?

Traditionally, standardization processes fall into three types.

In the committee-based model, several interested parties try to jointly develop a solution to a problem. A well-known example of a standard that emerged in a committee is the A-series of paper sizes.

The second way is market-based standardization, where different companies compete against each other until one of them becomes dominant and supersedes the others. The classic example of this was the VHS vs. Betamax battle.

The third way is government-based standardization, where the government chooses a solution and uses its hierarchical position to impose that on actors in the market. This is the case with the AI regulations in the EU.

 

What is the European direction? The product view.

“The EU safety framework already addresses the intended use and foreseeable (mis)use of products when placed on the market. This has led to the development of a solid body of standards in AI-enabled devices that is continuously being adapted in line with technological progress. The further development and promotion of such safety standards and support in EU and international standardisation organisations will help enable European businesses to benefit from a competitive advantage, and increase consumer trust.” [1]

“Trustworthy AI has three components: (1) it should be lawful, ensuring compliance with all applicable laws and regulations, (2) it should be ethical, demonstrating respect for, and ensure adherence to, ethical principles and values, and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. Trustworthy AI concerns not only the trustworthiness of the AI system itself but also comprises the trustworthiness of all processes and actors that are part of the system’s life cycle.” [2]

The CE marking indicates that a product complies with the requirements of the relevant Union legislation governing that product. To affix the CE marking to high-risk AI systems, a provider must take five steps (a rough sketch of this workflow follows the list):

  1. Determine whether the AI systems are classified as high-risk under the new AI regulations.
  2. Ensure that the design, development, and quality management systems are in compliance with the AI regulations.
  3. Follow a conformity assessment procedure to assess and document compliance.
  4. Affix the CE marking to the systems and sign a declaration of conformity.
  5. Place the product on the market or put it into service [3].
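As a rough illustration of how a provider might track these steps internally, the following Python sketch models the procedure as a simple checklist. The class and step names are our own illustrative assumptions; they are not part of the regulation or of any official conformity-assessment tooling.

```python
from dataclasses import dataclass, field

# Illustrative only: the step names mirror the five steps listed above; they are
# not an official EU conformity-assessment tool or API.
CE_MARKING_STEPS = [
    "classify_risk",          # 1. high-risk under the new AI regulations?
    "align_design_and_qms",   # 2. design, development, quality management comply
    "conformity_assessment",  # 3. assess and document compliance
    "affix_ce_and_declare",   # 4. affix CE marking, sign declaration of conformity
    "place_on_market",        # 5. place on the market or put into service
]


@dataclass
class CEMarkingChecklist:
    """Tracks which steps a (hypothetical) provider has completed."""
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in CE_MARKING_STEPS:
            raise ValueError(f"unknown step: {step}")
        self.completed.add(step)

    def ready_for_market(self) -> bool:
        # Placing on the market (step 5) requires all four preceding steps.
        return all(step in self.completed for step in CE_MARKING_STEPS[:-1])
```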

Establishing trustworthy AI requires high-quality, relevant, and representative training, validation, and test data. The system must provide documentation and design logging features to ensure traceability and auditability, as well as the transparency needed to give users information on how to use it. It must allow human oversight, through measures built into the system or implemented by users. It must also ensure robustness, accuracy, and cybersecurity [3].
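To make the logging and traceability requirement concrete, here is a minimal sketch of how an automated decision could be recorded for later audit. The field names and the JSON-over-logging approach are our own assumptions, not requirements spelled out in the regulation.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of decision logging for traceability and auditability. The field
# names (model_version, input_summary, decision, human_override) are illustrative.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit_trail")


def log_decision(model_version: str, input_summary: dict, decision: str,
                 human_override: bool = False) -> None:
    """Append one auditable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,
        "decision": decision,
        "human_override": human_override,  # supports the human-oversight requirement
    }
    audit_log.info(json.dumps(record))


# Example: log_decision("credit-model-1.3", {"age_band": "30-39"}, "approved")
```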

 

Can humans trust AI?

This is a far-reaching question, but research results are encouraging. According to Wang and Moulden [5], trust depends on user satisfaction with AI features. They describe an AI Trust Score built on attributes such as job efficiency, effectiveness, understanding, control, and data production.
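As a purely illustrative sketch, the attributes named above could be combined into a single satisfaction figure by averaging per-attribute user ratings; the rating scale and the aggregation below are our assumptions, not the scoring method actually defined by Wang and Moulden [5].

```python
from statistics import mean

# Illustrative only: a naive average of user ratings (1 = low, 5 = high) for the
# attributes named above; not the scoring method defined in [5].
ATTRIBUTES = ["job_efficiency", "effectiveness", "understanding",
              "control", "data_production"]


def trust_score(ratings: dict) -> float:
    """Average the per-attribute ratings; unrated attributes count as neutral (3)."""
    return mean(ratings.get(attr, 3.0) for attr in ATTRIBUTES)


# Example: trust_score({"job_efficiency": 4, "control": 2}) -> 3.0
```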

 

What process has been identified?

The three models of standardization are becoming increasingly mixed. Trends in smart products, smart cities, smart industries, and sustainability challenges show that actors from different sectors and countries come together to set standards, each bringing their own standardization experience to the process. As a result, two or even all three models are often combined in the same standardization process, which can lead to dynamic development with a high level of competition and interaction between the different players.

In addition, such distributed systems and other complex software technologies, as well as the safety of Intelligent Autonomous Systems, still pose many problems today [4].

 

Who is involved in this process?

A good example of this mixed model is the standardization of charging plugs for electric cars. Several companies were active in the committees to agree on a common design. At the same time, some of them had already put their designs on the market and built installed bases to strengthen their position in the discussions. When it became clear that the European industry could not agree on a common design, the European Commission stepped in and used its hierarchical position to select the present design as the common standard in Europe. We expect the same to happen with AI standardization.

 

Summary and future research issues

Our research question focused on the current state of AI standardization. That standardization is imperfect and still in progress: many participants are involved in the work, with many ideas pursued in many ways, and AI itself is still an emerging technology. The existing functional safety standards, in particular ISO 26262, are not compatible with typical AI methods such as machine learning. This means that more work is needed on the verification and validation of artificial intelligence, especially if AI systems are to control safety-critical functions and equipment [4]. It would therefore be worthwhile to ensure open access to artificial intelligence standards. Still, the question remains whether AI will ever become a standardized product.


References

  1. European Commission, ‘Artificial Intelligence for Europe’, COM(2018) 237 final. p. 20, 2018.
  2. High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’. p. 41, 2019.
  3. L. Sioli, ‘Trustworthy AI: The EU’s New Regulation on a European Approach for Artificial Intelligence - Shaping Europe’s Digital Future’. [Online]. Available: https://www.youtube.com/watch?v=3AVt-jIekks.
  4. National Aeronautics and Space Administration, ‘NASA Software Safety Guidebook’. p. 388, 2004.
  5. J. Wang and A. Moulden, ‘AI Trust Score: A User-Centered Approach to Building, Designing, and Measuring the Success of Intelligent Workplace Features’, in Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1–7.


This article was edited by Luis M. Fernandez-Ramirez


Daniel Tokody received his doctoral degree in Safety and Security Sciences at Óbuda University. He is an Electrical Engineer (BSc and MSc) and a member of the IEEE SMC Technical Committee on Homeland Security. Daniel does research in Safety Engineering, Electrical Engineering, and Railway Engineering. He is a founding member and lead researcher at NextTechnologies Ltd. Complex Systems Research Institute. His areas of research interest include intelligent systems, intelligent cooperative systems development, and safety-critical systems.

