Written by Wei Zhang
Artificial intelligence (AI) and intelligent systems are everywhere in Smart Cities. However, consumers and industry have started to raise questions about, and demands for, the explainability and interpretability of these systems. Explainability and interpretability enable model/system inspection, validation, and optimization; more importantly, they help gain the confidence and trust of consumers and industry and facilitate final system deployment. In this special issue, we discuss the latest advancements in explainable AI and present several related articles from various perspectives.
Nowadays, most explainable AI research efforts investigate the explainability and interpretability of existing models and systems. However, explainability can be part of the discussion before the models and systems are built. For example, data traceability is key to telling whether a model or system has been trained properly or has been compromised. In our first article, “Enabling Interpretability in Smart Cities with Knowledge Graphs: Towards a Better Modelling of Consent,” A. Kurteva and A. Fensel present how the Semantic Web (and, specifically, knowledge graphs) can provide explanations and interpretations for Smart Cities applications, especially for informed consent. Compared with numbers and text, a graph is a more intuitive and natural information carrier for humans: the nodes represent the entities in a system, and the edges capture the relationships between them. Popular tools from both academia and industry are introduced in the article, and these tools are applicable to various sectors, such as finance and legal. Overall, they enable full transparency of data sharing and support better modeling.
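As a minimal illustration of the idea (the article itself builds on Semantic Web standards such as RDF; the entity names and predicates below are hypothetical, not taken from the article), a consent knowledge graph can be sketched as a set of subject–predicate–object triples, against which a consent decision is fully traceable:

```python
# Minimal sketch of a consent knowledge graph as subject-predicate-object
# triples (hypothetical entities; a real system would use RDF/OWL vocabularies).
triples = {
    ("Alice", "gaveConsentFor", "TrafficApp"),
    ("TrafficApp", "processes", "LocationData"),
    ("Alice", "revokedConsentFor", "AdService"),
}

def has_consent(graph, person, app):
    """Consent holds only if it was given and not revoked.

    Because the answer is derived from explicit triples, an auditor can
    point to exactly which edge in the graph justifies the decision.
    """
    given = (person, "gaveConsentFor", app) in graph
    revoked = (person, "revokedConsentFor", app) in graph
    return given and not revoked

print(has_consent(triples, "Alice", "TrafficApp"))  # True
print(has_consent(triples, "Alice", "AdService"))   # False
```

The point of the graph representation is that every answer is backed by a small, human-readable set of edges rather than opaque model weights.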
While explainability often comes up with AI, which usually produces models and systems of significant complexity, it should not be a topic exclusive to the AI domain. For intelligent systems in the generalized sense, methodologies are available to mitigate the opacity of AI models and systems, which are often black boxes in nature. One option is to adopt a model-based approach instead of a data-driven one. Note that the model here is not an AI model but a set of mathematical representations and formulations of the system dynamics. Our second article in this special issue concerns another domain of Smart Cities: communication. The article, titled “Intelligent Deployment of Autonomous UAV Networks for Provisioning Communication Services,” by X. Wang et al., considers the fact that communication coverage is not always available, especially in remote and rural areas. Setting up (relatively) permanent telecom infrastructure in such areas can be costly with limited benefit, and Unmanned Aerial Vehicles (UAVs) can form a temporary network to provide communication services. Challenges do exist, however, and the key problems to be addressed include path planning, task scheduling, and service pricing. Instead of using AI algorithms for modelling and optimization, the problem is formulated based on physical laws and domain knowledge; in addition, the pricing scheme is derived from economic theories. Such formulations and theories are grounded in established research and can be rigorously justified. Meanwhile, the explainability and interpretability of the proposed solution come as a natural by-product.
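As a toy sketch of why a model-based formulation is explainable by construction (the geometry below is a textbook antenna-cone approximation, not the formulation used by X. Wang et al., and all parameter values are hypothetical), every quantity traces back to an explicit physical assumption:

```python
import math

# Toy model-based formulation of UAV communication coverage.
# Hypothetical parameters; each step follows from simple geometry,
# so the result can be inspected and justified term by term.
def coverage_radius(altitude_m, beamwidth_deg):
    """Ground radius covered by a downward-facing antenna cone:
    r = h * tan(beamwidth / 2)."""
    return altitude_m * math.tan(math.radians(beamwidth_deg) / 2)

def uavs_needed(area_km2, altitude_m, beamwidth_deg):
    """Crude lower bound on UAV count: target area divided by the
    coverage disk of a single hovering UAV (overlap ignored)."""
    r_km = coverage_radius(altitude_m, beamwidth_deg) / 1000
    disk_km2 = math.pi * r_km ** 2
    return math.ceil(area_km2 / disk_km2)

r = coverage_radius(100, 60)    # ~57.7 m at 100 m altitude, 60° beamwidth
n = uavs_needed(1.0, 100, 60)   # lower bound for covering 1 km²
```

In contrast to a learned black-box predictor, if the answer looks wrong here, one can point to the exact assumption (altitude, beamwidth, no-overlap packing) responsible for it.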
Our third article, titled “Human AI Teaming for the Next Generation Smart City Healthcare Systems,” is about smart healthcare and was authored by M. Abdur Rahman et al. Healthcare is among the most cautious sectors in adopting AI technologies. A key reason is that healthcare systems have near-zero tolerance for inexplicability: conditions must be fully understood, and solutions must be clear and correct. Existing AI solutions cannot easily meet those stringent requirements and, as a result, many health institutes and organizations remain reluctant to adopt AI technologies, even though those technologies have shown promising performance. The authors share the insight that explanations should be interactive. Given any system output, such as a diagnostic conclusion, doctors can require the system to list the supporting symptoms and evidence in a user-friendly interface. The results can also be presented at different granularities, with key information highlighted and similar cases linked.
The last article of the special issue, titled “Explainable Machine Learning for Secure Smart Vehicles,” authored by M. Scalas and G. Giacinto, is about another important domain of Smart Cities: transportation. The authors argue that the focus of AI solutions should not be effectiveness only; other issues, such as security, are also critical. This is especially true because transportation is a human-centric system, and safety must not be compromised. As a result, solution explainability and interpretability should be incorporated into intelligent transportation systems. The objects of explanation can be vehicles as well as human subjects such as drivers. For the former, the key motivation for explainability is safety and security; for the latter, it is more about gaining customers' trust in using the intelligent systems. A unique insight from this article is that explainability and interpretability are not always beneficial: we should also be cautious about the prevalence of the technology, as attackers may reverse-engineer explanations to steal confidential information about the intelligent systems. Such issues deserve attention from both academia and industry in the future.
Overall, we argue that explainability and interpretability are key to various applications of Smart Cities. Unfortunately, they are still missing from most practices today and should be well addressed in the near future. Explanations and interpretations should not be limited to AI models; they can also be explored in data, in formulations, in user interfaces, and so on. Attention should be paid to the application itself to fully understand its needs and limitations, so that suitable technologies can be identified. While only four articles are included in this special issue, we believe the discussions are insightful and the perspectives diverse. In the future, we look forward to seeing more work on explainability and interpretability across a wide spectrum of smart city applications.
This article was edited by Aris Gkoulalas-Divanis