Written by Pao-Ann Hsiung, Sapdo Utomo, John A, Adarsh Rouniyar, Hsiu-Chun Hsu, GuoHao Jiang, Chang ChunHao, and Tang Kai Chun
The growth of AI models has not been matched by growth in public confidence. Some AI models contain bias or discrimination, which can harm certain decision-making processes or populations, and researchers are currently working to address these credibility issues. Smart cities and artificial intelligence have been intertwined for many years, but citizens remain concerned about how such systems protect their privacy. A system whose credibility cannot be improved therefore remains a problem.
These issues can be addressed by developing trustworthy AI systems. In this article, eight criteria for trustworthy AI systems are discussed. According to the research community, federated learning may resolve privacy difficulties. Using fairness metrics, federated clients with disparate data may be excluded from the aggregation process, so that the new global model employs only trustworthy data and models. Together, these two technologies could enable sustainable smart city applications, and such applications will flourish if all stakeholders have faith in the system.
Smart cities have been evolving worldwide for more than a decade. Many nations are building or improving their own smart cities. AI technologies underpin many smart city applications, yet trust in AI systems remains contested. Smart cities cannot reach their full potential until key stakeholders trust them. In some past applications, AI systems exhibited bias against marginalized groups. Deep learning models are often called "black boxes" because users cannot understand why they produce a given output. Questions of system robustness and privacy protection also remain open in some smart city applications. Researchers have proposed trustworthy AI (TAI) and federated learning (FL) to address these concerns.
Trustworthy AI (TAI)
TAI solutions in academia or industry must account for legal, social, ethical, public-opinion, and environmental issues. Many TAI recommendations, guidelines, and toolkits have not been adopted in practice because practitioners lack sufficient information, skills, or resources.
To overcome the difficulties outlined in the introduction, TAI systems must meet certain requirements. Fairness-enabled systems should address AI-induced discrimination. A fair AI system requires good data governance throughout data collection, cleaning, and preprocessing before model training, a process that should address privacy, sensitive data, and data-balance concerns. Good data governance also supports traceability and transparency. In addition, to help stakeholders understand AI output, an explainable AI approach should be devised. Explainable AI boosts transparency and stakeholder trust in AI systems, making them more accountable. With understandable output, it is easier to trace problems when a decision or prediction goes wrong. Stakeholders also gain more control over the systems, along with the autonomy to decide when and why to trust them or not. Robustness ensures system safety: robust systems can detect, defend against, and mitigate threats, which requires adversarial model training.
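The adversarial-training requirement above can be illustrated with a minimal, self-contained sketch. The example below is a hypothetical pure-NumPy version: a logistic-regression classifier is trained on FGSM-style perturbed inputs so that it stays accurate under the same attack. In practice, a toolkit such as the Adversarial Robustness Toolbox cited below would be used instead; all function names, the toy data, and the perturbation budget here are illustrative assumptions.

```python
import numpy as np

# Sketch: adversarial training of a logistic-regression classifier
# against FGSM-style perturbations (pure NumPy, for illustration only).

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    # Gradient of the logistic loss w.r.t. the input, used to craft
    # a worst-case perturbation of magnitude eps per coordinate.
    grad_x = (sigmoid(x @ w + b) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def train(X, y, eps=0.0, lr=0.1, steps=300):
    # eps > 0 trains on adversarially perturbed inputs (adversarial training).
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        Xa = fgsm(X, y, w, b, eps) if eps > 0 else X
        p = sigmoid(Xa @ w + b)
        w -= lr * Xa.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def robust_acc(X, y, w, b, eps=0.3):
    # Accuracy on inputs perturbed against this model (white-box FGSM).
    Xa = fgsm(X, y, w, b, eps)
    return np.mean((sigmoid(Xa @ w + b) > 0.5) == y)

# Toy two-class data.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w_std, b_std = train(X, y, eps=0.0)   # standard training
w_adv, b_adv = train(X, y, eps=0.3)   # adversarial training

print(f"robust accuracy, standard model:    {robust_acc(X, y, w_std, b_std):.2f}")
print(f"robust accuracy, adversarial model: {robust_acc(X, y, w_adv, b_adv):.2f}")
```

The design point is simply that the training loop sees the attacked inputs, so the learned decision boundary keeps a margin against the same perturbation at test time.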
As summarized, the eight criteria of TAI are: (a) explainability; (b) transparency and traceability; (c) privacy and data governance; (d) fairness and non-discrimination; (e) safety; (f) autonomy and control; (g) common good and well-being; and (h) accountability and communication.
Federated Learning (FL)
Federated learning is often described as "privacy by design." Since TAI also considers privacy protection, what distinguishes it from FL? A TAI system might still send all data to a central server, whereas FL applications keep data in client silos. FL ensures data privacy by never exchanging raw data between clients and the server, while TAI safeguards privacy through data governance; together, TAI and FL improve data security and privacy. Why does FL improve security? If a centralized system is attacked, all of its data can be compromised. As Figure 1 shows, in a decentralized system such as FL, an attack affects only the targeted node, because the FL server holds no client data. FL can also use aggregation algorithms that assess the fairness of each client's data so that the global model is fair. Thus, only clients with sufficient and balanced data participate in aggregation, while clients with poor data still receive the same high-quality global model.
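The fairness-aware aggregation described above can be sketched as a single federated-averaging (FedAvg) round in which clients whose label distribution is too imbalanced are skipped, yet would still receive the resulting global model. This is a hypothetical minimal sketch: the linear local model, the balance metric, and the 0.2 threshold are illustrative assumptions, not the article's actual aggregation algorithm.

```python
import numpy as np

# Sketch: one FedAvg round with a simple data-balance check per client.

rng = np.random.default_rng(1)

def local_update(w_global, X, y, lr=0.1, epochs=5):
    # Simple local gradient descent on a least-squares objective.
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def label_balance(y):
    # Fraction of the minority class; 0.5 means perfectly balanced.
    p = np.mean(y)
    return min(p, 1 - p)

def fedavg_round(w_global, clients, balance_threshold=0.2):
    # Aggregate only clients that pass the balance check, weighted by
    # local dataset size; excluded clients still get the global model.
    updates, sizes = [], []
    for X, y in clients:
        if label_balance(y) < balance_threshold:
            continue
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

def make_client(n, pos_frac):
    # Toy client: labels drawn with the given positive fraction.
    y = (rng.random(n) < pos_frac).astype(float)
    X = rng.normal(0, 1, (n, 3)) + y[:, None]
    return X, y

# Three toy clients; the third is heavily imbalanced and will be skipped.
clients = [make_client(80, 0.5), make_client(120, 0.45), make_client(100, 0.05)]
w = fedavg_round(np.zeros(3), clients)
print("aggregated weights:", w)
```

Note that the raw `X, y` arrays never leave the client functions; only the updated weight vectors are averaged, which is the privacy-by-design property the text describes.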
Figure 1: Comparison of Centralized Architecture and Federated Learning.
TAI and FL Integration Strategy
Figure 2: The strategy of integration between TAI and FL.
Figure 2 illustrates how to combine TAI and FL in sequential order. With this set of procedures, the integration of TAI and FL meets the eight criteria. Ultimately, these requirements will aid the development of more effective, sustainable smart city systems that benefit society (the common good and well-being). As Taiwan AI Labs has demonstrated, these methods can be applied to the medical field, as well as to other application domains, such as traffic management and environmental monitoring, on which our research team is currently concentrating.
This article showed how the integration of two crucial technologies, namely trustworthy artificial intelligence and federated learning, will aid in the development of sustainable smart cities. If all stakeholders have complete faith in the systems, smart city applications will yield their full benefits. Consequently, the common welfare and the prosperity of society will ultimately be realized.
- Chapter 1, What is Smart City? in Chun Sing Lai, Loi Lei Lai, and Qi Hong Lai, Smart Energy for Transportation and Health in a Smart City, IEEE Press / Wiley, Nov. 2022.
- M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” arXiv, Aug. 09, 2016. Accessed: Jun. 22, 2022. [Online]. Available: http://arxiv.org/abs/1602.04938
- V. S. Lokhande, A. K. Akash, S. N. Ravi, and V. Singh, “FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret,” in Computer Vision – ECCV 2020, Cham, 2020, pp. 365–381. doi: 10.1007/978-3-030-58610-2_22.
- K. Yang, K. Qinami, L. Fei-Fei, J. Deng, and O. Russakovsky, “Towards fairer datasets: filtering and balancing the distribution of the people subtree in the ImageNet hierarchy,” in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, New York, NY, USA, Jan. 2020, pp. 547–558. doi: 10.1145/3351095.3375709.
- K. A. Crockett, L. Gerber, A. Latham, and E. Colyer, “Building Trustworthy AI Solutions: A Case for Practical Solutions for Small Businesses,” IEEE Trans. Artif. Intell., pp. 1–1, 2021, doi: 10.1109/TAI.2021.3137091.
- M. Janssen, P. Brous, E. Estevez, L. S. Barbosa, and T. Janowski, “Data governance: Organizing data for trustworthy Artificial Intelligence,” Gov. Inf. Q., vol. 37, no. 3, p. 101493, Jul. 2020, doi: 10.1016/j.giq.2020.101493.
- “Adversarial Robustness Toolbox (ART) v1.10.” Trusted-AI, May 11, 2022. Accessed: May 11, 2022. [Online]. Available: https://github.com/Trusted-AI/adversarial-robustness-toolbox
- M. Aledhari, R. Razzak, R. M. Parizi, and F. Saeed, “Federated Learning: A Survey on Enabling Technologies, Protocols, and Applications,” IEEE Access, vol. 8, pp. 140699–140725, 2020, doi: 10.1109/ACCESS.2020.3013541.
- S. Abdulrahman, H. Tout, H. Ould-Slimane, A. Mourad, C. Talhi, and M. Guizani, “A Survey on Federated Learning: The Journey From Centralized to Distributed On-Site Learning and Beyond,” IEEE Internet Things J., vol. 8, no. 7, pp. 5476–5497, Apr. 2021, doi: 10.1109/JIOT.2020.3030072.
- “Federated Learning - Taiwan Medical Imaging,” Apr. 25, 2022. https://www.taimedimg.tw/federated_learning_framework/ (accessed Nov. 09, 2022).
- S. Utomo, A. John, A. Rouniyar, H.-C. Hsu, and P.-A. Hsiung, “Federated Trustworthy AI Architecture for Smart Cities,” in 2022 IEEE International Smart Cities Conference (ISC2), Sep. 2022, pp. 1–7. doi: 10.1109/ISC255366.2022.9922069.
This article was edited by Qi Lai.