Researchers develop new method to improve interpretability and trust in AI models for industry

The researchers demonstrated the effectiveness of their approach through two real-world case studies: a sulphur recovery unit and a milling process in the cement industry.

Sara Machado - FCTUC
Diana Taborda (EN transl.)
25 June 2025 · ≈ 3 min read

The rapid advancement of artificial intelligence (AI) solutions has brought a major challenge to the forefront: building human trust in high-performance models, particularly due to difficulties in understanding how they make decisions. This issue is especially critical in industrial environments, where systems must be not only accurate but also transparent and interpretable to ensure safe and effective operations.

A team of researchers from the Faculty of Sciences and Technology at the University of Coimbra (FCTUC) has developed an innovative approach to extracting interpretable knowledge from complex AI models, thereby making them more user-friendly for the industry sector. The method utilizes fuzzy logic systems to represent industrial process dynamics in a straightforward yet effective manner.

“This method was developed around a ‘teacher-student’ architecture (knowledge distillation), where a complex model (NFN-LSTM), which combines long short-term memory (LSTM) neural networks with fuzzy logic, serves as a reference to train a simpler, more interpretable model (NFN-MOD). This simpler model uses delayed input functions to simulate the temporal memory of the process, balancing performance with understandability,” explains Jorge S. S. Júnior, a doctoral student at the FCTUC Department of Electrical and Computer Engineering.
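The teacher-student idea described above can be illustrated with a minimal sketch. The toy models below are purely hypothetical stand-ins, not the authors' NFN-LSTM or NFN-MOD: a "teacher" whose output depends on past inputs is imitated by a simple "student" that only sees delayed (lagged) copies of the input, which is the role the delayed input functions play in mimicking temporal memory.

```python
import numpy as np

# Hypothetical sketch of knowledge distillation with delayed inputs.
# The teacher here is an illustrative linear process with memory;
# the real work uses an NFN-LSTM teacher and an NFN-MOD fuzzy student.

rng = np.random.default_rng(0)

# Simulated process input signal
u = rng.normal(size=500)

# "Teacher": its output depends on the current and past inputs
# (i.e., it has internal temporal dynamics)
y_teacher = np.zeros_like(u)
for t in range(2, len(u)):
    y_teacher[t] = 0.6 * u[t] + 0.3 * u[t - 1] + 0.1 * u[t - 2]

# "Student": a simple model over delayed inputs u[t], u[t-1], u[t-2]
X = np.column_stack([u[2:], u[1:-1], u[:-2]])
target = y_teacher[2:]  # distillation: the teacher's output is the target

# Fit the student to reproduce the teacher's predictions
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(coeffs, 2))  # → [0.6 0.3 0.1], the teacher's dynamics
```

Because the student is trained on the teacher's outputs rather than raw labels, its simple, inspectable parameters (here, three lag coefficients) expose how the process depends on past inputs, which is the interpretability gain the researchers pursue with fuzzy rules.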

The research team demonstrated the effectiveness of their approach through two real-world case studies: a sulphur recovery unit and a milling process within the cement industry. In both cases, the NFN-MOD model successfully replicated the behaviour of the teacher model with high accuracy and provided clear explanations of the factors driving critical events such as spikes in harmful gas emissions or fluctuations in waste concentration.

Furthermore, the model introduces a novel form of contextual analysis, enabling operators to better understand different industrial scenarios and supporting decision-making. The researchers conclude that this method promises to significantly increase trust in AI systems and improve process control in challenging industrial settings.

This work was developed as part of Jorge S. S. Júnior's doctoral thesis, supervised by Jérôme Mendes of the Centre for Mechanical Engineering, Materials and Processes (CEMMPRE) and co-supervised by Cristiano Premebida of the Institute of Systems and Robotics (ISR). The project also involved collaboration with Francisco Souza from imec-NL, OnePlanet Research Center, in the Netherlands.

The scientific article “Distilling Complex Knowledge Into Explainable T–S Fuzzy Systems” was published in the journal IEEE Transactions on Fuzzy Systems and highlighted in the IEEE Computational Intelligence Society (CIS) Newsletter as a ‘Research Frontier’.