AIME: Toward More Intuitive Explanations of Machine Learning Predictions - A Breakthrough from Researchers of Musashino University

TOKYO, April 23, 2024 /PRNewswire/ -- Machine learning (ML) and artificial intelligence (AI) have emerged as key technologies for decision-making in fields such as automated driving, medical diagnostics, and finance. Current ML and AI models have even surpassed human intellectual capabilities in some regards. It is therefore important to understand, in an intuitive and comprehensible way, how these technologies arrive at their predictions and which features of the data most affect their outcomes. To meet these demands, interpretable ML algorithms and explainable AI (XAI) models, such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), have been developed. These methods construct and observe a simple approximate model to explain how different features in the dataset contribute to its predictions and estimations. However, existing interpretable ML and XAI methods rely on forward calculations to explain the black box, which can make deriving an explanation difficult.

Against this backdrop, Associate Professor Takafumi Nakanishi of the Department of Data Science at Musashino University, Japan, has introduced approximate inverse model explanations (AIME), an approach designed to provide more intuitive explanations. "AIME essentially reverse-calculates AI decisions," Dr. Nakanishi explains. The study was published in Volume 11 of IEEE Access on September 11, 2023, and summarized in an engaging video.

AIME takes a unique approach, estimating and constructing an inverse operator for an ML or AI model. This operator helps estimate the significance of both local and global features for the model's outputs.
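The paper's exact formulation is not reproduced here, but the inverse-operator idea can be sketched roughly: fit a linear map from a model's outputs back to its inputs (for instance, via the Moore-Penrose pseudoinverse) and read feature importance off that map. The toy model, data, and scoring below are illustrative assumptions, not the AIME algorithm itself:

```python
import numpy as np

# Toy black-box "model": a known linear map, so the expected importances
# are obvious. This is NOT the AIME algorithm from the paper, only a
# minimal sketch of the inverse-operator idea.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # 200 samples, 4 features
true_w = np.array([3.0, 0.0, 1.0, 0.0])  # only features 0 and 2 matter
y = X @ true_w                           # black-box predictions

# "Inverse" direction: estimate a linear operator A mapping the model's
# outputs back to its inputs with the Moore-Penrose pseudoinverse,
# i.e. A minimizes ||Y A - X|| in the least-squares sense.
Y = y.reshape(-1, 1)                     # outputs as a (200, 1) matrix
A = np.linalg.pinv(Y) @ X                # (1, 4): output -> input map

# Read a crude global feature-importance score off |A|.
importance = np.abs(A).ravel()
importance /= importance.max()
print(importance.round(2))
```

In this sketch, feature 0 receives the largest score and features 1 and 3 scores near zero, matching the weights used to generate the predictions. The appeal of the inverse direction is that the importance estimate comes from one closed-form pseudoinverse rather than repeated forward perturbations of the black box.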
Moreover, the method introduces a representative similarity distribution plot, which uses special representative estimation instances to show how a particular prediction relates to other instances, providing insight into the complexity of the target dataset's distribution.

The study found that explanations obtained with AIME were simpler and more intuitive than those provided by LIME and SHAP. AIME proved effective across a wide variety of datasets, including tabular data, handwritten digit images, and text, and the similarity distribution plot provided an objective visualization of the model's complexity. The experiments also revealed that AIME is more robust in handling multicollinearity. "It is particularly relevant in scenarios like explaining AI-generated art. Furthermore, self-driving cars will soon have data recorders like those in airplanes, which AIME could analyze to ascertain the cause of an accident in post-accident analysis," remarks Dr. Nakanishi. This development can bridge the gap between humans and AI, fostering deeper trust.

Reference
Title of original paper: Approximate Inverse Model Explanations (AIME): Unveiling Local and Global Insights in Machine Learning Models
Journal: IEEE Access
DOI: https://doi.org/10.1109/ACCESS.2023.3314336

SOURCE Musashino University