![](https://crypto4nerd.com/wp-content/uploads/2024/03/1710571558_15EyQc-m3dOkCJyMbdLfL2w.jpeg)
1. Combination of Weak Learners eXplanations to Improve Random Forest eXplicability Robustness (arXiv)
Author : Riccardo Pala, Esteban García-Cuesta
Abstract : The notion of robustness in XAI refers to the observed variations in the explanation of the prediction of a learned model with respect to changes in the input leading to that prediction. Intuitively, if the input being explained is modified slightly, subtly enough so as not to change the prediction of the model too much, then we would expect that the explanation provided for that new input does not change much either. We argue that combining the explanations of an ensemble's weak learners through discriminative averaging can improve the robustness of explanations in ensemble methods. This approach has been implemented and tested with the post-hoc SHAP method and a Random Forest ensemble, with successful results. The improvements obtained have been measured quantitatively, and some insights into explicability robustness in ensemble methods are presented.
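The core idea, combining the weak learners' explanations instead of explaining only the ensemble's output, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a simple mean-imputation (occlusion) attribution stands in for SHAP, and the weighting scheme (counting only trees that agree with the ensemble vote) is a guess at what a "discriminative averaging" might look like.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def occlusion_attribution(predict_proba, x, background_mean):
    """Per-feature attribution: change in P(class 1) when a feature is
    replaced by its background mean (a simple stand-in for SHAP)."""
    fx = predict_proba(x[None, :])[0, 1]
    attr = np.empty(len(x))
    for j in range(len(x)):
        x_j = x.copy()
        x_j[j] = background_mean[j]
        attr[j] = fx - predict_proba(x_j[None, :])[0, 1]
    return attr

# Toy data and a small Random Forest of weak learners
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
rf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
bg_mean = X.mean(axis=0)
x = X[0]

# One explanation per weak learner, then a weighted average; here each
# tree is weighted by whether it agrees with the ensemble's prediction
# (an illustrative choice, not the paper's exact scheme).
ensemble_label = rf.predict(x[None, :])[0]
weights = np.array([1.0 if t.predict(x[None, :])[0] == ensemble_label else 0.0
                    for t in rf.estimators_])
if weights.sum() == 0:  # degenerate case: fall back to a plain mean
    weights[:] = 1.0
per_tree = np.stack([occlusion_attribution(t.predict_proba, x, bg_mean)
                     for t in rf.estimators_])
combined = (weights[:, None] * per_tree).sum(axis=0) / weights.sum()
print(combined.shape)  # one combined attribution score per feature
```

Because each tree's attribution is computed on the same perturbation scheme, the combined explanation varies more smoothly under small input changes than an explanation built from any single tree.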
2. Copula Approximate Bayesian Computation Using Distribution Random Forests (arXiv)
Author : George Karabatsos
Abstract : This paper introduces a novel Approximate Bayesian Computation (ABC) framework for estimating the posterior distribution and the maximum likelihood estimate (MLE) of the parameters of models defined by intractable likelihood functions. This framework can describe the possibly skewed and high-dimensional posterior distribution by a novel multivariate copula-based distribution, based on univariate marginal posterior distributions which can account for skewness and be accurately estimated by Distribution Random Forests (DRF) while performing automatic summary-statistics (covariates) selection, and on robustly estimated copula dependence parameters. The framework employs a novel multivariate mode estimator to perform MLE and posterior mode estimation, and provides an optional step to perform model selection from a given set of models, with posterior probabilities estimated by DRF. The posterior distribution estimation accuracy of the ABC framework is illustrated through simulation studies involving models with analytically computable posterior distributions, and involving exponential random graph and mechanistic network models, each defined by an intractable likelihood from which it is costly to simulate large network datasets. The framework is also illustrated through analyses of large real-life networks with sizes ranging from 28,000 to 65.6 million nodes (3 million to 1.8 billion edges), including a large multilayer network with weighted directed edges.
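The copula construction at the heart of the framework, joining separately estimated univariate marginal posteriors through a dependence structure, can be sketched with a Gaussian copula. This is an illustrative assumption: the correlation matrix and the two marginals (a skewed Gamma and a Normal) are placeholders for the robustly estimated dependence parameters and DRF-estimated marginals of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Latent Gaussian-copula correlation (in the paper this dependence is
# robustly estimated; here it is simply fixed for illustration).
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])

# Univariate marginals standing in for the DRF-estimated marginal
# posteriors: one skewed, one symmetric (illustrative choices).
marginals = [stats.gamma(a=2.0, scale=1.5), stats.norm(loc=0.0, scale=2.0)]

# Draw correlated latent Gaussians, map them to uniforms with the
# standard normal CDF, then push each column through its marginal's
# inverse CDF: joint draws with the given marginals and dependence.
z = rng.multivariate_normal(np.zeros(2), R, size=5000)
u = stats.norm.cdf(z)
theta = np.column_stack([m.ppf(u[:, k]) for k, m in enumerate(marginals)])
print(theta.shape)  # 5000 joint posterior draws over 2 parameters
```

The resulting sample keeps each marginal's shape (including skewness) exactly, while the copula supplies the cross-parameter dependence, which is what lets the framework assemble a high-dimensional posterior from accurate one-dimensional pieces.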