This article introduces a novel information-theoretic approach to developing machine learning and deep learning models that adhere to the guidelines and principles of trustworthy AI. A unified approach to privacy-preserving, interpretable, and transferable learning is considered for studying and optimizing the trade-offs between the privacy, interpretability, and transferability aspects of trustworthy AI. A variational membership-mapping Bayesian model is used for the analytical approximation of the defined information-theoretic measures of privacy leakage, interpretability, and transferability. The approach approximates these information-theoretic measures by maximizing a lower bound using variational optimization. It is demonstrated through numerous experiments on benchmark datasets and a real-world biomedical application concerned with detecting mental stress in individuals using heart rate variability analysis.
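To make the lower-bound maximization idea concrete, the sketch below is a minimal Python illustration of variational estimation of an information-theoretic quantity. It is not the article's variational membership-mapping Bayesian model; it instead uses the classical Barber-Agakov bound I(X;Y) >= H(Y) + E[log q(Y|X)], maximized over a small Gaussian decoder q, and all architecture choices and hyperparameters shown are illustrative assumptions.

```python
# Minimal sketch (assumption-laden, not the paper's model): maximize the
# Barber-Agakov variational lower bound on mutual information,
#   I(X; Y) >= H(Y) + E[log q(Y | X)],
# over a parametric decoder q, and compare with the closed-form value
# for jointly Gaussian data.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic correlated data: Y = rho * X + sqrt(1 - rho^2) * N, so I(X; Y) is known.
n, rho = 4096, 0.9
x = torch.randn(n, 1)
y = rho * x + math.sqrt(1.0 - rho ** 2) * torch.randn(n, 1)

class GaussianDecoder(nn.Module):
    """Variational decoder q(y | x): a small MLP predicting mean and log-variance."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def log_prob(self, x, y):
        mu, log_var = self.net(x).chunk(2, dim=-1)
        return -0.5 * (math.log(2 * math.pi) + log_var + (y - mu) ** 2 / log_var.exp())

q = GaussianDecoder()
opt = torch.optim.Adam(q.parameters(), lr=1e-2)

# Maximizing E[log q(y | x)] tightens the lower bound on I(X; Y).
for step in range(500):
    opt.zero_grad()
    loss = -q.log_prob(x, y).mean()
    loss.backward()
    opt.step()

# H(Y) for the unit-variance Gaussian marginal (known here by construction).
h_y = 0.5 * math.log(2 * math.pi * math.e)
mi_lower_bound = h_y + q.log_prob(x, y).mean().item()
mi_true = -0.5 * math.log(1.0 - rho ** 2)
print(f"variational lower bound: {mi_lower_bound:.3f} nats")
print(f"closed-form I(X;Y):      {mi_true:.3f} nats")
```

The same principle, replacing the exact (intractable) information-theoretic measure with a tractable variational bound and tightening it by optimization, underlies the article's analytical approximations of privacy leakage, interpretability, and transferability.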