Guidelines and principles of trustworthy AI should be adhered to in practice during the development of AI systems. This work proposes a novel information theoretic trustworthy AI framework, based on the hypothesis that information theory enables the ethical AI principles to be taken into account during the development of machine learning and deep learning models, by providing a way to study and optimize the inherent tradeoffs between trustworthy AI principles. Under the proposed framework, a unified approach to ``privacy-preserving, interpretable, and transferable learning'' is considered in order to introduce information theoretic measures of privacy-leakage, interpretability, and transferability. A technique based on variational optimization, employing \emph{conditionally deep autoencoders}, is developed for practically computing the defined measures.
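As a sketch of the variational idea (the exact measures and bounds used in this work may differ), the privacy-leakage of a learned representation $Z$ about a sensitive attribute $S$ can be quantified via the mutual information $I(Z;S)$, which admits the standard variational lower bound
\[
I(Z;S) \;=\; H(S) - H(S \mid Z) \;\geq\; H(S) + \mathbb{E}_{p(z,s)}\!\left[\log q(s \mid z)\right],
\]
where $q(s \mid z)$ is any variational decoder, here assumed to be realized by a conditional autoencoder, and the bound becomes tight when $q(s \mid z) = p(s \mid z)$. Maximizing the right-hand side over the parameters of $q$ thus turns an intractable information theoretic quantity into a practically computable estimate, which is the sense in which variational optimization makes the defined measures operational.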