Many machine learning models (e.g., deep neural networks and gradient-boosted trees) are used as black boxes. Although these models can achieve high accuracy on many real-world problems, their lack of interpretability limits their applicability to high-stakes domains such as healthcare and finance. Understanding predictive models is therefore crucial for broadening the range of applications where machine learning can be trusted. It also assists with model debugging, leading to higher-quality models. This project researches explainable/interpretable AI, which attempts to understand how machine learning and deep learning models behave.
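As an illustration of one widely used model-agnostic explanation technique (not necessarily the method this project will use), the sketch below implements permutation feature importance from scratch: a feature matters if shuffling its column degrades the model's predictions. The `permutation_importance` helper, the toy model, and the data are all hypothetical names introduced here for illustration.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Estimate each feature's importance as the average increase in
    mean squared error after shuffling that feature's column."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature/target association
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(mse(X_perm) - baseline)
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "black box": only the first feature actually influences the output.
model = lambda row: 3.0 * row[0]
X = [[float(i), float(i % 5)] for i in range(50)]
y = [model(row) for row in X]

imp = permutation_importance(model, X, y)
```

Shuffling the informative feature raises the error, so `imp[0]` is large, while the unused second feature leaves predictions unchanged, so `imp[1]` is zero. The same idea scales to real black-box models by swapping in their `predict` function.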