Abstract (English)

Well-evidenced advances of data-driven complex machine learning approaches emerging within the so-called second wave of artificial intelligence (AI) have fostered the exploration of possible AI applications across various domains and aspects of human life, practices, and society. Most of the recent successes in AI stem from representation learning with end-to-end trained deep neural network models in tasks such as image, text, and speech recognition, or strategic board and video games. By enabling automatic feature engineering, deep learning models significantly reduce the reliance on domain-expert knowledge, outperforming traditional methods based on handcrafted feature engineering and achieving performance that equals or even surpasses that of humans in some respects. Despite these outstanding advancements and potential benefits, concerns about the black-box nature and the lack of transparency of deep-learning-based AI solutions have hampered their broader application in society. To fully trust, accept, and adopt newly emerging AI solutions in our everyday lives and practices, we need human-centric explainable AI (HC-XAI) that can provide human-understandable interpretations of their algorithmic behavior and outcomes, thereby enabling us to control and continuously improve their performance, robustness, fairness, accountability, transparency, and explainability throughout the entire lifecycle of AI applications. Following this motivation, a recently emerging trend within diverse and multidisciplinary research communities is the exploration of human-centric AI approaches and the development of contextual explanatory models propelling the symbiosis of human intelligence (HI) and artificial intelligence (AI), which forms the basis of the next (third) wave of AI.