End-to-End Deep Learning System for Sign Language and Emotion Classification
Supervisor Name
Hadi Khalilia
Supervisor Email
h.khalilia@ptuk.edu.ps
University
Palestine Technical University - Kadoorie (PTUK)
Research field
Computer Science
Bio
Hadi Khalilia is a faculty member in Computer Science and Artificial Intelligence, with an interdisciplinary background spanning computational linguistics, natural language processing (NLP), language diversity, information retrieval, and machine learning. He serves as Head of Computer Science at the College of Information Technology and Artificial Intelligence at Palestine Technical University - Kadoorie (PTUK) and is a member of the KnowDive Research Group at the University of Trento, Italy. He holds a Ph.D. in Information and Communication Technology from the University of Trento, where his research focused on developing language resources and addressing lexical gaps in natural languages. He has published in international journals and conferences, including Frontiers in Psychology, LREC, ICNLSP, and ACL workshops, and his work has advanced multilingual lexicons, improved the quality of Arabic WordNet, and studied lexical diversity across languages and dialects. Dr. Khalilia has also held several academic leadership and administrative roles.
Description
This project proposes an AI-based platform that translates Palestinian Sign Language (PSL) while simultaneously recognizing and conveying the emotional content embedded in signed communication. Most existing sign language translation systems focus solely on converting gestures into text or speech and largely neglect the emotional expression conveyed through facial movements, body posture, and signing dynamics, all of which are essential components of natural human interaction.

The proposed system applies deep learning techniques from computer vision and emotion recognition to short recorded video clips (1–5 seconds) of sign language. These controlled, non-real-time recordings allow accurate annotation, reliable emotion labeling, and effective model training. The system is designed as a modular framework consisting of four stages: video preprocessing, sign gesture recognition, emotion detection, and a unified bilingual output interface that presents linguistic meaning and emotional context in a synchronized manner.

By integrating emotion-aware sign language translation, the project aims to improve communication between Deaf and hearing individuals, making interactions more natural, empathetic, and human-centered. By focusing on PSL within the Arabic context, the project also contributes to documenting an underrepresented sign language and supports the development of inclusive AI technologies. The resulting platform could be extended to applications in education, healthcare, and social services, promoting digital inclusivity and accessibility for the Deaf community.
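To make the four-stage modular design concrete, the following is a minimal sketch in PyTorch. All class names, layer sizes, the 16-frame clip length, and the sign/emotion class counts are illustrative assumptions, not the project's actual architecture.

```python
# Minimal sketch of the four-stage pipeline: preprocessing, gesture
# recognition, emotion detection, and a combined output.
# NUM_SIGNS and NUM_EMOTIONS are hypothetical placeholders.
import torch
import torch.nn as nn

NUM_SIGNS = 50      # hypothetical PSL vocabulary size
NUM_EMOTIONS = 6    # hypothetical emotion label set


class VideoPreprocessor(nn.Module):
    """Stage 1: normalize a short clip (batch, T frames, 3, H, W)."""
    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return clip.float() / 255.0  # scale pixel values to [0, 1]


class GestureEncoder(nn.Module):
    """Stage 2: per-frame CNN features pooled over time, then sign logits."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.sign_head = nn.Linear(32, NUM_SIGNS)

    def forward(self, clip):
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, 32)
        pooled = feats.mean(dim=1)  # temporal average pooling
        return self.sign_head(pooled), pooled


class EmotionHead(nn.Module):
    """Stage 3: emotion logits from the same pooled visual features."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(32, NUM_EMOTIONS)

    def forward(self, pooled):
        return self.head(pooled)


class EmotionAwareSLT(nn.Module):
    """Stage 4: unified output pairing sign and emotion predictions."""
    def __init__(self):
        super().__init__()
        self.pre = VideoPreprocessor()
        self.gesture = GestureEncoder()
        self.emotion = EmotionHead()

    def forward(self, clip):
        x = self.pre(clip)
        sign_logits, pooled = self.gesture(x)
        return sign_logits, self.emotion(pooled)


if __name__ == "__main__":
    model = EmotionAwareSLT()
    # Two dummy clips of 16 frames at 112x112 resolution.
    dummy_clip = torch.randint(0, 256, (2, 16, 3, 112, 112))
    signs, emotions = model(dummy_clip)
    print(signs.shape, emotions.shape)  # (2, NUM_SIGNS), (2, NUM_EMOTIONS)
```

Sharing the pooled visual features between the sign head and the emotion head is one plausible way to keep the two outputs synchronized per clip; the actual project might instead use separate encoders for hands, face, and posture before fusing them in the bilingual output interface.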
