Smart Glasses and Stick-Enhanced Assistive System with Indoor Localization for Blind Students
Supervisor Name
Ayman Wazwaz
Supervisor Email
aymanw@ppu.edu
University
Palestine Polytechnic University
Research field
Computer Science
Bio
Dr. Ayman Wazwaz, Lecturer at the Computer Engineering Department, Palestine Polytechnic University. Holds a PhD in Internet of Things, a Master’s in Telematics, and a Bachelor's in Computer Engineering. Research interests include edge computing, embedded systems, and computer networks.
Description
Introduction: Ensuring accessible educational environments requires technologies that enable visually impaired students to navigate safely and independently. While outdoor navigation has been widely addressed through the Global Positioning System (GPS), indoor navigation remains a major challenge due to the absence of reliable positioning signals and the dynamic nature of indoor environments. Traditional assistive tools such as the white cane provide limited situational awareness and can neither detect obstacles at different elevations nor interpret spatial information. Recent advances in computer vision, embedded artificial intelligence, and IoT technologies offer new possibilities for intelligent assistive systems capable of perceiving and interpreting indoor environments in real time. This research proposes a multimodal assistive navigation system that integrates AI-powered smart glasses, a sensor-enhanced smart stick, and a Wi-Fi-based indoor localization subsystem. By combining visual perception, obstacle detection, and location awareness, the system aims to support safer and more independent indoor navigation for blind students in educational environments.

Research Objectives: The project aims to investigate how multimodal sensing and edge AI can improve assistive indoor navigation. The main objectives include:
1. Investigating the integration of computer vision, ultrasonic sensing, and Wi-Fi localization to enhance environmental perception.
2. Designing a context-aware indoor navigation framework for educational environments.
3. Exploring the feasibility of Wi-Fi fingerprinting for room-level indoor localization.
4. Implementing lightweight AI models on embedded hardware for real-time perception.
5. Evaluating multimodal feedback mechanisms combining auditory guidance and tactile feedback.
6. Studying voice-based interaction for intuitive communication between the user and the system.
7. Developing and evaluating a prototype assistive navigation system for visually impaired students.
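To illustrate the room-level Wi-Fi fingerprinting idea in objective 3, the sketch below shows a common approach: an offline "radio map" of received-signal-strength (RSSI) fingerprints is collected per room, and the online phase matches a live scan against it with a k-nearest-neighbor vote in signal space. All access-point names, RSSI values, and room labels are illustrative assumptions, not measurements from the proposed system.

```python
# Minimal sketch of room-level Wi-Fi fingerprinting via k-nearest neighbors.
# RADIO_MAP entries are hypothetical offline-phase fingerprints:
# {access point id: RSSI in dBm} paired with the room where they were recorded.
from collections import Counter
import math

RADIO_MAP = [
    ({"ap1": -40, "ap2": -70, "ap3": -80}, "Lab 101"),
    ({"ap1": -45, "ap2": -65, "ap3": -85}, "Lab 101"),
    ({"ap1": -75, "ap2": -40, "ap3": -60}, "Lecture Hall A"),
    ({"ap1": -70, "ap2": -45, "ap3": -65}, "Lecture Hall A"),
    ({"ap1": -80, "ap2": -60, "ap3": -35}, "Library"),
    ({"ap1": -85, "ap2": -55, "ap3": -40}, "Library"),
]

MISSING_RSSI = -100  # assumed floor value when an access point is not heard

def signal_distance(scan, fingerprint):
    """Euclidean distance in RSSI space over the union of access points."""
    aps = set(scan) | set(fingerprint)
    return math.sqrt(sum(
        (scan.get(ap, MISSING_RSSI) - fingerprint.get(ap, MISSING_RSSI)) ** 2
        for ap in aps))

def locate(scan, k=3):
    """Online phase: majority vote among the k closest fingerprints."""
    nearest = sorted(RADIO_MAP, key=lambda entry: signal_distance(scan, entry[0]))[:k]
    votes = Counter(room for _, room in nearest)
    return votes.most_common(1)[0][0]

# A live scan close to the Lab 101 fingerprints resolves to that room.
print(locate({"ap1": -42, "ap2": -68, "ap3": -82}))  # → Lab 101
```

Room-level (rather than coordinate-level) output keeps the classifier simple and matches the auditory guidance the proposal envisions ("entering Lab 101"); in a real deployment the radio map would need periodic re-surveying, since indoor RSSI drifts with furniture, crowds, and access-point changes.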
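For the smart stick's obstacle detection and tactile feedback (objectives 1 and 5), a typical ultrasonic ranging pipeline converts the echo pulse duration into a distance and maps that distance to a vibration intensity. The conversion below follows the standard time-of-flight formula (speed of sound ≈ 343 m/s at room temperature, halved for the round trip); the distance thresholds and feedback levels are illustrative assumptions, not the project's tuned values.

```python
# Sketch: ultrasonic time-of-flight ranging and a simple tactile feedback policy.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed operating condition)

def echo_to_distance(pulse_duration_s: float) -> float:
    """Convert an echo pulse duration (seconds) to obstacle distance (meters).

    The pulse covers the round trip to the obstacle and back, so halve it.
    """
    return pulse_duration_s * SPEED_OF_SOUND / 2

def feedback_level(distance_m: float) -> str:
    """Map distance to a vibration intensity (thresholds are illustrative)."""
    if distance_m < 0.5:
        return "strong"    # imminent obstacle: strong vibration
    elif distance_m < 1.5:
        return "moderate"  # obstacle ahead: moderate vibration
    else:
        return "none"      # clear path: no tactile alert

# A 10 ms echo corresponds to an obstacle about 1.7 m away.
print(feedback_level(echo_to_distance(0.010)))  # → none
```

Keeping the ranging math separate from the feedback policy makes it easy to fuse the stick's readings with the glasses' vision output later, and to retune thresholds per user without touching the sensor code.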
