Designing AI-Resistant Exams for Computer Science Courses

Supervisor Name

Mamoun Nawahdah

Supervisor Email

mnawahdah@birzeit.edu

University

Birzeit University

Research field

Computer Science

Bio

Dr. Mamoun Nawahdah is an Assistant Professor of Computer Science at Birzeit University, Palestine. He earned his Ph.D. in Computer Science from the University of Tsukuba, Japan, where he was awarded the prestigious Japanese Government Scholarship (MEXT). His research focuses on human-computer interaction, educational technologies, and AI-driven learning systems. Dr. Nawahdah has published widely in international journals and conferences and actively contributes as a reviewer for leading venues in the field.

Abstract

With the rise of AI-powered tools capable of solving programming and theoretical problems, traditional assessments in computer science courses are becoming less effective at accurately measuring students' understanding and problem-solving skills. This research aims to develop techniques for designing exam questions that are resistant to AI solutions, ensuring a fair and effective evaluation of student knowledge. By analyzing AI capabilities and weaknesses, we propose novel assessment strategies that emphasize critical thinking, creativity, and personalized problem-solving.

1. Introduction

The integration of AI tools such as ChatGPT, Copilot, and Codeium has significantly changed the way students approach assignments and exams. While these tools can enhance learning, they also pose challenges to academic integrity and assessment validity. This study explores methods for designing exam questions that AI finds difficult to solve while maintaining fairness and educational effectiveness.

2. Research Objectives

• Identify types of exam questions that are challenging for AI models.
• Develop AI-resistant question design strategies tailored to computer science courses.
• Experimentally evaluate the effectiveness of these strategies by comparing AI performance with human student performance.
• Propose a framework to assist educators in designing AI-resistant exams.

3. Literature Review

• AI capabilities in solving programming and theoretical problems.
• Existing techniques for designing AI-resistant assessments.
• Best practices in evaluating students' problem-solving abilities.

4. Methodology

• AI Analysis: Assess AI's ability to solve various types of programming and theoretical questions.
• Question Design: Develop exam questions based on strategies such as:
  o Personalized problem variations.
  o Explain-your-answer questions.
  o Real-world contextual scenarios.
  o Project-based assessments.
• Experimentation: Conduct controlled tests with students and AI models to evaluate the effectiveness of the designed questions.
• Evaluation Metrics: Measure AI accuracy, student performance, and assessment reliability.

5. Expected Contributions

• A structured framework for designing AI-resistant exams in computer science courses.
• Insights into AI limitations and strategies for leveraging them in assessment design.
• Recommendations for educators on maintaining assessment integrity in the age of AI.

6. Conclusion

This research will provide educators with effective strategies for designing exams that accurately assess student learning while mitigating AI-assisted cheating. By understanding AI's strengths and weaknesses, we can create innovative assessment methods that promote genuine understanding and critical thinking.

7. References

(To be compiled based on relevant studies and sources.)
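To make the "personalized problem variations" strategy from the methodology concrete, one possible implementation is sketched below. It is a minimal illustration, not part of the proposal itself: the function name, the example array-rotation question, and the seeding scheme are all hypothetical. The idea is that hashing a student identifier yields a deterministic per-student random seed, so every student receives different numbers in the same question while the grader can regenerate each variant and its answer key exactly.

```python
import hashlib
import random

def personalized_variant(student_id: str, base_seed: int = 2025) -> dict:
    """Derive a deterministic per-student variant of an array-rotation question.

    Hashing (base_seed, student_id) gives each student distinct operands,
    yet the exact same variant can be regenerated later for grading.
    """
    digest = hashlib.sha256(f"{base_seed}:{student_id}".encode()).hexdigest()
    rng = random.Random(int(digest, 16))      # per-student deterministic RNG
    values = rng.sample(range(10, 100), k=6)  # unique operand values
    k = rng.randint(1, 5)                     # per-student rotation amount
    return {
        "prompt": f"Rotate the array {values} to the right by {k} positions "
                  f"and explain each intermediate step.",
        "expected": values[-k:] + values[:-k],  # grader's answer key
    }

variant = personalized_variant("student_1184")
print(variant["prompt"])
```

Because the variant is a pure function of the student ID and a course-wide seed, no per-student state needs to be stored, and combining the variation with an "explain each intermediate step" requirement folds in the explain-your-answer strategy as well.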