Abstract
University-level teaching of computer science, and of programming in particular, faces persistent challenges in fostering deep understanding and abstract reasoning. Intelligent tutors based on large language models (LLMs) offer personalized support but often short-circuit learning by handing students complete solutions. This article proposes applying the "what-if" concept from UX design (generating hypothetical scenarios to challenge assumptions) in order to encourage proactive exploration and counterfactual reasoning. A conceptual framework is presented, comprising prompt patterns, an interactive interface, and an exploration loop. A prospective discussion addresses implementation in a university context: pedagogical integration, ethical and technical challenges, and expected benefits for engagement and comprehension. This work is a theoretical contribution with practical implications for the evolution of AI-supported computer science education.
Keywords: AI in education; intelligent tutor; what-if scenarios; UX design; algorithms and data structures; exploratory learning
End Notes
- The LTI (Learning Tools Interoperability) standard enables secure information exchange between an LMS and an external learning tool: https://moodle.com/fr/nouvelles/quest-ce-que-lti-et-comment-il-peut-ameliorer-votre-ecosysteme-dapprentissage/.
- Scaffolding refers to temporary instructional support (hints, prompts, worked examples) that is gradually withdrawn as learners gain autonomy. It should not be confused with content-and-language integrated learning (CLIL), such as teaching mathematics in English, as is done in some European classrooms.