Domain Model Learning in AI Planning

Tutorial at the AAAI 2026 Conference — 3.5 hours of theory and hands-on practice

January 21st 2026

Room TBD, Floor TBD, Singapore EXPO

About the Tutorial

Planning is a long-standing challenge for Artificial Intelligence, and state-of-the-art approaches to it require formal domain models that capture key aspects of the environment. Such models are usually crafted by a human expert, which is often a bottleneck that inhibits real-world deployment of planning technology.

This tutorial covers the literature on automatically learning planning domain models from observations. This includes methods for learning actions' preconditions and effects, identifying useful state representations, and acquiring models online.
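
As a taste of the simplest setting covered in the tutorial, the sketch below (an illustration for this page, not a specific system from the literature) learns a grounded STRIPS-style action model from fully observable, noise-free traces: candidate preconditions are narrowed by intersecting the states in which an action was observed to execute, and add/delete effects are read off the state differences. The function name and the toy trace are hypothetical.

```python
from collections import defaultdict

def learn_grounded_model(traces):
    """Learn a grounded STRIPS-like action model from noise-free, fully
    observable (state_before, action, state_after) triples, where states
    are frozensets of ground propositions."""
    pre = {}                # action -> candidate preconditions (conservative)
    add = defaultdict(set)  # action -> add effects
    delete = defaultdict(set)  # action -> delete effects

    for s_before, action, s_after in traces:
        # Any fact that was false when the action fired cannot be a precondition.
        pre[action] = s_before if action not in pre else pre[action] & s_before
        add[action] |= s_after - s_before      # facts that became true
        delete[action] |= s_before - s_after   # facts that became false

    return {a: (pre[a], add[a], delete[a]) for a in pre}

# Toy blocks-world-style example with two observations of the same action.
traces = [
    (frozenset({"clear(b)", "ontable(b)", "handempty"}), "pickup(b)",
     frozenset({"holding(b)"})),
    (frozenset({"clear(b)", "ontable(b)", "handempty", "on(a,c)"}), "pickup(b)",
     frozenset({"holding(b)", "on(a,c)"})),
]
print(learn_grounded_model(traces))
```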

Participants will gain a solid understanding of the core concepts in this active area of research, learn how it connects to Model-Based Reinforcement Learning, and become familiar with the state of the art. The tutorial also includes a hands-on session with open-source tools for learning and evaluating planning domain models.

Is this tutorial for me?

If you are interested in Automated Planning, Knowledge Representation, or Reinforcement Learning, the answer is definitely yes! Learning planning domain models is a growing research area related to all these fields. This tutorial is aimed at senior and junior researchers as well as practitioners, and is designed to equip you with the knowledge and resources to bring domain model learning into your own work.

What background knowledge is required?

Only a very basic background in automated planning is required; no more than the level of an undergraduate Introduction to AI class.

Tutorial Schedule (tentative)

Time | Session | Speaker | Slides
08:30–09:00 | Introduction & Domain Learning Basics | Roni Stern | TBD
09:00–10:00 | Offline Learning of Domain Models | Leonardo Lamanna | TBD
10:00–10:30 | Learning State Abstractions | Christian Muise (or Roni Stern) | TBD
10:30–10:45 | Coffee Break ☕ | – | –
11:00–11:30 | Hands-on Session | Christian Muise (or Leonardo Lamanna) | TBD
11:30–12:00 | Online Learning and Open Challenges | Roni Stern | TBD

Organizers

Organizer 1

Roni Stern

Professor and head of the Software Engineering program at Ben Gurion University of the Negev, Israel. Among other roles in the AI community, he served as President of the Symposium on Combinatorial Search (SoCS) and as co-Program Chair of the International Conference on Automated Planning and Scheduling (ICAPS).

Organizer 2

Christian Muise

Assistant Professor at Queen's University in Kingston, Ontario, Canada. His research focuses on model understanding in the field of automated planning, and his work has been recognized with both a Distinguished Dissertation Award and an Influential Paper Award from the International Conference on Automated Planning and Scheduling (ICAPS). He also sits on the ICAPS Council, having served as Program Co-Chair of ICAPS 2024.

Organizer 3

Leonardo Lamanna

Postdoctoral Researcher at Fondazione Bruno Kessler (FBK), Italy, working on integrating learning and symbolic planning for agents operating in unknown environments. He received his PhD in 2023 from the University of Brescia; his PhD thesis was recognized with the 2023 "Marco Cadoli" award from the Italian Association for Artificial Intelligence and published in the IOS Press "Frontiers in Artificial Intelligence and Applications" book series.

Resources



(Some) References

  • Arora A., Fiorino H., Pellier D., Métivier M. & Pesty S. A Review of Learning Planning Action Models. Knowledge Engineering Review, 2018.
  • Asai M. & Muise C. Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS). IJCAI, 2020.
  • Juba B., Le H. & Stern R. Safe Learning of Lifted Action Models. KR, 2021.
  • Lamanna L., Saetti A., Serafini L., Gerevini A. & Traverso P. Online Learning of Action Models for PDDL Planning. IJCAI, 2021.
  • Callanan E., De Venezia R., Armstrong V., Paredes A., Chakraborti T. & Muise C. MACQ: A Holistic View of Model Acquisition Techniques. ICAPS KEPS, 2022.
  • Juba B. & Stern R. Learning Probably Approximately Complete and Safe Action Models for Stochastic Worlds. AAAI, 2022.
  • Asai M., Kajino H., Fukunaga A. & Muise C. Classical Planning in Deep Latent Space. JAIR, 2022.
  • Mordoch A., Juba B. & Stern R. Learning Safe Numeric Action Models. AAAI, 2023.
  • Mordoch A., Stern R. & Juba B. Safe Learning of PDDL Domains with Conditional Effects. ICAPS, 2024.
  • Lamanna L. & Serafini L. Action Model Learning from Noisy Traces: A Probabilistic Approach. ICAPS, 2024.
  • Le H., Juba B. & Stern R. Learning Safe Action Models with Partial Observability. AAAI, 2024.
  • Lamanna L., Serafini L., Saetti A., Gerevini A. & Traverso P. Lifted Action Models Learning from Partial Traces. AIJ, 2025.