Domain Model Learning for Automated Planning

Tutorial at the AAMAS 2026 Conference — 3.5 hours of theory and hands-on practice

May 26, 2026 (morning)

Coral Beach Hotel

About the Tutorial

Planning is a long-standing challenge for Artificial Intelligence, and the state-of-the-art approach to address it requires formal domain models that capture key aspects of the environment. Obtaining such models is usually done by a human expert, and is often a bottleneck and inhibitor to real-world deployment of planning technology.

This tutorial covers the literature on automatically learning planning domain models from observations. This includes methods for learning actions' preconditions and effects, identifying useful state representations, and active model acquisition.
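To give a flavor of what "learning preconditions and effects from observations" means, the following is a minimal illustrative sketch (not taken from the tutorial materials): given observed (state, action, next-state) transitions over propositional states, candidate preconditions can be estimated as the intersection of all pre-states in which an action was applied, and add/delete effects as the differences between pre- and post-states. All predicate and action names below are invented for the toy example.

```python
# Illustrative sketch: learning a propositional STRIPS-style action model
# from observed (state, action, next_state) transitions.
# States are represented as sets of true propositions.

from collections import defaultdict

def learn_action_models(traces):
    """Estimate, for each action: preconditions as the intersection of all
    observed pre-states, and add/delete effects from state differences."""
    pre = {}                  # action -> candidate precondition set
    add = defaultdict(set)    # action -> propositions made true
    delete = defaultdict(set) # action -> propositions made false
    for s, a, s_next in traces:
        pre[a] = (s & pre[a]) if a in pre else set(s)
        add[a] |= s_next - s       # became true after applying a
        delete[a] |= s - s_next    # became false after applying a
    return {a: {"pre": pre[a], "add": add[a], "del": delete[a]} for a in pre}

# Toy trace: a gripper picking up and dropping a ball (hypothetical domain).
traces = [
    ({"hand_empty", "ball_on_table"}, "pickup", {"holding_ball"}),
    ({"holding_ball"}, "drop", {"hand_empty", "ball_on_table"}),
    ({"hand_empty", "ball_on_table", "door_open"}, "pickup",
     {"holding_ball", "door_open"}),
]
models = learn_action_models(traces)
# The second "pickup" observation rules out "door_open" as a precondition,
# since the intersection of pre-states drops propositions not always present.
```

This intersection-based scheme only illustrates the basic idea; the methods covered in the tutorial handle lifted (parameterized) actions, partial observability, and noise, which this sketch does not.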

Participants will gain a solid understanding of the core concepts in this active area of research, learn how it connects to Model-Based Reinforcement Learning, and become familiar with the state of the art. The tutorial also includes a hands-on session with open-source tools for learning and evaluating planning domain models.

Is this tutorial for me?

If you are interested in Automated Planning, Knowledge Representation, or Reinforcement Learning, the answer is definitely yes! Learning planning domain models is a growing research area related to all these fields. This tutorial is designed for senior and junior researchers as well as practitioners, and aims to equip you with the knowledge and resources to bring domain model learning into your own work.

What background knowledge is required?

Only a very basic background in automated planning is required; no more than the level of an undergraduate Introduction to AI class.

Tutorial Schedule (tentative)

Time        | Session                               | Speaker          | Slides
08:30–09:15 | Introduction & Domain Learning Basics | Roni Stern       | TBA
09:15–09:45 | Learning State Abstractions           | Roni Stern       | TBA
09:45–10:15 | Coffee Break ☕                        |                  |
10:15–11:00 | Offline Learning of Action Models     | Leonardo Lamanna | TBA
11:00–11:45 | Hands-on Session                      | Leonardo Lamanna | TBA
11:45–12:30 | Active Learning and Open Challenges   | Roni Stern       | TBA

Organizers

Organizer 1

Roni Stern

Professor and head of the Software Engineering program at Ben Gurion University of the Negev, Israel. Among other roles in the AI community, he served as President of the Symposium on Combinatorial Search (SoCS) and as co-Program Chair of the International Conference on Automated Planning and Scheduling (ICAPS).

Organizer 2

Leonardo Lamanna

Postdoctoral Researcher at Fondazione Bruno Kessler (FBK), Italy, working on integrating learning and symbolic planning for agents operating in unknown environments. He received his PhD in 2023 from the University of Brescia; his PhD thesis was recognized with the 2023 "Marco Cadoli" award from the Italian Association for Artificial Intelligence and published in the IOS Press "Frontiers in Artificial Intelligence and Applications" book series.

Resources

(Some) References

  • Arora A., Fiorino H., Pellier D., Métivier M. & Pesty S. A review of learning planning action models. Knowledge Engineering Review, 2018.
  • Asai M. & Muise C. Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS). IJCAI, 2020.
  • Juba B., Le H. & Stern R. Safe Learning of Lifted Action Models. KR, 2021.
  • Lamanna L., Saetti A., Serafini L., Gerevini A. & Traverso P. Online Learning of Action Models for PDDL Planning. IJCAI, 2021.
  • Callanan E., De Venezia R., Armstrong V., Paredes A., Chakraborti T. & Muise C. MACQ: A Holistic View of Model Acquisition Techniques. ICAPS KEPS, 2022.
  • Juba B. & Stern R. Learning Probably Approximately Complete and Safe Action Models for Stochastic Worlds. AAAI, 2022.
  • Asai M., Kajino H., Fukunaga A. & Muise C. Classical Planning in Deep Latent Space. JAIR, 2022.
  • Mordoch A., Juba B. & Stern R. Learning Safe Numeric Action Models. AAAI, 2023.
  • Mordoch A., Stern R. & Juba B. Safe Learning of PDDL Domains with Conditional Effects. ICAPS, 2024.
  • Lamanna L. & Serafini L. Action Model Learning from Noisy Traces: A Probabilistic Approach. ICAPS, 2024.
  • Le H., Juba B. & Stern R. Learning Safe Action Models with Partial Observability. AAAI, 2024.
  • Lamanna L., Serafini L., Saetti A., Gerevini A. & Traverso P. Lifted Action Models Learning from Partial Traces. AIJ, 2025.

Acknowledgements