Sparkle Planning Challenge 2019

The Sparkle Planning Challenge 2019 is a novel competitive event that aims to assess the state of the art in solving planning benchmark problems by leveraging cutting-edge, automatically constructed planner selectors, and to quantify the contributions of individual planners. The Sparkle Planning Challenge 2019 will follow the competition mechanisms used by the Sparkle SAT Challenge 2018.

It is well established that the state of the art for planning is not defined by a single planner, but rather by a set of non-dominated planners with complementary strengths. A prominent way of exploiting this performance complementarity is to leverage machine learning techniques to build effective automatic planning selectors on top of state-of-the-art planners. The Sparkle Planning Challenge automatically combines all participating planners into a state-of-the-art planning selector, and assesses the contribution of each participating planner to the performance of that planning selector. It thus encourages submitters to make substantial contributions to the state of the art in planning as realised by this selector, by maximising the contribution to overall selector performance due to their planner.
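To make the notion of marginal contribution concrete, the following is an illustrative sketch only (not the official Sparkle scoring code): it scores an idealised selector that always picks the fastest participating planner on each instance, and measures each planner's contribution as the performance lost when that planner is removed. The planner names and runtimes are made-up example data.

```python
def virtual_best_score(runtimes, planners):
    """Total runtime of an idealised selector that picks, per instance,
    the fastest planner among `planners` (lower is better)."""
    n_instances = len(next(iter(runtimes.values())))
    return sum(min(runtimes[p][i] for p in planners)
               for i in range(n_instances))

def marginal_contribution(runtimes, planner):
    """Increase in the selector's total runtime when `planner` is removed."""
    all_planners = list(runtimes)
    others = [p for p in all_planners if p != planner]
    return (virtual_best_score(runtimes, others)
            - virtual_best_score(runtimes, all_planners))

# Hypothetical runtimes (seconds) of three planners on three instances.
runtimes = {
    "planner_a": [10.0, 300.0, 5.0],
    "planner_b": [12.0, 20.0, 400.0],
    "planner_c": [11.0, 25.0, 6.0],   # never uniquely best on any instance
}

for p in runtimes:
    print(p, marginal_contribution(runtimes, p))
# planner_a 2.0, planner_b 5.0, planner_c 0.0
```

Note how planner_c, although competitive everywhere, contributes nothing: a planner that is never the best choice on any instance does not improve the selector, which is exactly why near-duplicate submissions fare poorly in this challenge.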



Planner developers will submit their planners and supporting information via e-mail (see the detailed information on Planner Submission below). During a period of 10 days before the submission deadline, every 24-48 hours, a state-of-the-art selector will be constructed, based on all planners available at that time, using a set of training instances drawn from previous International Planning Competitions. The contributions to the performance of this selector on the training set will be published in a leaderboard accessible to all participants. During this phase, participants can resubmit their planners as often as they desire.

Planner submissions need to list all authors, and no planner author can be involved in more than three separate submissions (not counting resubmissions, which replace previously submitted versions of a planner). Please note that submitting multiple planners that perform well on very similar types of planning benchmark instances can be expected to result in poor performance in the Sparkle challenge, since planners are assessed based on their marginal contribution to overall selector performance.

In the Sparkle Planning Challenge 2019, the performance of planners will be measured using the runsolver tool.
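As a rough illustration, a runsolver measurement might look like the following; the flag names follow common runsolver releases, but the limits, file names, and planner invocation are placeholders, not the official competition settings.

```shell
# Placeholder sketch of a runsolver call (not the official competition setup):
#   -C / -W       CPU and wall-clock time limits in seconds (example values)
#   -M            memory limit in MiB (example value)
#   -o / -w / -v  planner output, watcher log, and parsed resource values
runsolver -C 1800 -W 1860 -M 8192 \
    -o planner.out -w watcher.out -v values.out \
    ./my-planner domain.pddl problem.pddl plan.sol
```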

The competition will be run on the Sparkle platform, a problem-solving platform designed to enable the widespread and effective use of programming by optimisation (PbO) techniques for improving the state of the art in solving a broad range of prominent AI problems, including SAT and AI planning. The Sparkle platform is being developed by the ADA Research Group at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University. The Sparkle challenge will be run on a large, state-of-the-art, Linux-based compute cluster at LIACS.

The organisers of the Sparkle challenge will not participate in the challenge, nor in any track of the International Planning Competition 2019.

There is a mailing list for Sparkle Planning Challenge 2019 at



Participants in the Sparkle challenge will be awarded slices of a single gold medal; the size of each slice is proportional to the marginal contribution made by the respective planner to the performance of the automatically constructed selector, built from all participating planners, on the same test set of benchmark instances used in the International Planning Competition 2019. Any planner that returns an incorrect solution to any training or test instance will be disqualified and removed from the set of planners used in the final evaluation. Only open-source planners may be submitted to the Sparkle challenge.

Important Dates

Planner Submission

Competitors must submit the source code of their planners, which will be run by the organisers on the actual competition domains/problems; these remain unknown to the competitors until that time, so that no fine-tuning of the planners to the competition instances is possible.

The submission is done by email. A zip file should be sent to Chuan ( and Mauro (, and structured as follows.

If you are submitting more than one planner, please create a separate zip file for each planner.

Bug fix policy

In some cases, we will offer the opportunity to fix bugs that arise during the evaluation period, but any changes after the submission deadline will be strictly limited to bug fixes. We will use a diff tool to check that patches do not contain new features or parameter tuning, and will reject patches that do not look like minimal changes to fix bugs. It is your responsibility to provide patches that are easy to verify with a diff tool. We reserve the right to reject changes for which the bug-fixes-only rule is unnecessarily hard to check (e.g. because you reformatted the entire code base).
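As a sketch of the kind of check involved (the actual tooling may differ, and the file names and contents here are only examples), a unified diff makes a minimal bug fix easy to verify:

```shell
# Example only: two versions of a made-up source file, differing in one line.
mkdir -p v1 v2
printf 'step 1\nstep 2 (buggy)\nstep 3\n' > v1/solve.txt
printf 'step 1\nstep 2 (fixed)\nstep 3\n' > v2/solve.txt

# A reviewer can see at a glance that only the buggy line changed.
# diff exits with status 1 when the files differ, so mask that in scripts.
diff -u v1/solve.txt v2/solve.txt || true
```

A patch that touches only the lines needed to fix the bug will show up as a handful of `-`/`+` lines like this; whole-file reformatting would instead mark nearly every line as changed and cannot be verified at a glance.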