The Sparkle Planning Challenge 2019 is a novel competitive event that aims to assess the state of the art in solving planning benchmark problems, leveraging cutting-edge, automatically constructed planner selectors, and to quantify the contributions of individual planners. The Sparkle Planning Challenge 2019 will follow the competition mechanisms used by the Sparkle SAT Challenge 2018.
It is well established that the state of the art for planning is not defined by a single planner, but rather by a set of non-dominated planners with complementary strengths. A prominent way of exploiting this performance complementarity is to leverage machine learning techniques to build effective automatic planning selectors on top of state-of-the-art planners. The Sparkle Planning Challenge automatically combines all participating planners into a state-of-the-art planning selector, and assesses the contribution of each participating planner to the performance of that planning selector. It thus encourages submitters to make substantial contributions to the state of the art in planning as realised by this selector, by maximising the contribution to overall selector performance due to their planner.
News
- 18 June 2018: Challenge launched, initial website published.
Mechanics
Planner developers will submit their planners and supporting information via e-mail (see the detailed information under Planner Submission below). During a period of 10 days before the submission deadline, a state-of-the-art selector will be constructed every 24-48 hours, based on all planners available at that time, using a set of training instances drawn from previous International Planning Competitions. The contributions to the performance of this selector on the training set will be published on a leaderboard accessible to all participants. During this phase, participants can resubmit their planners as often as they desire.
Planner submissions need to list all authors, and no planner author can be involved in more than three separate submissions (not counting resubmissions, which replace previously submitted versions of a planner). Please note that submitting multiple planners that perform well on very similar types of planning benchmark instances can be expected to result in poor performance in the Sparkle challenge, since planners are assessed based on their marginal contribution to overall selector performance.
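To make the notion of marginal contribution concrete, the following Python sketch shows one simplified way such a contribution could be computed from per-instance planner runtimes, approximating a perfect selector by the per-instance best planner in the portfolio; the exact procedure used on the Sparkle platform may differ, and all planner names and runtimes below are illustrative assumptions.

```python
# Simplified sketch (illustrative only): marginal contribution of a planner
# to an idealised selector, approximated by the virtual best solver, i.e.
# the per-instance best planner in the portfolio.

CUTOFF = 300.0          # 5-minute time limit, in seconds
PENALTY = 10 * CUTOFF   # PAR10 penalty assigned to unsolved instances

# Hypothetical runtimes (in seconds) per planner and instance; None = unsolved.
runtimes = {
    "planner_A": {"inst1": 12.0, "inst2": None, "inst3": 250.0},
    "planner_B": {"inst1": 80.0, "inst2": 30.0, "inst3": None},
    "planner_C": {"inst1": None, "inst2": 45.0, "inst3": 40.0},
}
instances = ["inst1", "inst2", "inst3"]

def par10(t):
    """Penalised runtime of a single run under the PAR10 scheme."""
    return t if t is not None and t <= CUTOFF else PENALTY

def portfolio_score(planners):
    """Mean PAR10 of an idealised selector over the given portfolio:
    on each instance, the best (lowest) penalised runtime is taken."""
    return sum(min(par10(runtimes[p][i]) for p in planners)
               for i in instances) / len(instances)

all_planners = list(runtimes)
full = portfolio_score(all_planners)
for p in all_planners:
    rest = portfolio_score([q for q in all_planners if q != p])
    # Marginal contribution: how much the portfolio score degrades when
    # the planner is removed (larger values = more important planner).
    print(f"{p}: marginal contribution = {rest - full:.1f}")
```

Under this simplified scheme, a planner that is uniquely fast (or uniquely successful) on some instances receives a large contribution, while a planner that is dominated everywhere contributes nothing, which is exactly why near-duplicate submissions are discouraged above.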
In the Sparkle Planning Challenge 2019, planner performance will be measured using the runsolver tool.
The competition will be run on the Sparkle platform, a problem-solving platform designed to enable the widespread and effective use of programming by optimisation (PbO) techniques for improving the state of the art in solving a broad range of prominent AI problems, including SAT and AI planning. The Sparkle platform is being developed by the ADA Research Group at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University. The Sparkle challenge will be run on a large, state-of-the-art, Linux-based compute cluster at LIACS.
The organisers of the Sparkle challenge will not participate in the challenge, nor in any track of the International Planning Competition 2019.
There is a mailing list for Sparkle Planning Challenge 2019 at https://groups.google.com/d/forum/planningsparkle2019.
More Details
- Single CPU core for each planner run
- 8GB memory limit for each planner run
- 5-minute time limit for each planner run
- The cost of the discovered plan is ignored; only the CPU time needed to discover a plan is counted.
- If an invalid plan is returned, all tasks in the domain are counted as unsolved.
- If that happens in more than one domain, the entry is disqualified.
- PAR10 (penalised average runtime, counting each unsolved instance as 10 times the time limit) is used as the scoring scheme; a simplified illustration is given after this list.
- Classical planning problems, such as those used in the deterministic tracks of IPC 2014 and IPC 2018, will be used for training and benchmarking.
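As an illustration of the PAR10 scheme under the limits stated above, the short Python sketch below computes the score of a single planner on a hypothetical set of instances; all runtimes are made up for the example.

```python
# Illustrative PAR10 computation for a single planner under a 300 s limit.
CUTOFF = 300.0   # 5 minutes, in seconds

# Hypothetical CPU times; None marks an unsolved instance
# (e.g. timeout, crash, or invalid plan).
cpu_times = [4.2, 120.0, None, 299.0, None]

scores = [t if t is not None and t <= CUTOFF else 10 * CUTOFF
          for t in cpu_times]
print(sum(scores) / len(scores))
# (4.2 + 120.0 + 3000 + 299.0 + 3000) / 5 = 1284.64
```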
Prizes
Participants in the Sparkle challenge will be awarded slices of a single gold medal; the size of each slice is proportional to the magnitude of the marginal contribution made by the respective planner to the performance of the automatically constructed selector, built from all participating planners, on the same test set of benchmark instances used in the International Planning Competition 2019. Any planner that returns an incorrect solution to any training or test instance will be disqualified and removed from the set of planners used in the final evaluation. Only open-source planners may be submitted to the Sparkle challenge.
Important Dates
- 30th November 2018: Benchmark submission deadline
- 14th March 2019: Planner submission opens
- 18th March 2019: Leaderboard opens
- 12th April 2019: Planner submission deadline; leaderboard closes
- July 2019: Announcement of results at ICAPS
Planner Submission
Competitors must submit the source code of their planners, which will be run by the organisers on the actual competition domains and problems; these will remain unknown to the competitors until that time, so no fine-tuning of the planners to the competition instances will be possible.
As in IPC 2018, we will use the container technology "Singularity" this year to promote reproducibility and help with compilation issues that have caused problems in the past. More details on Singularity can be found below.
The submission is done by email. A zip file should be sent to Chuan (chuanluosaber@gmail.com) and Mauro (m.vallati@hud.ac.uk), and structured as follows.
- The zip file must include the full source code of your planner, to be published on this web site after the competition.
- In the zip file, add a file called Singularity to the root directory of your repository. This file is used to bootstrap a Singularity container and to run the planner. For examples and FAQs, please refer to the IPC 2018 website; a minimal sketch is also given at the end of this section.
- Invoking the script with its three arguments (the domain file, the problem file and the name of the result file to which the plan should be written) should run your planner. You may assume that the planner is run from the directory in which it resides.
- The solution should be written to the result file in a format understood by VAL.
- Your planner will be run with limited user rights, but still please make sure that it doesn't contain any operations that can wreak havoc on the computer. In particular, it must not write to any directories outside the directory it is run in (creating and using subdirectories is fine), and it must not use the network.
- Don't hardcode absolute paths anywhere, and don't use symbolic links.
- The zip file should not contain any unnecessary files (editor backup files, CVS or .svn directories, .DS_Store files, object files, bytecode, ...), but README files that may help with trouble-shooting the planner are appreciated.
- If your planner uses randomized algorithms, please initialize the random seed to a fixed constant. If there are any reasons to expect that your planner won't generate reproducible results, please tell us clearly by email.
If you are submitting more than one planner, please create a different zip file for each planner.
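For orientation, here is a minimal sketch of what such a Singularity definition file might look like, loosely modelled on the example recipes provided for IPC 2018; the base image, build commands, planner name and executable path are placeholders that will differ for your planner, so please consult the IPC 2018 materials for authoritative examples.

```
Bootstrap: docker
From: ubuntu:18.04

%setup
    ## Copy the planner sources into the container image.
    cp -r . $SINGULARITY_ROOTFS/planner

%post
    ## Install build dependencies and compile the planner (placeholders).
    apt-get update
    apt-get -y install g++ make cmake python
    cd /planner
    ./build.sh

%runscript
    ## The container is invoked with three arguments:
    ## domain file, problem file, result (plan) file.
    DOMAINFILE=$1
    PROBLEMFILE=$2
    PLANFILE=$3
    /planner/my-planner "$DOMAINFILE" "$PROBLEMFILE" "$PLANFILE"

%labels
    Name        MyPlanner
    Description Example planner submission (placeholder)
    Authors     Jane Doe <jane.doe@example.org>
```

In such a setup, the planner reads the given domain and problem files and writes its plan to the file named by the third argument, in a format understood by VAL, as required above.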
Bug Fix Policy
In some cases, we will offer the opportunity to fix bugs that arise during the evaluation period, but any changes after the submission deadline will be strictly limited to bugfixes only. We will use a diff tool to check that patches don't contain new features or parameter tuning, and will reject patches that don't look like minimal changes to fix bugs. It is your responsibility to provide patches that are easy to verify with a diff tool. We reserve the right to reject changes for which the only-bugfixes rule is unnecessarily hard to check (e.g. because you reformatted the whole code).
Organisers
- Chuan Luo <chuanluosaber@gmail.com> (LIACS, Leiden University, The Netherlands)
- Mauro Vallati <m.vallati@hud.ac.uk> (School of Computing and Engineering, University of Huddersfield, United Kingdom)
- Holger H. Hoos <hh@liacs.nl> (LIACS, Leiden University, The Netherlands)