
Results of the GECCO'2019 1-OBJ expensive Track

"Raw" Result Data

On each problem, participants were judged by the best (lowest) function value achieved within the given budget of function evaluations. There were 9 participants in the field. The best function value per problem and participant (1000 problems times 9 participants, i.e. 9000 double-precision numbers) is listed in this text file.
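For anyone who wants to process these numbers programmatically, a minimal loading sketch in Python is shown below. The file name and the layout (whitespace-separated values, one row per problem and one column per participant) are assumptions made for illustration; the linked text file defines the actual format.

    import numpy as np

    # Hypothetical file name; the layout (one row per problem, one column per
    # participant, whitespace-separated doubles) is an assumption of this sketch.
    best_values = np.loadtxt("gecco2019_1obj_expensive_best_values.txt")
    assert best_values.shape == (1000, 9)  # 1000 problems, 9 participants

    # Lower is better: the per-problem winner is the participant whose best
    # function value on that problem is smallest.
    per_problem_winner = best_values.argmin(axis=1)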

Participant Ranking

Participants were ranked based on aggregated problem-wise ranks (details here and here). The following results table lists the participants together with their overall score (higher is better) and their sum of ranks over all problems (lower is better); the table can be sorted with respect to either criterion. A small illustrative sketch of the rank aggregation is given below the table.

rank | participant | method name | method description | software | paper | score | sum of ranks
1 | Artelys | | | | | 934.769 | 2579
2 | Nacim Belkhir | | | | | 732.135 | 3222
3 | avaneev | biteopt2018 | | link | link | 585.603 | 3361
4 | mini-mlog | GAPSO | A hybrid PSO + DE + square function model + polynomial function model with adapted behavior pool and adapted reset behaviors | link | | 381.063 | 4214
5 | GERAD | MADS | Mesh Adaptive Direct Search algorithm; implementation: NOMAD solver, version 4.0 (alpha, not available yet) | link | link | 272.365 | 4726
6 | V-Stanovov | LSHADE-RSP | | | | 193.918 | 4972
7 | Jeremy | PSO variant | | | | 97.7631 | 5921
8 | coco | sorry buggy code | | | | 61.1242 | 7320
9 | Raphael Patrick Prager | | | | | 1.65026 | 8979
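The sum-of-ranks column can be reproduced from the raw data by ranking the participants on every problem (rank 1 for the lowest best value) and summing these ranks over all problems. The sketch below illustrates this on a small toy matrix; the tie-handling rule used here ("average") and the formula behind the score column are not taken from the official description, whose details are in the documents linked above.

    import numpy as np
    from scipy.stats import rankdata

    # Toy stand-in for the real 1000 x 9 matrix of best function values:
    # rows are problems, columns are participants.
    best_values = np.array([
        [0.10, 0.25, 0.12],
        [3.00, 1.50, 2.00],
        [0.01, 0.01, 0.05],
    ])

    # Rank participants on each problem; rank 1 goes to the lowest (best) value.
    # Averaging tied ranks is an assumption of this sketch, not the official rule.
    per_problem_ranks = np.vstack([rankdata(row, method="average")
                                   for row in best_values])

    # A smaller sum of ranks means a better overall placement, as in the
    # "sum of ranks" column of the table above.
    sum_of_ranks = per_problem_ranks.sum(axis=0)
    print(sum_of_ranks)  # [5.5 5.5 7.0] for the toy data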

Visualization of Performance Data

The following figure shows an aggregated view of the performance data.

The following figures show the same data, but separately for each problem dimension.