back to main page

Results of the GECCO'2018 1-OBJ Expensive Track

"Raw" Result Data

On each problem, participants were judged by the best (lowest) function value achieved within the given budget of function evaluations. There were 13 participants in the field. The best function value per problem and participant (1000 problems × 13 participants, one double-precision number each) is listed in this text file.
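As a small illustration of the judging criterion above, the following sketch extracts the best value reached within an evaluation budget from a participant's evaluation history. The function name and data layout are hypothetical, not part of the competition software.

```python
def best_within_budget(history, budget):
    """Return the lowest function value among the first `budget` evaluations.

    `history` is the sequence of function values in evaluation order
    (a hypothetical representation of one participant's run on one problem).
    """
    return min(history[:budget])


# Example: with a budget of 3 evaluations, only the first three values count.
best = best_within_budget([3.0, 1.5, 2.0, 0.5], budget=3)
```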

Participant Ranking

Participants were ranked based on aggregated problem-wise ranks (details here and here). The following results table lists participants with overall scores (higher is better) and the sum of ranks over all problems (lower is better). The table can be sorted with respect to either criterion.
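A minimal sketch of the sum-of-ranks aggregation described above: on each problem, participants are ranked by their best function value (lowest value gets rank 1), and each participant's ranks are summed over all problems. The tie-handling (shared average ranks) is an assumption here; the competition's exact tie-breaking and score formula are given in the linked details, not reproduced in this sketch.

```python
def problem_ranks(values):
    """Rank participants on one problem: the lowest best-value gets rank 1.

    Tied values share the average of the ranks they occupy
    (an assumption; the competition may break ties differently).
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of participants tied with position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def sum_of_ranks(results):
    """Aggregate ranks over problems.

    `results[p][i]` is participant i's best value on problem p;
    lower totals indicate better overall performance.
    """
    totals = [0.0] * len(results[0])
    for row in results:
        for i, r in enumerate(problem_ranks(row)):
            totals[i] += r
    return totals
```

For example, with two problems and three participants, `sum_of_ranks([[0.5, 0.1, 0.1], [1.0, 2.0, 0.5]])` yields the per-participant rank totals used for the final ordering.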

rank | participant | method name | method description | software | paper | score | sum of ranks
1 | Nbelkhir | Gaussian processes + local search | robust optimization for finding an initial search point + multiple local searches | | | 815.239 | 4966
2 | avaneev | biteopt2017 | | link | | 793.263 | 4144
3 | Artelys | | | | | 685.836 | 5631
4 | LTM@TUK & Moving@UNIFI | | | | | 509.01 | 5745
5 | LocalSolver | LocalSolver 8.0 | LocalSolverBlackbox 8.0 with default parameters | | | 426.768 | 6873
6 | Jeremy | Research algorithm | | | | 413.758 | 5827
7 | Al Jimenez | | | | | 405.932 | 6171
8 | mini-mlog | GAPSO | | link | link | 383.728 | 6052
9 | GERAD | PSD-MADS | serial version of PSD-MADS, based on MADS and using the NOMAD software | link | link | 218.277 | 8007
10 | anonymous | | | | | 240.166 | 6972
11 | jpsbook | | | | | 134.86 | 8161
12 | kadiri | | | | | 41.4288 | 11330
13 | Simon Wessing | ALGSS | approximately latinized generalized stratified sampling | link | link | 28.588 | 11117

Visualization of Performance Data

The following figure shows an aggregated view of the performance data.

The following figures show the same data, broken down by problem dimension.