back to main page

Frequently Asked Questions

Q: How do I get an account?


Accounts cannot be created automatically. If you need an account, write an email with your name and your requested account name to the organizers. The account will be created promptly.

UPDATE: There are currently no ongoing competitions, and it is unclear whether there will be new ones in the future. Therefore we currently don't create new accounts.

Q: Which account data do I have to provide?


Participants may, but are not required to, provide their contact data, including full name, affiliation, and email address. It is possible to participate anonymously: your identity must be revealed only to the competition organizers, not to the public.

Only two pieces of data will be used when publishing the results after the competition: the "public name", which may be a freely chosen pseudonym, and the "method name" (if provided). If no public name is chosen then the account name will be used instead. Hence, the public name field allows users to override their account name for the purpose of publishing results.

Note that the names of the winners (all participants receiving a prize) will be published by the conference hosting the competition.

Q: Why is it better to have my own account?


First, you can participate in the competition only with your own account. Second, you may want one even for the trial track: with a shared account, another participant may use up all evaluations for a test problem from the trial track, forcing you to reset it all the time. Moreover, if the "history" option is switched on, the library will unsuccessfully try to recover evaluated solutions that it expects to find in log files on the local machine, while they are actually stored on someone else's hard disk. The error message in this case is "history() failed: reading from log file failed."

In short, sharing a public account with the rest of the world is nice for a first impression, but won't work for serious software testing.

Q: I lost my password—what shall I do?


There is no password reset facility built into the system. If you need help with your password, write an email to the organizers, ideally from the same address used to create the account, and ask them to reset your password.

Q: Which functions are inside the black box?


Nice attempt... but we won't tell. We won't even tell you how we selected the functions. At least as long as competitions are running.

We plan to offer access to past tracks out of competition for an extended period, e.g., for the unbiased evaluation of new methods. Therefore it makes sense to keep the box closed. We promise that you will get the functions eventually.

Update 09/2019: We have now published source code for evaluating the functions of all competition tracks! Check the downloads section of our website.

Q: What is the content of the trial and trialMO tracks?


The trial track consists of rotated Rastrigin functions. The trialMO track consists of shifted sphere-like functions with a linear Pareto front between the objective-wise optima. There is no reason to tune your algorithm specifically on these problems, or to link performance on the trial tracks to expected performance on the competition tracks. The trial tracks exist purely for testing and debugging.
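For illustration, a rotated Rastrigin function has the general form f(z) = 10d + Σ(z_i² − 10 cos(2π z_i)) with z = Rx for a rotation matrix R. The sketch below shows only this general form; the exact rotations and any shifts used in the trial track are not published, and the function names are our own.

```python
import math

# General form of a rotated Rastrigin function. For illustration only;
# the rotations actually used in the trial track are not published.

def rastrigin(z):
    d = len(z)
    return 10 * d + sum(x * x - 10 * math.cos(2 * math.pi * x) for x in z)

def rotated_rastrigin(x, R):
    """Rotate x by the matrix R (given as a list of rows), then apply Rastrigin."""
    z = [sum(r * xi for r, xi in zip(row, x)) for row in R]
    return rastrigin(z)
```

With the identity rotation, the global optimum stays at the origin with value 0.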

Q: Doesn't the NFL theorem show that black box optimization is flawed?


In brief, the NFL (no free lunch) theorem states that, under certain assumptions on the selection of optimization problems, all search algorithms perform the same. At first glance this statement seems to render the competition meaningless, since on average all algorithms perform equally well. Provably. NFL implies that if we know nothing about a problem instance, then we have no way to make an informed decision about which search algorithm to apply. For each specific problem instance there are more and less suitable candidate algorithms, but we cannot know which one is best for that particular instance. Why then waste time on a black box optimization competition?

NFL contradicts our intuition and experience. Let's take that as a warning signal. In most cases humans and algorithms can solve problems more efficiently than exhaustive search and random guessing, even if the problem structure does not hint at how to proceed. This is because the preconditions of NFL are not fulfilled in practice. The NFL statement requires a uniform distribution of problems (refer to this paper [pdf] for a generalization), which puts an unrealistically high emphasis on random, unstructured, and hence meaningless problem instances. Only if the set of all structured problems is totally overwhelmed by random problem instances can a dumb method like random search perform well, with NFL following as a consequence.

Of course, real problems are not like that. Most real optimization problems have a lot of structure that can and should be exploited for their efficient solution (see Wikipedia for a detailed discussion on the implications of NFL). It is NOT the idea of black box optimization to solve problems without structure, but rather to perform well when structure is present but unknown. This requires robust performance of a search algorithm over a wide (yet relevant) range of structures since the particular type of regularity of the present problem instance is not known a priori. Interesting black box test problems are very different from random looking objective functions. They have regularities that can be explored and then exploited by optimization algorithms.

If the structure of a problem is well known a priori then it should be exploited. Period. In this case applying a general purpose "black box" search algorithm is wasteful, since it ignores the prior knowledge. However, many real world optimization problems are so complex that the available problem knowledge does not link to an optimization strategy. Then it is reasonable to treat the problem as a black box, which means that an optimization strategy with robust performance over a wide range of potential challenges should be applied. BBComp is a platform for comparing such optimization methods.

It is understood that the composition of the test problem suite is crucial. Some algorithms are good at exploiting certain problem regularities and bad at others. An over-representation of one problem type automatically induces a bias towards certain optimization algorithms.

An ideal, perfectly unbiased suite of test problems reflects the distribution of "real" black box problems, i.e., application problems for which we lack a good (enough) understanding of the problem structure. Ideally we would like the BBComp problems to match this distribution (being well aware of the fuzzy character of its working definition).

It is no surprise that we do not reach this goal. However, the test problems are designed with this goal in mind. They are heavily biased away from random problem instances (in the sense of NFL) to reflect properties found in problem classes which we consider to be "common" and "relevant for practical applications". In particular, the BBComp test problems exhibit a wide variety of different regularities. Other people would probably have chosen a different set of benchmark problems. Our choice is in no way distinguished among the possible choices. It is unavoidable to make a choice at some point, and it is nearly unavoidable that the choice of problems introduces an arbitrary bias into the performance results.

That being said, due to the way the problems are selected, we are confident that the bias is not too large and that the results of BBComp are meaningful and generalizable. The resulting ranking of algorithms will of course not carry over one-to-one to all problem classes, but for many relevant problem classes we expect a high correlation. It would be more than a bit surprising (although possible in principle) if BBComp's low performers were all of a sudden to excel on a wide range of problems from a different test problem suite. It is in this sense that the performance comparison can give valuable indications to practitioners, and to researchers with an interest in the development of black box search algorithms.

Q: How are entries ranked?


The definition of performance depends on the setting. For a single objective, the performance is the best (lowest) function value reached within the predefined budget of evaluations. For multiple objectives, the performance is one minus the dominated hypervolume (to be minimized, which is equivalent to maximizing the dominated hypervolume). The reference point for the hypervolume is the vector of all ones, and in the multi-objective case all objective values are in the range [0, 1]. In other words, the performance measure is one minus the hypervolume dominated by the non-dominated points within the unit hypercube.

All participants are ranked on each problem based on their performance. After the end of the competition we publish the performance values and the corresponding ranks. An overall ranking is computed from the problem-wise ranks as follows. Each rank is associated with a numerical score, similar to the Formula One scoring system, where the best 10 ranks receive between 1 and 25 points and all others receive zero points. The scores for the different problems are added up, and participants are ranked according to their sum total.
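As an illustration of the multi-objective measure, here is a minimal two-objective hypervolume computation with reference point (1, 1). The function names are ours, not part of the competition software, and the sweep below works for two objectives only.

```python
# Minimal two-objective version of the performance measure: one minus
# the hypervolume dominated by a set of points, with reference point
# (1, 1). Illustrative only; not part of the BBComp client API.

def dominated_hypervolume_2d(points, ref=(1.0, 1.0)):
    """Hypervolume dominated by 2-D points relative to ref."""
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in sorted(points):      # sweep in increasing f1
        if f2 < prev_f2:               # point is non-dominated
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def performance(points):
    return 1.0 - dominated_hypervolume_2d(points)
```

For the front {(0.2, 0.5), (0.5, 0.2)} the dominated hypervolume is 0.55, so the performance value (to be minimized) is 0.45.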

We use a similar system; however, it adapts to the number of participants. Let n denote the number of participants, each ranked from 1 (best) to n (worst). Rank k receives a score as follows: if k < (n+1)/2, then the score amounts to log((n+1)/2) - log(k); otherwise the score is zero. Or in compact form:

score(k) = max { 0, log((n+1)/2) - log(k) }

These values coincide (up to irrelevant scaling) with the rank-based weights used in the CMA-ES algorithm. In effect, these scores amplify differences between good (low) ranks and hide differences between bad (high) ranks. This puts an emphasis on the top ranks.
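The scoring rule can be written down directly; `score` here is our own helper name, not part of the client API.

```python
import math

def score(k, n):
    """Score for rank k (1 = best) among n participants:
    score(k) = max(0, log((n+1)/2) - log(k))."""
    return max(0.0, math.log((n + 1) / 2) - math.log(k))
```

With n = 10, rank 1 receives log(5.5) ≈ 1.70, rank 5 receives log(5.5) − log(5) ≈ 0.10, and ranks 6 and below receive zero.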

This ranking system is, just like any other ranking system, arbitrary to some extent. The ranking system may change in the future.

Q: Do you provide performance data of baseline methods?


No, in contrast to other competitions we don't publish the performance of baseline algorithms like pure random search, (plain) CMA-ES, etc. This is because any type of feedback will give participants information that is not accessible in a black box setting.

The considered black-box scenario assumes that no baseline performance data is available. This is often the case in real world applications.

Of course, we may put the final results into context by providing (weak and strong) baselines. But these numbers will not be published before the competition is over.

Update 09/2019: Since the source code of the BBComp core is now published, it is easy to run baselines against it.

Q: Where can I get help?


We provide fairly complete reference documentation covering the client-side interface of the competition software. If it does not answer your question, feel free to write to the organizers.

Q: I cannot connect to the server—what shall I do?


Well, first make sure that your internet connection is working. You must be able to connect to port 39772 (or port 6881 with client software versions up to 1.0.5) of the competition server. Please check your firewall settings for outgoing connections. Your organization's network may have a firewall; contact the administrator and/or try again from a different network, e.g., your home network. If port 6881 is blocked, upgrading to version 1.0.6 (or higher) of the client software should solve the problem.
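A quick way to check whether the port is reachable from your network is a plain TCP connection attempt. The helper below is our own sketch; substitute the actual server address from the official documentation for the placeholder host.

```python
import socket

def can_connect(host, port=39772, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host -- use the real server address):
# can_connect("competition.server.example", 39772)
```

If this returns False from your office network but True from home, an organizational firewall is the likely culprit.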

If your connection works fine and you still cannot reach the server, neither this website (gotcha!) nor the competition service, then the server may be offline or may have crashed. In this case please drop a mail to the organizers so they can take action. Very rarely, the server is offline for maintenance for very short intervals of a minute or less. Therefore, please try at least two or three times before reporting a problem.

Q: Why do I get "the track is not open yet - please be patient"?


Tracks have an opening date. They are listed before this date, so you know that your account has access to the track, but selecting a problem from the track before the official starting date will fail. There is no programmatic way to verify this condition through the API other than checking the error message string (which is discouraged, since error messages may change in future versions). Please refer to the important dates section on the main page for official opening and closing dates.

Q: I receive "setProblem() failed: failed to acquire lock - maybe another instance works on this problem?"


Each problem can be optimized only by one client instance at a time. The competition server makes sure that two clients running in parallel do not get to solve the same problem at the same time - which in most cases would be a total mess. If you receive this message because of an attempt to solve a problem with multiple clients at the same time then most probably the system has saved your day.

You may also receive this message when trying to recover, e.g., after a program crash or a network disconnect. It is possible that the server still keeps your previous optimization session alive, which holds a lock on the problem instance. Wait for 10 minutes and retry; by then your old (inactive) session should have expired and the error message should disappear.
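A defensive way to handle this in a recovery script is to retry the lock-acquiring call with a pause, so that a stale lock from a crashed session has time to expire. The sketch below assumes a setProblem-style call that returns a truthy value on success; the wrapper name and default timings are our own, not part of the client library.

```python
import time

def set_problem_with_retry(set_problem, problem_id,
                           attempts=12, delay=60.0):
    """Call set_problem(problem_id) until it succeeds or attempts run out.

    With the defaults this waits up to ~12 minutes, enough for a stale
    session lock (roughly 10 minutes) to expire.
    """
    for attempt in range(attempts):
        if set_problem(problem_id):
            return True
        if attempt < attempts - 1:
            time.sleep(delay)   # give the old session time to expire
    return False
```

Do not use aggressive retry loops with a zero delay; one attempt per minute is more than enough.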

Q: How should I test my software?


Always test on the trial tracks! Never switch to the competition track unless you are double sure that everything is ready. Run multiple tests before declaring victory.

Simulate typical error conditions like a network failure, a power outage, and a program crash. E.g., manually interrupt your program. Test your recovery mechanisms.

If you need to tune parameters, use a benchmark suite of your choice; however, do not use the trial tracks for this. The problems in these tracks are (on purpose) atypical, so you would not benefit from the possibility of running them as often as you wish.

It is probably a good idea to make sure that your implementation is correct. You can come pretty close to this goal by comparing its performance to published results on standard benchmarks. A facility for doing such a comparison is outside the scope of the competition software—it is understood that you alone are responsible for the correctness of your implementation.

Update 09/2019: In the source code of the BBComp core available for download, tracks are automatically reset for each session. Therefore, software testing is not a big issue when using historical tracks. However, note that the interface is slightly different (see text files inside the package).

Q: How do I reset the trial tracks correctly?


Log into the bbcomp website with your account. In the list of tracks at the bottom, click "view" in the "progress" column for the trial track you want to reset. This opens another page with a detailed progress report on all problems. Scroll to the very end and click the reset button. Of course, this button is only present for trial tracks.

Note that this was only the first step. The second step is to delete your local log files for this track. These are named


where you have to plug in your account name and all IDs of problems you have optimized. The files may be in a non-standard location if you have told the library so with the configure API function. Check the documentation for details.

If you forget to delete the log files then the client will report the stored number of function evaluations to the server. The program will behave as if you did not reset the track, which may be irritating.

Q: My algorithm has crashed—what shall I do?


Don't Panic.

The same holds if your internet connection was interrupted, you experienced a power outage, or your program stopped for any other reason without finishing an optimization run.

If you have not actively disabled the client-side logging mechanism then all of your function evaluations have been safely stored to disk. Familiarize yourself with the history API function, see the documentation, and refer to the example clients for an example usage. This facility should make it easy to rerun any deterministic search algorithm to the point where it crashed and continue from there. Of course, if your algorithm is randomized then you'd better record the seed of the random number generator. In any case, all previous function evaluations are still available, so you do not have any disadvantage.
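The recovery idea above can be sketched as follows: rerun the seeded optimizer from scratch, and replay the first evaluations from the logged history instead of spending budget on them again. The optimizer interface (`propose`/`update`) and the history format (a list of point/value pairs) are our own illustrative assumptions, not the actual client API; consult the documentation of the history function for the real interface.

```python
import random

def run_with_recovery(optimizer, evaluate, history, budget, seed=42):
    """Rerun a seeded optimizer; the first len(history) evaluations
    are replayed from the log instead of being spent again."""
    random.seed(seed)                  # same seed as the original run
    for i in range(budget):
        point = optimizer.propose()    # deterministic given the seed
        if i < len(history):
            value = history[i][1]      # recorded value, no evaluation
        else:
            value = evaluate(point)    # fresh (counted) evaluation
        optimizer.update(point, value)
```

Because the seed is fixed, the optimizer proposes exactly the same points as in the crashed run, so the replayed values line up with the right points.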

Q: I found a bug in my code, can I rerun my algorithm?


If you are still testing your software on a trial track then just reset the trial track.

If you have already moved on to an actual competition track then the answer is NO. Really. Please make double sure that your code works fine before switching to a competition track. Making a mistake on a competition track is irreversible, just like in real life where your budget is limited.

Q: How can I give feedback or suggest a feature?


We appreciate feedback and suggestions. Please write to the organizers.

Q: Where can I find the source code of the competition software?


You can find the binary version of the dynamic link library as well as example source code for client programs in the downloads section. The source code of the library and the competition server is not available. It is not open source. We may decide to open up the software platform for other uses at some point in the future.

Update 09/2019: The core of the BBComp software has been released as open source. It allows evaluating all objective functions used in our competitions between 2015 and 2019.

Q: I found a bug—what shall I do?


This is really bad, and at the same time we are glad you found it. Despite software testing there is no silver bullet against this happening. That's life (at least in a Turing complete world).

We'd be happy to fix the bug asap. To help us with that please do all of the following: