At the latest conference of the German OR Society in Hamburg (www.or2016.de), I gave a semi-plenary talk on the history of metaheuristics. This talk is the live version of a chapter that will soon appear in the Handbook of Heuristics (Springer). This chapter was co-authored by Marc Sevaux, and by the *éminence grise* of the entire field of metaheuristics, Fred Glover.

In the talk, I presented a statement I jokingly referred to as “Sörensen’s conjecture”. This “conjecture” reads as follows: “*In the real world, solving optimization problems using exact methods is a waste of resources.*” That is, of course, a very strong statement that perhaps deserves some clarification.

The statement could be misunderstood to mean something along the lines of “any use of exact methods is a waste of time, and exact methods are therefore essentially useless”. That would be a complete misinterpretation of Sörensen’s conjecture.

The conjecture is found on a slide about the fact that our brain is the most fantastically versatile problem-solving engine ever seen. Our brain is able to solve virtually any optimization problem thrown at it. Walk up to a random person in the street, ask them to solve a mixed-fleet vehicle routing problem with soft time windows, and witness what happens: as soon as the person understands the problem, he or she will come up with a strategy to solve it and deliver a solution within minutes. That solution may be very far from optimal, but it is a solution nonetheless.

Apparently, solving optimization problems has been so important for our survival that we have evolved the machinery to solve basically any optimization problem. However, and this is crucial: our brain *never solves a problem exactly*. Our brain is therefore a 100% pure heuristic solver. There is no linear programming in our brain, no branch-and-bound, no cutting planes.

In the talk, I gave the example of a caveman hunting a mammoth and trying to hit the mammoth in the eye with a spear. I’m not a mammoth-hunting expert, so I have no idea whether that is actually a good mammoth-killing strategy. It is, however, an optimization problem: given the information that my senses supply me with, what is the best possible way to throw the spear so as to minimize the distance between the mammoth’s eye and the location where my spear ends up? Now, there are two ways to solve this problem: exactly or heuristically. In theory, it is not impossible that my brain could have developed an optimization engine that would take all of the available information and use it to derive *exactly* (i.e., using an exact method) the required muscle tensions for me to throw the spear as close as possible to the mammoth’s eye (I might still miss, but that is not the point).

Of course, our brain does nothing like that, and it has every reason not to. What we know about computational complexity tells us that (1) by the time our brain had calculated this exact solution, the mammoth would be long gone – not because it would have run away but because mammoths would be extinct; and (2) calculating that solution would require far more energy than an entire mammoth’s worth of calories would yield.

*That* is what Sörensen’s conjecture is about. Evolution by natural selection has endowed us with a fantastic *heuristic* (one could even say meta-heuristic, but that’s another story) problem-solver. Natural evolution has, for very good reasons, favored a “good solution in a short amount of time” over “the optimal solution, no matter how long it takes”. To drive the point home: if our brain were an exact solver, we would all be dead.

Similar reasoning applies to the many real-world optimization problems that surround us. In many areas, such as production and transportation planning and personnel scheduling, the optimization problems that naturally arise are large, complex, and often stochastic or dynamic, and there is every reason to solve them using heuristics. Of course, there are situations in real life where the use of exact methods is warranted. If you are solving a shortest path problem, a minimum spanning tree problem, a linear programming problem, or any other of the many special cases for which efficient exact methods exist: by all means use an exact method. If you face a mixed-fleet vehicle routing problem with time windows, however, your customers will much prefer you to use a heuristic, lest they wait for their deliveries indefinitely.
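To illustrate the kind of special case where an exact method is the right tool (my own example, not one from the talk): the shortest path problem is solved to proven optimality by Dijkstra’s algorithm in near-linear time, so there is no reason to settle for a heuristic. A minimal sketch in Python:

```python
import heapq

def dijkstra(graph, source):
    """Exact shortest-path distances from source.

    graph: dict mapping node -> list of (neighbor, weight), weights >= 0.
    Runs in O((V + E) log V): one of the special cases where an exact
    method is both optimal and fast.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A toy instance: the provably optimal answer comes back instantly.
graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
    "d": [],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

Contrast this with the vehicle routing problem, where no comparably efficient exact algorithm is known, and the same “solve it exactly” instinct would leave your customers waiting.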

No doubt, there is a tremendous amount we can learn from exact methods about the solution of an optimization problem, even if we decide to solve it using a heuristic in the end.

And, of course, in many cases you can stop an exact method *before* it finds the optimal solution if you want a good solution quickly. However, you are then using a method that does not guarantee the optimality of the solutions it produces. And such a method is … a heuristic.
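To make that last point concrete, here is a minimal sketch (my own illustration, not code from the talk or the chapter) of branch-and-bound for the 0/1 knapsack problem with an optional node budget. Run to completion, it is an exact method; stopped early via `max_nodes`, it returns the best solution found so far with no optimality guarantee, i.e., it has become a heuristic:

```python
def knapsack_bb(values, weights, capacity, max_nodes=None):
    """Branch-and-bound for the 0/1 knapsack problem.

    max_nodes=None: runs to proven optimality (an exact method).
    max_nodes=k: stops after exploring k nodes and returns the best
    solution found so far, i.e., it acts as a heuristic.
    """
    n = len(values)
    # Consider items in order of decreasing value density, which makes
    # the fractional-relaxation bound below tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    best = [0]   # best objective value found so far
    nodes = [0]  # number of search-tree nodes explored

    def bound(i, value, room):
        # Upper bound: fill remaining room greedily, allowing a fraction
        # of the first item that no longer fits (LP relaxation).
        b = value
        for j in order[i:]:
            if weights[j] <= room:
                room -= weights[j]
                b += values[j]
            else:
                b += values[j] * room / weights[j]
                break
        return b

    def branch(i, value, room):
        nodes[0] += 1
        if max_nodes is not None and nodes[0] > max_nodes:
            return  # budget exhausted: optimality no longer guaranteed
        best[0] = max(best[0], value)
        if i == n or bound(i, value, room) <= best[0]:
            return  # this subtree cannot improve on the incumbent
        j = order[i]
        if weights[j] <= room:
            branch(i + 1, value + values[j], room - weights[j])  # take item j
        branch(i + 1, value, room)  # skip item j

    branch(0, 0, capacity)
    return best[0]

values, weights = [60, 100, 120], [10, 20, 30]
print(knapsack_bb(values, weights, 50))               # exact: 220
print(knapsack_bb(values, weights, 50, max_nodes=3))  # truncated: possibly less
```

The code is identical in both modes; only the stopping rule differs. That single parameter is what turns a proof of optimality into “the best solution we found in the time available”.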

- K. Sörensen, M. Sevaux, and F. Glover, “A history of metaheuristics,” in OR2016: Annual International Conference of the German Operations Research Society, Hamburg, Germany, 2016.

[Bibtex]`@inproceedings{sorensen2016history_pres, author = {Sörensen, Kenneth and Sevaux, Marc and Glover, Fred}, title = {A history of metaheuristics}, booktitle = {{OR2016}: Annual International Conference of the German Operations Research Society}, year = {2016}, address = {Hamburg, Germany}, }`