yes, loads of space for further exploration here. there is an attempt to keep things as general as possible in the expert.md file, but it's hard to mitigate overfitting fully. however, changing the seed will not get you much further with all else in the solver held constant, unless you try a number of seeds that scales exponentially with the size of the problem.
sure. in the limit, everything is parameter tuning. with large enough NP-hard problems, the search space is complex enough that it's infeasible to reach a better state just by tuning params in any reasonable amount of time.
I beg to disagree. Integer programming solvers have improved orders of magnitude in the past 20 years. The basic algorithm (branch and bound) is the same.
The big commercial solvers are basically very good at detecting structure and selecting the tuning parameters that work best for specific problem types.
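To make the "same basic algorithm" point concrete: here is a toy branch and bound for 0/1 knapsack, not taken from any real solver. The fractional-relaxation bound and the pruning rule are the generic textbook ingredients; commercial solvers layer structure detection and tuning on top of exactly this skeleton.

```python
def knapsack_bb(values, weights, capacity):
    """Toy branch and bound for 0/1 knapsack.
    Bound: value of the fractional (relaxed) knapsack on the remaining items."""
    n = len(values)
    # sort items by value density so the greedy fractional bound is tight
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    best = 0

    def bound(i, cap, acc):
        # greedy fractional fill gives an upper bound for this subtree
        b = acc
        while i < n and w[i] <= cap:
            cap -= w[i]
            b += v[i]
            i += 1
        if i < n:
            b += v[i] * cap / w[i]
        return b

    def branch(i, cap, acc):
        nonlocal best
        if i == n:
            best = max(best, acc)
            return
        if bound(i, cap, acc) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if w[i] <= cap:
            branch(i + 1, cap - w[i], acc + v[i])  # take item i
        branch(i + 1, cap, acc)                    # skip item i

    branch(0, capacity, 0)
    return best
```

The pruning test is where all the engineering lives in practice: a better bound or a smarter branching order shrinks the tree by orders of magnitude without changing the algorithm itself.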
I guess my point was that I don't see many algorithmic changes in the commit history, which is a shame if that has been lost; the library/* files are largely unchanged from the initial commits. But each time the agent runs, it has access to the best solutions found so far and can start from there, often using randomisation, which the agent claims helps it escape local minima (e.g. 'simulated annealing as a universal improver'). It would be nice to see how its learnt knowledge performs when applied to unseen problems in a restricted timeframe.
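For readers unfamiliar with the 'universal improver' idea: a minimal generic simulated annealing loop looks like the sketch below. This is not the agent's code; the function and parameter names are made up for illustration. The key line is the acceptance test, which occasionally accepts a worse state while the temperature is high, which is what lets it escape local minima.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=1.0, cooling=0.995, steps=10_000):
    """Generic SA improver: start from a given state (e.g. the best solution
    found so far) and randomly perturb it, accepting uphill moves with
    probability exp(-delta / T) so the search can escape local minima."""
    best = current = state
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # always accept improvements; accept worse states with prob exp(-delta/T)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best
```

Seeding it with the best known solution (rather than a random one) is exactly the warm-start pattern described above: the annealer only has to improve on the incumbent, not rediscover it.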
as it's from 2024 (MaxSAT was not held in 2025), it's quite likely all the solvers are in the training data. so the interesting part here is the instances for which we actually got better costs than what is currently known (in the best-cost.csv file).
So the reason the comment appears weirdly disconnected from the content of the article is that it was generated independently from the content of the article.