
In general, I think neural networks really struggle with NP-hard problems. Current state of the art is bottlenecked with respect to even subtle generalization. The strong baselines have been variations on the pointer-network architecture for some time now. Even though they can do reasonably well on restricted problems (e.g. instances of SAT or the traveling salesman problem with a fixed number of nodes), they struggle to generalize to instances with a variable number of nodes - not to mention varying the problem itself. Some popular approaches also include applications of graph convolutional networks and reinforcement learning. I don't really have more than a cursory idea of the current directions, though.
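To make the pointer-network point concrete, here's a minimal numpy sketch of the core attention step (a simplified version of the mechanism from the pointer-network papers; the weight names are mine). The key property is that the output is a softmax over input positions, so the same parameters apply whether the instance has 5 nodes or 9 - which is also why generalizing beyond the training sizes is where these models get stressed:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def pointer_attention(encoder_states, decoder_state, W1, W2, v):
    # Score each input position against the current decoder state,
    # then softmax over positions: the output is a distribution over
    # the inputs themselves, not over a fixed vocabulary.
    scores = np.array([v @ np.tanh(W1 @ e + W2 @ decoder_state)
                       for e in encoder_states])
    return softmax(scores)

rng = np.random.default_rng(0)
d = 8
W1, W2, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
dec = rng.normal(size=d)

# The same parameters handle a 5-node and a 9-node instance:
p5 = pointer_attention(rng.normal(size=(5, d)), dec, W1, W2, v)
p9 = pointer_attention(rng.normal(size=(9, d)), dec, W1, W2, v)
```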

To me, what's more exciting is some kind of joint collaboration between modern ML systems and formal solvers.



You might be interested in the line of work around differentiable relaxations of formal solvers that was kicked off by Zico Kolter et al. They add a MAXSAT solver as a differentiable layer. This allows the neat trick of solving handwritten Sudoku puzzles by composing a ConvNet with the differentiable MAXSAT layer.

[1] https://arxiv.org/abs/1905.12149
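For intuition, here's a toy differentiable relaxation of SAT in numpy - not the actual semidefinite relaxation from the paper, just an illustration of the general idea that once clause satisfaction is a smooth function of per-variable probabilities, gradients can flow through the "solver" back into an upstream perception network. All function names here are invented, and the gradient is taken by finite differences to keep it dependency-free:

```python
import numpy as np

def soft_or(lit_probs):
    # Smooth P(at least one literal true), assuming independence.
    return 1.0 - np.prod(1.0 - lit_probs)

def clause_sat(p, clause):
    # clause is a list of signed 1-indexed vars, e.g. [1, -2] = (x1 or not x2)
    lits = np.array([p[abs(l) - 1] if l > 0 else 1.0 - p[abs(l) - 1]
                     for l in clause])
    return soft_or(lits)

def loss(p, clauses):
    # Maximize smooth clause satisfaction <=> minimize negative log.
    return -sum(np.log(clause_sat(p, c) + 1e-9) for c in clauses)

# (x1 or x2) and (not x1 or x2): any solution must set x2 = True.
clauses = [[1, 2], [-1, 2]]
p = np.full(2, 0.5)
for _ in range(200):  # plain gradient descent, finite-difference gradient
    eps, g = 1e-5, np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p); dp[i] = eps
        g[i] = (loss(p + dp, clauses) - loss(p - dp, clauses)) / (2 * eps)
    p = np.clip(p - 0.1 * g, 1e-4, 1 - 1e-4)
```

After optimization, p[1] (the probability of x2) is driven toward 1, while x1 stays undetermined - which matches the formula's actual solutions. In the paper this smooth layer sits downstream of a ConvNet that reads the handwritten digits, so the whole pipeline trains end to end.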


Thanks - that's great! I'm familiar with Kolter's work on adversarial robustness.



