It’s probably been 15 years since I looked in detail at a formal treatment of statistical mechanics, so maybe this is well-trodden territory, but... has anyone formalized the idea that the second law can be interpreted as a general inability of computer programs to predict one another? I’m also channeling Wolfram here, though I’m not sure he ever expressed it exactly this way. The thing I’m imagining would be: set up some (preferably discrete) dynamical system, and then another, different kind of system that you fit variationally to the first. The claim would be that the mutual information between the state variables of the second and the first will always gradually decrease, unless they are exactly the same family, in which case the fit rediscovers the exact dynamics and the two systems are identical. So intuitively, the second law would correspond to the claim that different families of programs cannot accurately simulate each other for very long. In the special case of a coarse graining, this should give you the more familiar story about entropy and the ‘gap’ between macrostates and microstates, but the real story is more general than that: an empirical fact about the “computational universe”, with physics as a special case.
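A toy sketch of the experiment described above, with everything in it my own assumptions rather than anything from the thread: the "first system" is elementary CA rule 30 on a ring, the second family is strictly smaller (update rules that see only the left neighbor and self), the "variational fit" is just a majority-vote lookup table over observed transitions, and `mutual_info` is a plug-in estimate. The hypothesis would predict the per-step MI between the two systems' states decays toward zero.

```python
import random
from math import log2

N, T = 256, 60          # ring size, number of steps (arbitrary choices)
random.seed(0)

def rule30(s):
    """One step of elementary CA rule 30 on a ring of N cells."""
    return [s[(i - 1) % N] ^ (s[i] | s[(i + 1) % N]) for i in range(N)]

def fit_surrogate(history):
    """Fit the best update from a *smaller* family: rules that see only
    (left, self). The 'fit' is a majority vote over observed rule-30
    transitions, standing in for a proper variational fit."""
    counts = {k: [0, 0] for k in range(4)}
    for s, s2 in zip(history, history[1:]):
        for i in range(N):
            key = 2 * s[(i - 1) % N] + s[i]
            counts[key][s2[i]] += 1
    table = {k: int(c[1] >= c[0]) for k, c in counts.items()}
    return lambda s: [table[2 * s[(i - 1) % N] + s[i]] for i in range(N)]

def mutual_info(a, b):
    """Plug-in estimate (bits) of MI between two equal-length bit lists."""
    n = len(a)
    joint = {}
    for x, y in zip(a, b):
        joint[(x, y)] = joint.get((x, y), 0) + 1
    pa = [a.count(0) / n, a.count(1) / n]
    pb = [b.count(0) / n, b.count(1) / n]
    mi = 0.0
    for (x, y), c in joint.items():
        p = c / n
        mi += p * log2(p / (pa[x] * pb[y]))
    return mi

# Training run: observe the true system, then fit the surrogate to it.
s = [random.randint(0, 1) for _ in range(N)]
history = [s]
for _ in range(T):
    s = rule30(s)
    history.append(s)
surrogate = fit_surrogate(history)

# Fresh run: evolve both systems from the same state, track MI per step.
true_s = sur_s = [random.randint(0, 1) for _ in range(N)]
mis = []
for _ in range(T):
    true_s = rule30(true_s)
    sur_s = surrogate(sur_s)
    mis.append(mutual_info(true_s, sur_s))

print(f"MI at step 1: {mis[0]:.3f} bits, at step {T}: {mis[-1]:.3f} bits")
```

With this particular mismatch the agreement is high at first (the fitted table matches rule 30 exactly on half its inputs) and the MI then drops toward the noise floor, since rule 30 is chaotic and the surrogate can't track it. Whether the decay is always *gradual*, as the conjecture would want, is exactly the open question.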
>has anyone formalized the idea that the second law can be interpreted as a general inability of computer programs to predict one another?
Arieh Ben-Naim thinks entropy is better described by Shannon's Measure of Information. I haven't actually read any of his books, but I've been meaning to "one of these days".
Thanks, I do appreciate you extracting this. I did follow one or two of these originally. But I'm not going to buy an ebook on Amazon to figure out if he has anything useful to say.