My argument here would be that this isn't that new, all told? And... I would be shocked to find a lot of disagreement there. Indeed, other threads are already pointing out that terms already existed that covered this general idea.
My problem with antifragile, as often offered, is that it is positioned as something that gets stronger from being damaged, full stop. But... there is literally nothing on earth that would withstand the sun going nova, so the idea has obvious limits. And once you accept that it only holds within limits, you are back to standard models of feedback and growth. And once you are back there, you are covering the same ground as many other discussions.
It is a cute model, mind. And somewhat fun to play with. Also worth knowing that some systems will react violently to small changes. Think flashover in building fires. It just doesn't bring much new to the table, all told.
Edit: I meant to add "fair enough!" at the top of this. Is a valid point to make! :D
I guess it's important that the negative signal be relatively small compared to the affected system, like antivenom doses, but it also needs to be predictable/differentiable to some degree. If two different doses of venom never shared the same molecular patterns, it would be impossible to immunize against it.
A one-time supernova event would destroy a solar system, but the universe itself is antifragile to supernovas, and actually has built emergent spatiotemporal structures which depend on supernovas as part of their lifecycle. New solar systems rise up from old supernovas.
Predictability/differentiability of internal and external states is key for building robust systems, negative signals alone are often just detrimental. You can't learn from chaos.
Right. If you have systems that can adapt to a stimulus in ways that prepare them for its recurrence, they can look like this antifragile system. If the answer is just to be bigger... it is not a compelling model.
So building your immunity with low doses doesn't mean you are antifragile. It means you are trainable. And even then, you are operating within tolerance levels. And will never withstand some poisons.
Indeed, training is largely rehearsing some specific stimulus to be ready for it in a non-training environment. We don't think of athletes as antifragile, though. Do we?
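The training-within-tolerance point above can be sketched as a toy model. Everything here (the `Organism` class, the numbers, the 10%-of-dose training rule) is made up purely for illustration, not anyone's actual model of hormesis:

```python
# Toy model: small, survivable doses raise tolerance, but there is a hard
# ceiling, and any dose beyond current tolerance is simply damage.
class Organism:
    def __init__(self, tolerance=1.0, ceiling=5.0):
        self.tolerance = tolerance  # max dose currently survivable
        self.ceiling = ceiling      # hard biological limit
        self.alive = True

    def expose(self, dose):
        if not self.alive:
            return
        if dose > self.tolerance:
            self.alive = False      # beyond tolerance: harm, not "training"
        else:
            # training effect: sub-lethal stress nudges tolerance up, capped
            self.tolerance = min(self.ceiling, self.tolerance + 0.1 * dose)

org = Organism()
for _ in range(100):
    org.expose(0.9 * org.tolerance)  # repeated sub-lethal doses
print(org.alive, org.tolerance)      # tolerance has climbed to the ceiling
org.expose(100.0)                    # the "sun going nova" dose
print(org.alive)                     # False: still fragile beyond its limits
```

The point of the sketch: the system improves from small stressors, which is the behavior usually labeled antifragile, yet the ceiling and the lethal dose are still there. Trainable within limits, not stronger from damage full stop.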
I don't think it's very common to experience antifragility outside of abstract complex systems such as economic markets. My intuition is that physical systems which exhibit antifragility tend to be metasystems, and the antifragility is described in the context of achieving specific states of subsystems.
For example, with the universe in relation to supernovas, we can say that a supernova wipes out a solar system. But new solar systems then take their place, with a different metal distribution that may be more conducive to advanced intelligence. If advanced intelligence is the goal, we might say that the universe is antifragile to supernovas, or more generally to the effects of gravity.
However, it's definitely a matter of perspective given that supernovas themselves are entirely inconsequential to the universe as a whole, and are only relevant to subsystems at specific emergent layers within the universe.
We could also form similar conclusions about natural evolution in general.
We might even be able to generalize further and posit that there is a natural tendency for order to arise out of chaos, and so order itself is antifragile to chaos, given that order thrives under the right chaotic conditions[0] and isn't simply immunized against it. So antifragility may exist as a fundamental property of the foundation of stuff, but it's not automatically an inheritable property of subsystems.
It all becomes very abstract, but at least we can define some constraints, namely that antifragility should be described in terms of optimization goals and measurement of specific states, and are thus subjective to the observer and not an innate property of the universe or any other system.
My intuition is that antifragility is often mistaken for controlled/directed growth. In that it works well in examples where things are able to die off and be regrown. Muscles, in that regard, are able to withstand tears and grow back. They will do so largely where they were torn.
There are limits. It's not about arbitrary negative stimuli, and the entire idea only applies to certain kinds of systems. These criteria are covered in Taleb's books. The limits are well known in biology.
My gut says my main gripe is with many of the examples used. They often ignore that they all involve things in a healthy state of growth, and that what is largely happening is growth being refocused based on stimuli. More, there is no guarantee that refocusing growth based on feedback will be ideal for future stimuli.
Which, I confess, is a silly thing for me to get upset about. Luckily, I'm confident I sound far more upset on this message board than I actually am. :D