I posted this in part because it explains why I think attempts to regulate AI (including the recently-proposed moratorium) have a decent chance of working. They won't prevent dangerous misuse of AI technology altogether, but they can substantially reduce the risks, much like with bioweapons.
The incentives are different. Bioweapons are mainly destructive and not useful for achieving any sane aims (they don't secure energy, they don't increase local prosperity, they don't win land wars, they don't target political opponents, etc.). Anyone who defects from a ban on them gains a target on their back and nothing else. I can't think of a use for bioweapons.
Contrast that with AI. Literally anyone who gets involved, from a teenager upward, gains magical new powers across a range of arts and sciences, as well as new insights into any questions they have. There is a vague promise of a utopia where it no longer makes sense for humans to work (it probably won't work out that way, but whatever). Defectors gain massive advantages and can also maintain plausible deniability.
I can't speak for the parent, but there is a point at which I just stop caring. Either things will get bad enough that people revolt, like they tend to do when it's too late, or they won't. Nihilism is one hell of a drug.
Separately, on a personal level, I find social justice annoying.
But then I have coffee and play with my kid and no longer wish to destroy the world.
I don't know how much "hidden" knowledge there is in AI research. In biotech, those little details can easily make or break an experiment, and there are just so many of them; they are all trivial once known but absolutely critical. Some of them actually have zero reason to exist, but they persist anyway, like a block of legacy code commented "do not remove, shit breaks when removed". Nobody knows why, and everybody just follows it religiously.
So the same approach may or may not work for AI. Forcing AI research underground may not hinder it that much...
The issue is that most "underground" groups do not have access to top researchers or experienced engineers; those tend to be public figures, or people who want to become public figures. Furthermore, complete secrecy requires tightly restricting communication in a way that makes collaboration even within an organization harder; note how terrorist groups have to operate in isolated "cells."
I'm sure that anyone could replicate GPT-4, but creating radical new advances would be very hard to do in secret.
Furthermore, while many such groups (particularly state actors) have plenty of funding, others would need a great deal of time and effort to raise or steal the necessary cash in secret, potentially making the costs outweigh the benefits. Any profits would also need to be laundered, which reasonable crypto regulations would make harder.
Ultimately, nobody can entirely stop "underground" AI. If you think AI will bring about the singularity and destroy mankind, laws cannot stop it. But if your worries are more pedestrian, merely reducing the prevalence of such systems is more than enough.