Haha, I'll defer to you then. But I'm still curious about this:
> you have to support arbitrary compiled code, with all the intentional or unintentional memory safety violations
But there are multiple barriers here, are there not? First, your code needs to be signed (and notarized, at some point), and then at runtime the app sandbox/entitlements/process separation ensures there's not much you can do even if you're running arbitrary code. Sure, the first "barrier" isn't necessarily all that restrictive or thorough, but the other one should ideally stop anything horrible?
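For concreteness, here's roughly what I mean by the second barrier, as seen from inside a sandboxed process - a rough C sketch (the path is just a placeholder), assuming the macOS App Sandbox denies file access outside the app's container unless an entitlement grants it:

```c
/* Hypothetical probe run from inside a sandboxed extension process.
 * Under the macOS App Sandbox, paths outside the app's container
 * (and not covered by an entitlement) should be denied at the
 * kernel/sandbox layer regardless of what the compiled code tries. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Placeholder path, chosen purely for illustration; it sits outside
     * a typical sandbox container, so the open should be refused. */
    int fd = open("/Users/someone/Documents/notes.txt", O_RDONLY);
    if (fd < 0) {
        printf("blocked: %s\n", strerror(errno));  /* the second barrier held */
        return 0;
    }
    printf("opened fd %d - the sandbox profile allowed this path\n", fd);
    close(fd);
    return 0;
}
```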
It's not so much a problem of signed vs. unsigned code.
There's the "are you malware that's got through whatever rules exist in the target App Store", but there's also the "is the extension code buggy".
In my experience the latter is the real killer - there are all sorts of ways an App Store review process can be used to prevent malware, but they're all dependent on the code doing what static analysis claims it's doing - they can't protect against terrible code. My experience writing software has taught me that no one writes 100% correct code, 100% of the time.
Historically GPU drivers have been a terrible source of bugs (it's why WebGL has such tight restrictions on features and shader syntax) - not because of intentional malfeasance, but because you're processing arbitrary untrusted content in a trusted environment (my understanding/belief is that the Windows driver model has pushed this out of the kernel? please confirm/deny). So functionally forcing that into a restricted execution environment is a win.
Obviously the memory model of WASM still allows for things like use-after-free, out-of-bounds access, etc; but at least those stay confined to the module's linear memory rather than spilling into the host address space.
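To make that concrete, here's a minimal sketch (not any real runtime's code, names invented) of how a host bounds-checks guest loads against linear memory - a wild guest "pointer" can only trap or land in the guest's own heap, never wander into host memory:

```c
/* Minimal sketch of WASM-style linear memory: every guest address is an
 * offset into a host-managed buffer, and every access is bounds-checked,
 * so a buggy guest pointer cannot reach arbitrary host memory. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    uint8_t *base;   /* host allocation backing the guest's linear memory */
    size_t   size;   /* current linear memory size in bytes */
} LinearMemory;

/* Guest "pointers" are plain 32-bit offsets, never host addresses. */
static int guest_load_u32(const LinearMemory *mem, uint32_t addr, uint32_t *out) {
    if ((uint64_t)addr + sizeof(uint32_t) > mem->size) {
        return -1;  /* out of bounds: trap instead of touching host memory */
    }
    memcpy(out, mem->base + addr, sizeof(uint32_t));
    return 0;
}

int main(void) {
    LinearMemory mem = { calloc(1, 65536), 65536 };  /* one 64 KiB WASM page */
    uint32_t value = 0;

    /* A stale/dangling guest pointer still lands inside linear memory... */
    if (guest_load_u32(&mem, 1024, &value) == 0)
        printf("in-bounds load: %u\n", value);

    /* ...while a wild one is caught by the bounds check and trapped. */
    if (guest_load_u32(&mem, 0xFFFFFF00u, &value) != 0)
        printf("out-of-bounds load trapped\n");

    free(mem.base);
    return 0;
}
```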
I guess you're assuming that the platform/runtime that's doing the isolation is close to being correct, which we both know isn't quite true ;) The only benefit I see is that you're adding an additional level of abstraction (namely, you're executing a vetted selection of native code rather than arbitrary native code), which makes reaching the point where you can actually break things harder. Was this the point you were trying to make?
The idea is to reduce the ability of the untrusted code to go wrong and compromise the host.
Going wrong can mean “exploited by malware” through to “extension code trawls the host process address space to provide ‘features’”
The latter of these two used to happen all the time with “haxies” on OS X.
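For a sense of why that was so easy, here's a toy single-file sketch (names invented; a real haxie arrived via code injection or dlopen, but the shared address space is the same): once the extension runs in-process, nothing enforces any API boundary between it and the host's data.

```c
/* Toy illustration of the "haxie" problem: an in-process native extension
 * shares the host's address space, so once it is loaded there is no
 * boundary stopping it from reading or patching host state. */
#include <stdio.h>

/* Host-side state the extension was never "given" access to. */
char g_session_token[] = "host-secret-token";

/* Stand-in for code living in a loaded native extension. It can simply
 * name the host's data (or locate it via dlsym()/pointer scanning) and
 * read or patch it to provide unsanctioned "features". */
static void extension_entry_point(void) {
    printf("extension read host memory: %s\n", g_session_token);
}

int main(void) {
    extension_entry_point();
    return 0;
}
```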
The knock-on security benefit of providing a semi-virtualised environment for third party/untrusted code is that if the VM is exploited you are able to fix and ship a fix for the VM. You can't fix the untrusted code.