That's not the goal of a VM/runtime, though: it only ensures that the hosted code cannot escape the sandbox, and provides a correct implementation of the semantics of each VM opcode. The hosted code's computation implemented with those opcodes is a black box to the VM.
Consider: should the VM somehow try to analyze the running code and determine when it's about to commit a logical error and return "forbidden" data? What specification states which data is forbidden?
Consider: even the most high-level VM can execute a program with logical errors if it is Turing-complete. A Java or Erlang program running on a bug-free, fully-compliant implementation of the respective VM can still get (e.g.) an authorization condition wrong, or incorrectly implement a security protocol, or return data from the wrong (in-bounds, but wrong!) index in an array.
Indeed, what WASM advocates cannot credibly claim is safety: that it is safer than every other bytecode format since 1958, when it isn't so.
Secondly, it could have been designed with bounds checking for every kind of data access.
The CLR is honest about this: when C++/CLI uses C-style low-level tricks, the assembly is considered tainted, and executing it requires explicitly allowing unsafe assemblies.
An idea that goes back to the Burroughs B5000, where tainted binaries (code using unsafe code blocks) required admin permission to execute.