It might simply be a matter of comfort. We don't know how human cognition works, but we have a ton of practical experience with it, so we feel like we have a pretty good handle on why people make the decisions they do, and on how to train and shape those decisions. As a specific example, a human doesn't typically trigger their own death or destruction without a significant reason, one that can be articulated and potentially detected ahead of time.
In one sense we know a lot more about how machine learning works--we built it ourselves, and recently--but precisely because it is so new, we have none of that accumulated practical experience, so we feel the need for deeper understanding to mitigate risks. To go back to the self-destruction example: a computer-driven system will happily destroy itself for all sorts of reasons, including incredibly trivial errors (the Mars Climate Orbiter was lost to a metric/imperial unit mix-up).