Indeed, that's the main component of gold-standard replication of results. To replicate a chemistry experiment, for example, you're not supposed to go to the original lab, use their existing apparatus, and just re-run the experiment. Instead, there's stronger confidence in the results if you replicate it with your own equipment in your own lab, reconstructing any necessary components from descriptions in the paper. That way you know the results were actually due to what the paper claimed, rather than some overlooked happenstance in the original lab or apparatus.
Of course, reimplementation can be quite time-consuming, which is the main problem. On the other hand, sharing code can actually decrease the likelihood of anyone ever reimplementing the algorithm: everyone just reuses the same (possibly buggy) code forever without looking at it.
OK, but there's no way subtle bugs will be found unless the code is released. If you reimplement a non-trivial piece of code, both versions will have bugs, and both will embody subtle design trade-offs (which won't be documented in the paper). Will you publish your results even though they disagree with the existing, accepted ones? And if you do, what will it achieve?
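To illustrate how disagreement between two independent implementations can surface a subtle bug, here is a hypothetical sketch (not from the discussion above): two "replications" of a variance computation, both matching a textbook formula, that diverge badly on data with a large mean because one formula is numerically unstable. Neither author would notice anything wrong in isolation; only the comparison forces a closer look.

```python
import random

def variance_naive(xs):
    # One-pass "textbook" formula: E[x^2] - (E[x])^2.
    # Mathematically correct, but suffers catastrophic cancellation
    # when the data's mean is large relative to its spread.
    n = len(xs)
    s = s2 = 0.0
    for x in xs:
        s += x
        s2 += x * x
    return s2 / n - (s / n) ** 2

def variance_twopass(xs):
    # Two-pass formula: subtract the mean first, then average the
    # squared deviations. Numerically stable.
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

random.seed(0)
# Uniform noise (true variance 1/12 ~ 0.083) sitting on a huge offset.
data = [1e9 + random.random() for _ in range(100_000)]

# The two implementations disagree by orders of magnitude -- exactly
# the kind of conflict that prompts someone to dig into both codebases.
print(variance_naive(data))
print(variance_twopass(data))
```

The hypothetical point: if only the naive version's code had been shared and reused, its wrong answers on this kind of data might never have been questioned.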
Other people can validate that you didn't make any blatant errors.