I've spent years writing in 0-origin languages and years writing in 1-origin languages. I do not agree with your assertion that 0-origin is more error-prone; quite the contrary. If you're manipulating sequences using indexing, there are more cases in a 1-origin system where you have to remember to add or subtract 1; a 0-origin system has fewer of these, and if you learn good habits, you can get rid of almost all of them.
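To make "good habits" concrete: a minimal Python sketch, assuming the main habit in question is working with half-open [start, end) ranges (Python's native convention); the equivalent 1-origin, closed-interval arithmetic is shown in comments for comparison.

    # 0-origin, half-open [start, end): slice arithmetic stays adjustment-free.
    items = ["a", "b", "c", "d", "e", "f"]
    n = len(items)

    # Length of a slice is simply end - start.
    start, end = 2, 5
    assert end - start == len(items[start:end])

    # Adjacent slices share a boundary: [0, k) + [k, n) covers everything.
    k = 3
    assert items[:k] + items[k:] == items

    # Splitting into fixed-size chunks needs no +1/-1 anywhere.
    chunk = 2
    chunks = [items[i:i + chunk] for i in range(0, n, chunk)]
    assert chunks == [["a", "b"], ["c", "d"], ["e", "f"]]

    # The same operations with 1-origin, closed [first, last] intervals:
    #   length          = last - first + 1
    #   next run starts = last + 1
    #   chunk i (1-based) spans [(i - 1) * chunk + 1, i * chunk]
    # Every formula sprouts a +1 or -1 that has to be remembered.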
(And for the record, I started out in the 1-origin Basic and Fortran world; my preference isn't just a matter of which one I learned first.)
I started out with 0-origin, and for pure programming I agree. The problem is that real-world indices almost invariably start at 1, and that is how everyone who isn't a programmer counts, so automating anything based on a real-world process requires a translation. Likewise, when testing and debugging against real-world data, everything is going to be off by one. It's just asking for trouble.
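A minimal sketch of that translation, assuming a 1-based real-world identifier (line numbers here are just a stand-in, and the helper name is illustrative, not from any library):

    # Real-world artifacts (pages, invoice lines, seat numbers) count from 1;
    # Python lists count from 0, so every boundary needs a conversion.
    document = ["first line", "second line", "third line"]

    def get_line(doc, lineno):
        """Fetch by the 1-based number a user or spec would quote."""
        return doc[lineno - 1]  # the translation lives here

    assert get_line(document, 1) == "first line"
    assert get_line(document, 3) == "third line"

    # The classic trap: forget the -1 once and you silently read the wrong
    # record, or blow up only at the final index:
    #   document[lineno]   # off by one for every query; IndexError at lineno == 3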