
> The worse the developer, the more tests he'll write.

As always, generalization is the tool of the fool (sorry for the fool part, but it rhymes ;) ). Writing pointless stubs/mocks and testing the execution order of statements is definitely a bad pattern; writing many good functional, e2e, and integration tests, however, is not.
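
As a minimal sketch of that contrast (pytest-style; the checkout function, cart shape, and gateway here are hypothetical, invented purely for illustration):

    from unittest.mock import MagicMock, call

    # Hypothetical system under test: totals a cart and charges a payment gateway.
    def checkout(cart, gateway):
        total = sum(item["price"] * item["qty"] for item in cart)
        gateway.charge(total)
        return {"total": total}

    # The bad pattern: stubbing the collaborator and asserting the exact
    # internal call sequence. This pins the implementation, not the behavior.
    def test_checkout_internal_calls_brittle():
        gateway = MagicMock()
        checkout([{"price": 5, "qty": 2}], gateway)
        assert gateway.mock_calls == [call.charge(10)]  # breaks on any restructuring

    # The functional alternative: assert only the observable outcome.
    def test_checkout_returns_correct_total():
        gateway = MagicMock()
        receipt = checkout([{"price": 5, "qty": 2}], gateway)
        assert receipt["total"] == 10

The first test fails the moment checkout is restructured, even if its behavior is unchanged; the second fails only when the feature actually breaks.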



100% agree. Full coverage with acceptance and integration tests is really good. If you can get a test case for every feature your app is supposed to have, you can develop quickly and ship often, knowing that nothing will break. Extensive unit tests that don't test what features do but how they are implemented are usually a waste of time, in my opinion, unless there is some complicated and critical algorithm that needs extra visibility and protection. Otherwise, the only purpose of those unit tests is to break on good refactors and double dev time.

When you are cleaning up tech debt and make hundreds of lines of changes, and no acceptance tests break (because everything still works as it should) but 50 unit tests do... I've wasted so much of my life on that.
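
A tiny illustration of why behavior-level tests survive that kind of cleanup (both functions are hypothetical stand-ins for before/after versions of the same code):

    # Hypothetical refactor: the same feature reimplemented. A test pinned to
    # the internals of total_v1 would break; a test of the observable result
    # passes against both versions.

    def total_v1(cart):
        # before: explicit accumulation loop
        total = 0
        for item in cart:
            total += item["price"] * item["qty"]
        return total

    def total_v2(cart):
        # after: same behavior, different implementation
        return sum(item["price"] * item["qty"] for item in cart)

    def test_total_is_stable_across_the_refactor():
        cart = [{"price": 5, "qty": 2}, {"price": 3, "qty": 1}]
        assert total_v1(cart) == total_v2(cart) == 13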


On top of that, this statement will lead (some) new developers to say, "Oh, I don't need tests, because the gosus write only a few tests as well." The same pattern can be observed in DOTA and other games, where newbies say, "Oh, I don't need wards/intel, because pro players don't need them either." The difference is that pro players already know what's going on.

Also worth a read while we're at it: https://www.linkedin.com/pulse/20140623171038-114018241-test...


Generalization is at the heart of science. Charging that a result doesn't generalize is one of the most frustrating attacks you can launch on a scientist's empiricism.


We are talking about software testing here.

To a first approximation, nobody is being empirical, and there is no science being done.


When you are writing tests, that's an empirical, rather than theoretical, approach to software correctness.

When programmers change a factor to "see what breaks", that is very much an empirical activity, and it is part of the programmer's theory-building about a phenomenon.

If a young child takes a gear from a watch and observes the watch break, that is very much an empirical activity. It is also the beginning of theory-building.

You don't need MANOVA to engage in empiricism.


I should have been clearer; I see now that my wording was ambiguous.

Approximately nobody is being empirical about what works well in testing and what does not. Use this approach vs. that approach; do this; no, do this instead. It's all largely heuristic.

Individuals performing testing and debugging are usually at least mostly empirical, I agree (although occasionally the rubber chickens come out).



