Probably a combination of the above. They are also writing tests for each successfully crushed bug:
Whenever a bug is reported against SQLite, that bug is not considered fixed until new test cases have been added to the TCL test suite which would exhibit the bug in an unpatched version of SQLite. Over the years, this has resulted in thousands and thousands of new tests being added to the TCL test suite. These regression tests ensure that bugs that have been fixed in the past are not reintroduced into future versions of SQLite.
This is precisely how regression tests are supposed to be built; to wit:
1. Find/identify/isolate the bug;
2. Create a test that fails if the bug is not fixed;
3. Run the test to make sure it detects the bug;
4. Fix the bug;
5. Run the test to make sure it passes now that the bug is fixed;
6. Add the test to the test suite and checkin/push the bugfix.
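The steps above can be sketched in a few lines. This is a minimal, hypothetical example (the `slugify` function and its bug are invented for illustration): the test at the bottom would fail against the buggy version and passes against the fix.

```python
# Hypothetical bug report: slugify("Hello  World") produced "hello--world"
# because the old code did title.lower().replace(" ", "-") and kept the
# empty segment between consecutive spaces.

def slugify(title):
    # The fix: str.split() with no arguments collapses any run of
    # whitespace, so consecutive spaces no longer yield empty segments.
    return "-".join(title.lower().split())

def test_slugify_collapses_whitespace_runs():
    # Regression test (step 2): this assertion fails on the unpatched
    # version and passes once the bug is fixed.
    assert slugify("Hello  World") == "hello-world"

test_slugify_collapses_whitespace_runs()
```

Once this test lives in the suite, reintroducing the old `replace`-based implementation makes it go red again, which is the whole point of step 6.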
I don't always get to do this on my projects, but it's a good habit to get into. Having a good, easy-to-use test framework already set up helps a lot here (if tests are hard to write or take too much time, they won't get written).
I do it in a slightly different order:
1. Find/identify/isolate the bug;
2. Fix the bug;
3. Create a test that fails if the bug is not fixed;
4. Run the test to make sure it passes now that the bug is fixed;
5. Revert the fix and run the test to make sure it detects the bug;
6. Add the test to the test suite and checkin/push the bugfix.
I prefer this order because if I make a mistake in step 1 I usually realize it in step 2, where I feel like you might not realize it until step 4. I'd be curious to know if there are advantages you know of to the order in which you do it.
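Step 5 of this order boils down to one check: the test must discriminate between the unpatched and the patched code. One way to sketch that without an actual revert (all function names here are hypothetical) is to keep the old implementation around just long enough to run the new test against both versions:

```python
# Hypothetical bug: a port parser crashed on URLs with no explicit port.

def parse_port_buggy(url):
    # Old behavior: blindly takes everything after the last colon,
    # so "http://example.com" raises ValueError.
    return int(url.rsplit(":", 1)[1])

def parse_port_fixed(url):
    # Fix: default to port 80 when no numeric port is present.
    head, sep, tail = url.rpartition(":")
    return int(tail) if sep and tail.isdigit() else 80

def regression_test(parse_port):
    # The new regression test, parameterized over an implementation.
    try:
        return parse_port("http://example.com") == 80
    except ValueError:
        return False

assert regression_test(parse_port_fixed)      # step 4: passes with the fix
assert not regression_test(parse_port_buggy)  # step 5: catches the unpatched code
```

In practice the "buggy" version is simply your working tree with the fix reverted (or stashed), but the property being verified is the same.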
I work in the same order as npsimons. It's a similar philosophy to the red-green-refactor strategy (http://bit.ly/OazqR8).
The advantage, to me, is that you focus on the behavior of the application rather than the code. I have a tendency to get off-track when I'm coding and I'll start on refactors that, in hindsight, were a terrible idea.
By forcing myself to be sure that the feature/bug is something I really want, I stay on-track because that damned test keeps failing and I just want to make it go green! By writing the test beforehand, I can be sure that it's what I really want and not just what's easy to code.
That said, both ways work and I sometimes switch to an approach like yours.
The advantage, to me, is that you focus on the behavior of the application rather than the code. I have a tendency to get off-track when I'm coding and I'll start on refactors that, in hindsight, were a terrible idea.
Very much the same for me; I'm much like Lenny from "Memento" at times ("now what was I doing?"). Add to this that reproducing the bug is essentially what you're doing when you write a regression test for it, and it falls in line with TDD as applied to maintenance (keep coding until all the tests pass). One last thing: version control makes it kind of a wash these days (you can just revert, as the GP said), but if you fix the bug first, are you certain your test is catching it? I like to have a piece of code where I can say "yes, when I do this, my code fails; now to fix it."