I concur with the Nutch vote, but more specifically, take a look at the crawler code in the Nutch source trunk written for use with Hadoop. That is probably a good place to start. Also worth a look is Heritrix, the crawler used by archive.org: http://sourceforge.net/projects/archive-crawler
Sadly, this too is written in Java.
Edit2: Polybot is another Python-based crawler, but the code isn't available. However, the paper has some interesting ideas:
Design and Implementation of a High-Performance Distributed Web Crawler. V. Shkapenyuk and T. Suel. IEEE International Conference on Data Engineering, February 2002. http://cis.poly.edu/westlab/polybot/
Thanks. To be honest, I'd still like to use Python (any Python suggestions?), but I'll give this a go. I'm going to do some more research and might post back my findings if anyone is interested in critiquing them. I'm creating this startup from scratch, so if anyone is interested in the crawler side of things, I'd be happy to chat about collaboration or sharing ideas.
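For a Python starting point, here's a minimal sketch of the link-extraction step at the core of any crawler, using only the standard library (no network access; the HTML string and class name are just illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects absolute URLs from <a href="..."> tags on one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's base URL
                    self.links.append(urljoin(self.base_url, value))

# Example page; a real crawler would fetch this with urllib or similar
page = '<a href="/about">About</a> <a href="http://example.org/x">X</a>'
parser = LinkExtractor("http://example.com/")
parser.feed(page)
print(parser.links)  # ['http://example.com/about', 'http://example.org/x']
```

A full crawler would feed these links into a frontier queue with a "seen" set for deduplication, plus politeness delays per host — which is roughly the architecture the Shkapenyuk/Suel paper scales up.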