If you have to choose between a kernel and Docker, just choose Docker. Python can't get its shit together deployment-wise, and Docker is the one true route (tm) to Python deployment happiness.
forget virtualenv; forget package dependencies on conflicting versions of libxml; forget coworkers that have 3 different conflicting versions of requests scattered through various services, and goddamnit I just want to run a dev build; forget coworkers that scribble droppings all over the filesystem, and assume certain services will never coexist on the same box
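The conflicting-versions complaint above is exactly what per-service images address: each service ships its own pinned dependencies inside its own filesystem, so two services pinning different versions of requests or libxml never meet. A minimal sketch of such a Dockerfile (the base image tag, package versions, and app layout are all hypothetical):

```dockerfile
# One image per service: this service's libxml2 and requests
# versions never collide with any other service's.
FROM python:3.12-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends libxml2 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# requirements.txt pins exact versions, e.g. requests==2.31.0
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

No virtualenv needed: the image itself is the isolated environment.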
Ha. Wait until you need to run a build of a shared Perl codebase against unit tests in all of the dependent codebases... but some of those codebases compile and run C (or C++) programs... and some of them depend on conflicting versions of GCC!
"If we hit the bullseye, the rest of the dominos will fall like a house of cards... checkmate!" -- Zapp Brannigan
> forget coworkers that scribble droppings all over the filesystem, and assume certain services will never coexist
I think this tends to be less of a problem than the desire to have a build artifact that can be reliably deployed to multiple servers, rather than having the "build" process and "deploy" process hopelessly intertwined with each other.
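That build/deploy separation falls out naturally once the image is the artifact: you build once, push, and every server pulls identical bits. A hedged sketch of the workflow (the registry host, image name, and tag are made up):

```shell
# Build step: runs once, on a build machine, producing an immutable artifact.
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2

# Deploy step: runs on each server; no compilers, no pip, no surprises.
docker pull registry.example.com/myapp:1.4.2
docker run -d --name myapp registry.example.com/myapp:1.4.2
```

The "build" and "deploy" processes can't get intertwined, because the only thing that crosses between them is the tagged image.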
Just use Docker. It's going to go like this:
step 1: docker
step 2: happy