I wonder why I haven't heard of Kontena. It looks pretty good. Currently running a Kubernetes cluster at work. It's great in many ways but I wish it had tighter integration with things like Amazon ALB/Route53/etc. Maybe Kontena will beat it to the punch on some of that?
"However, [Kubernetes] does not include overlay networking, DNS, load balancing, aggregated logging, VPN access or private image repositories"
In fact, it has had all but the last two features built-in and configured by default since I started using Kubernetes (version 1.3). And while having an integrated private registry is great, the usefulness feels a bit overstated when configuring one inside a container environment is already incredibly easy.
The FAQ is arguably correct. Kubernetes doesn't come with overlay networking. It can work with different clouds, but if you're on, say, DigitalOcean, you will have to run Flannel or Calico or something similar.
Kubernetes doesn't have DNS built in, but it comes with an add-on you can install, KubeDNS, which allows internal DNS queries to resolve service names.
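The names kube-dns answers follow a fixed pattern. As a rough sketch (the helper function below is hypothetical; `cluster.local` is the usual default cluster domain):

```python
def service_fqdn(service, namespace="default", cluster_domain="cluster.local"):
    """Compose the fully qualified DNS name kube-dns answers for a Service.

    Pods in the same namespace can usually use just the bare service name,
    or "<service>.<namespace>" across namespaces; this is the full form.
    """
    return f"{service}.{namespace}.svc.{cluster_domain}"

# A pod in any namespace can reach the "redis" Service in "prod" via:
name = service_fqdn("redis", "prod")  # redis.prod.svc.cluster.local
```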
Kubernetes doesn't aggregate logs, either. Kubelet knows how to get at Docker logs, but they're still just sitting in /var/lib/docker/containers somewhere. To aggregate them, you need to run Fluentd or Logspout or similar. If you run Kubernetes via Google Container Engine, Fluentd is set up for you through a DaemonSet (though it collects everything into StackDriver Logging, which is not great), but it's not built in.
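The core of what a node agent like Fluentd does is small: read per-container log files and tag each line with pod metadata before forwarding it. A minimal sketch, assuming the kubelet's symlink naming convention under /var/log/containers (an assumption here; the exact layout varies by version and container runtime):

```python
import re
from pathlib import Path

# Assumed filename convention: "<pod>_<namespace>_<container>-<64-hex-id>.log"
LOG_NAME = re.compile(
    r"(?P<pod>[^_]+)_(?P<ns>[^_]+)_(?P<container>.+)-[0-9a-f]{64}\.log$"
)

def tag_line(path, line):
    """Attach pod metadata (parsed from the filename) to a raw log line."""
    m = LOG_NAME.match(Path(path).name)
    meta = m.groupdict() if m else {}
    return {**meta, "log": line}

# Example: one log line from an nginx container in pod "web-1", namespace "prod"
path = f"/var/log/containers/web-1_prod_nginx-{'0' * 64}.log"
record = tag_line(path, "GET / 200")
```

A real agent would also tail the files, batch records, and ship them to a backend such as Elasticsearch or StackDriver; this only shows the tagging step.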
Kubernetes doesn't really have load balancing. You can have it allocate a cloud LB if you're on GCP or AWS, or you can set up your own Nginx ingress controller, or some other LB such as Traefik, which integrates nicely with K8s. For services you get a poor man's load balancing through iptables and kube-proxy, but it's purely round-robin.
Well, you would be correct to say that Kube-DNS is a "cluster add-on." However, something being a cluster add-on doesn't make it not part of Kubernetes. It is no less a part of Kubernetes than kube-proxy. As far as I can tell, the difference is that cluster add-ons run _within_ the cluster, alongside other services. With no cherry picking, here is a quick excerpt from the Kubernetes README about cluster add-ons:
> Cluster add-ons are resources like Services and Deployments (with pods) that are shipped with the Kubernetes binaries and are considered an inherent part of the Kubernetes clusters.
I do not think there's a standard set of cluster add-ons that are guaranteed to be installed in any given setup, but thus far I've done a few Kubernetes deployments and it seems the following come by default anywhere:
- fluentd for logging (w/ StackDriver on GCE and ELK elsewhere)
- kube-dns for dns
- grafana + heapster for container metrics
- dashboard (of course)
Re: load balancing. Calling Kubernetes Services poor man's load balancing seems rather disingenuous. Sure, it's layer 4 load balancing, and it is round robin. But being lower level does not make it useless; in fact, I have some production deployments that are not HTTP. Kubernetes does have a primitive for layer 7 load balancing and HTTP routing in the Ingress resource, which admittedly is not complete yet, but I'm grateful that layer 4 is supported natively. It does just fine with our production deployments even though it's "purely round robin." Most of the time, pods under the same service have roughly equal capacity, so round robin is usually fine. And as far as "poor man's load balancing" goes, it provides liveness and readiness checking, which I'd consider useful on its own versus, say, DNS-based load balancing.
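The observable behavior described above can be sketched as rotating over only the endpoints that pass their readiness probes. This toy model is purely illustrative; kube-proxy actually implements Service balancing with iptables rules, not code like this:

```python
import itertools

class RoundRobinService:
    """Toy model of Service-style L4 balancing: round robin over ready endpoints."""

    def __init__(self, endpoints):
        # addr -> readiness flag, as if fed by kubelet readiness probes
        self.endpoints = endpoints
        self._cycle = itertools.cycle(list(endpoints))

    def set_ready(self, addr, ready):
        self.endpoints[addr] = ready  # readiness probe result changed

    def pick(self):
        # Try each endpoint at most once per call, skipping unready ones
        for _ in range(len(self.endpoints)):
            addr = next(self._cycle)
            if self.endpoints[addr]:
                return addr
        raise RuntimeError("no ready endpoints")

svc = RoundRobinService({"10.0.0.1": True, "10.0.0.2": True, "10.0.0.3": True})
svc.set_ready("10.0.0.2", False)        # this pod failed its readiness probe
picks = [svc.pick() for _ in range(4)]  # 10.0.0.2 never receives traffic
```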
I think in some ways your line of thinking punishes Kubernetes for having a somewhat modular architecture and that sucks. If you have a setup that actually does lack ELK or Kube DNS by default with modern versions, then sorry for misjudging. However, I have not seen this yet and haven't seen anyone use Kubernetes without Kube DNS since I started.
I just made the switch from Kubernetes to Kontena last week. I had constant trouble with the etcd cluster, and so far Kontena seems solid and was much faster to set up for production. I'm running 6-8 nodes, so Kubernetes always felt like a little too much for me.
I noticed the same thing about the FAQ; I had all those working in my Kubernetes setup.
I had no problems setting up or managing Kubernetes, though I share the sentiment that it feels a little like overkill.
One problem I did have with Kubernetes was upgrading. In the 1.3 era, I used kube-up to spin up an AWS cluster. There was no upgrade path until 1.5 when Kops began supporting importing kube-up clusters. It almost worked automatically, but it somehow got the wrong setting for one of the subnet configurations. Once I fixed that though, it worked. I gotta admit I was pretty impressed.
The only other problem I can think of is security: everything inside Kubernetes currently defaults to having full API permissions. Obviously this is insane pants-on-head behavior.
Security in general is a bit lacking in k8s, though with RBAC in 1.6 and encrypted Secrets in 1.7, it seems they are working hard to make things better.
In GKE I just use one cluster per permission domain, which is fine for one team (and 2-3 permission domains), but obviously for large orgs would be a massive headache.
One complaint: Kontena's FAQ needs some updating. https://www.kontena.io/docs/faq.html
"However, [Kubernetes] does not include overlay networking, DNS, load balancing, aggregated logging, VPN access or private image repositories"
In fact, it has had all but the last two features built-in and configured by default since I've started using Kubernetes (version 1.3.) And while having an integrated private registry is great, it feels like the usefulness is a bit overstated when configuring one inside of a container environment is already incredibly easy.