That's not great, but it's better than the opposite. I've been in networking for ages and have observed that most networking people will, as a rule, make networks as complicated as possible.
Why have one layer of NAT when you can have four or five? Why not invent a bespoke addressing scheme? Why not cargo cult the documentation and/or scripts and config files from Stack Overflow or ChatGPT?
Under-engineering is an easier problem to solve than over-engineering.
I took an "Architecting on AWS" class and half of the content was how to replicate complicated physical networking architectures on AWS's software-defined network: layers of VPCs, VPC peering, gateways, NATs, and impossible-to-debug firewall rules. AWS knows their customers tho. Without this, a lot of network engineers would block migrations from on-prem to AWS.
Ages ago I deployed a Sophos virtual appliance in AWS so I could centrally enforce some basic firewall rules in a way that my management could understand. There was only one server behind it; the same thing could have been achieved with the standard built-in security rules. I think about it often.
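Those built-in rules are, at their core, just a stateless allow-list evaluated per packet. A toy sketch of that model (the rule set and function names here are illustrative, not the real AWS API):

```python
import ipaddress

# Toy model of a security-group-style ingress allow-list:
# (protocol, port, permitted source CIDR). Example rules only.
RULES = [
    ("tcp", 443, ipaddress.ip_network("0.0.0.0/0")),    # HTTPS from anywhere
    ("tcp", 22,  ipaddress.ip_network("10.0.0.0/16")),  # SSH from inside the VPC only
]

def is_allowed(proto, port, src_ip):
    """Allow if any rule matches; default deny otherwise."""
    src = ipaddress.ip_address(src_ip)
    return any(
        proto == r_proto and port == r_port and src in r_net
        for r_proto, r_port, r_net in RULES
    )

print(is_allowed("tcp", 443, "203.0.113.9"))  # True: HTTPS open to the world
print(is_allowed("tcp", 22, "203.0.113.9"))   # False: SSH only from 10.0.0.0/16
```

For one server, two rules like this cover everything the appliance was doing.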
I do find Azure's implementation of this stuff pretty baffling. It's as if networking concepts were digested by software engineers and then regurgitated into a hierarchy that makes sense to them. Not impenetrable, just weird.
I had a very interesting conversation with an AWS guy about how hard they tried to make sure things like Wireshark worked the same inside AWS, because they had so much pushback from network engineers who expected their jobs to be exactly the same inside AWS as on-prem.
The main source of overcomplex networking that I've ever seen is the "every VPC gets a 10.0.0.0/8" approach replicated everywhere, so you suddenly have a hard time interconnecting the networks later.
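The interconnect problem is easy to demonstrate with the stdlib `ipaddress` module (prefixes below are made-up examples):

```python
import ipaddress

# Two teams each grabbed all of 10/8 for their VPC.
vpc_a = ipaddress.ip_network("10.0.0.0/8")
vpc_b = ipaddress.ip_network("10.0.0.0/8")

# Overlapping prefixes can't be peered or routed between
# without NAT layers or a painful renumbering project.
print(vpc_a.overlaps(vpc_b))  # True

# Carving distinct /16s out of the same space up front avoids the collision.
vpc_a2 = ipaddress.ip_network("10.1.0.0/16")
vpc_b2 = ipaddress.ip_network("10.2.0.0/16")
print(vpc_a2.overlaps(vpc_b2))  # False
```

A shared IP address plan, however crude, is what prevents this.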
I think this part is somewhat legitimate. Every network engineer knows "it's always DNS," to the point that there are jokes about it. DNS is a brittle and inflexible protocol that works well when it's working, but unfortunately network engineers are the ones who get called when it's not.
A superior alternative to DNS would help a lot, but getting adoption for something at that level of the stack would be very hard.
I find that a lot of "it's always DNS" comes down to "I don't know routing beyond the default gateway" and "I never learnt how to run DNS". Might be a tad elitist of me, I guess, but a solid DHCP, routing, and DNS setup makes for a way more reliable network than anything else.
DNS just tends to be the part that is visible to the random desktop user when things fail.
>Might be a tad elitist of me, I guess, but solid DHCP, routing, and DNS setup makes for way more reliable network than anything else.
Depends on the network. If you are talking about a branch office, for sure.
>I find that a lot of "it's always DNS" falls down to "I don't know routing beyond default gateway"
I see it mostly with assumptions. Like DNS Server B MUST SURELY be configured the same as DNS Server A, thus my change will have no unexpected consequences.
Solid management of the services is important, yes. So is being prepared for when requirements change. I remember to this day when a bunch of small (rack-scale) deployments suddenly needed heavy-grade DNS because one of the deployed projects generated a ton of DNS traffic. My predecessor had set up dnsmasq, and I had no reason to change it before that; afterwards we had to set up a total of six DNS servers per rack (one primary authoritative, two secondaries updating themselves from the primary, three recursive).
I would say the situation also changes a lot if you know how to, and can, deploy anycast routes for core network services: for example, fc00::10-12 are always the recursive nameservers, and you configure routing so that each host picks up the closest one.
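The routing part can't be shown in a few lines, but the addressing convention can: the anycast addresses from the comment sit inside the IPv6 unique-local range fc00::/7 (RFC 4193), so they never collide with globally routed space. A quick check:

```python
import ipaddress

# The comment's convention: fc00::10 through fc00::12 are always the
# recursive resolvers; anycast routing delivers each host to the nearest one.
anycast_resolvers = [ipaddress.ip_address(f"fc00::{i:x}") for i in (0x10, 0x11, 0x12)]

# All three fall inside the unique-local range fc00::/7.
ula = ipaddress.ip_network("fc00::/7")
print(all(addr in ula for addr in anycast_resolvers))  # True
```

Every host can then hardcode the same resolver addresses regardless of which rack or site it lives in.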
I have one customer who runs a large wireless network. When designing it, he had obviously taken lessons from US WISP operators on Facebook managing trailer parks, and implemented a really low-rent, high-complexity customer hand-off circuit from that paradigm for one of his most valuable business customers. It worked, but it was way more complex than he needed.
>most networking people will, as a rule, make networks as complicated as possible
Yeah, we are kinda like that. So many toys...why can't I use them all.
Seriously, tho...the worst is when you go in and you can tell immediately "oh, the guy running this is trying to get his CCIE cert", because there's all sorts of weirdness you'd never/rarely do in a prod network, but it's on the cert test so let's try it out. YOLO!