Hacker Times

Everything at the lowest levels needs to be tightened up now.

Buffer overflows in trusted code have to go. This means getting rid of the languages with buffer overflow problems, mostly C and C++. Fortunately we now have Go and Rust, plus all the semi-interpreted languages, and can do it.

We need something that runs Docker-like containers and, all the way down to the bare metal, has no unsafe code. We need dumber server boards, with BIOS and NIC code that's simpler and well understood. The big cloud companies (Amazon, Facebook, and Google) are already building their own server boards.

Companies which put in "backdoors" should face felony criminal prosecution. That doesn't happen by accident.

Latest CERT advisory: "Vulnerability Note VU#936356 Ceragon FiberAir IP-10 Microwave Bridge contains a hard-coded root password ... Ceragon FiberAir IP-10 Microwave Bridges contain an undocumented default root password. The root account can be accessed through ssh, telnet, command line interface, or via HTTP. ... CERT/CC has attempted to contact the vendor prior to publication without success."

All Ceragon customers should demand their money back, and their products should be seized at US customs as supporting terrorism.



> Buffer overflows in trusted code have to go. This means getting rid of the languages with buffer overflow problems.

In the meantime, since moving away from C will take years, we need to invest in better exploit-mitigation technology instead of relying on bug-hunting-driven security. That means OS/kernel developers need to start taking security seriously and keeping up with attackers, adding proactive measures instead of slowly reacting only when a new CVE comes out. Sadly, that is far from the reality at the moment.

For example, OpenBSD made headlines for adding W^X to the whole kernel but hackers have already been bypassing W^X on iOS for years:

http://bsd.slashdot.org/comments.pl?sid=6723643&cid=48812833

>> These protections may guard against a (very small subset of) casual attackers, but they're just another minor hurdle for determined attackers.

In addition, we need to move away from signature-based AV towards host-based intrusion detection systems (HIDS). It is no accident that all the feds who left government cybersecurity jobs in recent years moved on to build private companies creating HIDS products, making millions selling them to big corps (FireEye, CrowdStrike, etc.).

The only options available for consumers and the average sys admin are security tools easily bypassed by any semi-sophisticated adversary (for ex: Anti-virus/RKhunter/SELinux/most trusted computing code-integrity systems/etc).


I agree - I've just made a simple pledge, as a ceremony, to commit to not using C/C++ for new projects: http://www.flourish.org/promise/


One problem with most "interpreted languages" you cite as preferable is that they rely on C/C++ run-times for now. This means these languages are only as safe as the underlying C/C++ run-times and the core libraries they rely upon (glibc and the like.)

We need to start to think of replacing underlying runtimes and core libraries with Rust and Go-based alternatives (or similar) to make them safer. Ultra-large goal and probably impossible in the near term, but it should be done.


It may not be that big a task. How much code do you really need to run a container instance? If you could get an airtight Xen-like system, you might not need much of an OS inside each container. Xen already does memory allocation, CPU dispatching, I/O handling, timer handling, and message passing, which is all an OS really needs.

Rust programs running on "libnative" do not, I think, use "libc" any more.


The chair of the computer science department at my university liked to say that CS could just as easily be considered the Computer Security department. This guy wasn't a pragmatic programmer whatsoever, he always claimed he didn't even know how to code (he was a theory guy), but he was right.


I wonder how switching to a different programming language would have prevented a backdoor in network hardware.

Juz 16 is right that the companies concerned should face criminal prosecution; this is not a technological problem, and technology cannot (ultimately) solve it.


>Companies which put in "backdoors" should face felony criminal prosecution. That doesn't happen by accident.

Hard to do that when refusing to put in a backdoor can have even worse consequences.


Removing whole classes of software bugs, even if the cost were bearable, won't make a huge dent.

If you fix the software, the NSA will just backdoor the firmware/hardware (if they haven't already), and even an Android phone running AOSP still contains literally millions of lines of proprietary, unaudited, closed-source code. Not to mention a half dozen microprocessors containing circuitry you can't possibly inspect. PCs are the same these days. Do you really expect these multi-billion-dollar industries to change? There's only a niche commercial interest among consumers for 'simple and well-understood' (read: secure) firmware.

And it's not like the hardware/consumer industry is a lone miserable failure from a security standpoint. The entire Internet stack, from Ethernet and BGP all the way up to HTTP, is complete and utter garbage. Our standards bodies (IETF, W3C, etc.) are failing to protect us, and there's no sign of it getting better because the cost of starting over is simply too damn high.



