Oh, so having future commercial packages that many people depend on, like Adobe Photoshop or 3D Studio Max ... packaged and distributed as a universal binary for every major distribution and every architecture ... is of no importance whatsoever?
This is a major roadblock for commercial third-parties that would like to distribute ports of their software for Linux.
You can do this without FatELF. Put everything you have into a single package and add "shell scripts and flakey logic to pick the right binary and libraries to load". (He fails to mention that FatELF is little more than this, only hardwired into the kernel.)
What you are demonstrating is the userland toolchain. What was suggested above is an improvement by not requiring meddling with the kernel. Nothing prevents a userland toolchain for Linux from supporting what you are describing above without kernel modifications.
What you are demonstrating is the userland toolchain.
No, what I'm demonstrating is an end-to-end architecture-transparent platform, which includes a complete userspace toolchain, a universal binary format, and necessary kernel and dynamic linker support.
What was suggested above is an improvement by not requiring meddling with the kernel
I'm not sure I understand how "meddling" in the kernel is a bad thing when it provides for user-transparent execution of multi-architecture binaries, including transparent emulation of binaries that lack support for the host architecture.
It's not as if it's complex or dangerous to parse the Mach-O or FatELF formats, and if you take a page from Mac OS X or qemu, you can even do transparent emulation in userspace.
Nothing prevents a userland toolchain for Linux from supporting what you are describing above without kernel modifications.
How is it useful to build an easy-to-use multi-architecture binary if the kernel can't actually execute it?
Why are you afraid of doing simple parsing[1] of the binary? The kernel already does this -- how do you think ELF loading and shell script execution works?
I can imagine this conversation 30 years ago[2] -- "Why should we add shell-script shebang parsing/execution support to the kernel? Why not just glue together loader executables that load the shell script?"
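The shebang mechanism referred to above is easy to demonstrate from userspace. A minimal sketch: the kernel's execve() reads the `#!` line itself and runs the named interpreter with the script's path appended, which becomes visible if you use /bin/echo as the "interpreter".

```shell
#!/bin/sh
# Demonstrate the kernel's shebang handling: execve() reads the `#!` line
# itself and runs the named interpreter with the script's path appended.
# Using /bin/echo as the "interpreter" makes that mechanism visible.
set -e
tmp=$(mktemp)
printf '#!/bin/echo interpreted-by\n' > "$tmp"
chmod +x "$tmp"
out=$("$tmp")   # the kernel execs: /bin/echo interpreted-by <path-to-script>
echo "$out"
rm -f "$tmp"
```

The printed line is `interpreted-by` followed by the script's own path -- exactly the argument list the kernel constructed.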
It shouldn't be hard to make a binfmt_misc handler that would extract the correct ELF from the FatELF file (or rather a simple tar) and feed it to the kernel, so you could just type ./fatelf-something. You could also make it a shell script with the binaries embedded in it, which chooses the correct one and executes it. Both of these solutions work without a single line of kernel code (really), and I can't see anything FatELF offers that they don't.
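For concreteness, here is a rough sketch of the second option: a self-selecting wrapper built by concatenating a short header script with a tar archive of per-architecture binaries (the same trick makeself uses). All file names are made up, and the payloads below are stand-in scripts rather than real ELF binaries.

```shell
#!/bin/sh
# Build a demo "fat" executable: a header script followed by a tar archive
# containing one binary per architecture (bin.<arch>). The header extracts
# the member matching `uname -m` and execs it.
set -e
arch=$(uname -m)
work=$(mktemp -d)

# Stand-in payloads; a real build would put one ELF per target here.
printf '#!/bin/sh\necho "hello from %s"\n' "$arch" > "$work/bin.$arch"
printf '#!/bin/sh\necho "hello from elsewhere"\n'  > "$work/bin.other"
chmod +x "$work"/bin.*
tar -C "$work" -cf "$work/payload.tar" "bin.$arch" bin.other

fat="$work/fat-demo"
cat > "$fat" <<'HEADER'
#!/bin/sh
# Everything after the __PAYLOAD__ line is a tar archive.
set -e
arch=$(uname -m)
skip=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
dir=$(mktemp -d)
tail -n "+$skip" "$0" | tar -C "$dir" -xf - "bin.$arch"
exec "$dir/bin.$arch" "$@"
__PAYLOAD__
HEADER
cat "$work/payload.tar" >> "$fat"
chmod +x "$fat"

"$fat"    # runs the payload matching this machine's architecture
```

The awk `exit` stops reading at the marker line, so the binary tar data after it is never parsed as text; `tail` then streams everything past that line boundary into tar.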
If a proprietary vendor is willing to do all the work to cross-compile, package, and test on a variety of architectures and distributions, I can't believe they would be deterred by the need to link a binary for each package. With FatELF they would have to do that anyway, and then glue all those binaries together before building all their packages.
Yes, and NeXT could have asked its ISVs to do the same thing back when they ran on 040, x86, SPARC, and PA-RISC.
The motivation, though, was not just to make things easier for the ISVs, but also to make things easy for the ISVs' customers. Simplifying the process of buying and installing software was the real thrust of the endeavor. [1] There is real economic value in making things easy for customers.
Why, for example, would web sites offer 1-Click ordering, when entering a credit card number is just a few more keystrokes? Because simplicity makes money flow, and is good for a market.
The Year of the Linux Desktop will never come until these issues become important to the community. Which is to say that it will never happen, because the community, as a whole, has no economic incentive to lower the bar for customers like this.
Ryan Gordon probably could have figured that out sooner, but I'm glad he was an optimist for a little while, at least.
[1] There was also a secondary benefit for system administrators. You could install a single copy of a 4-way-fat-binary on an NFS share and a workstation running NeXTstep of any architecture could launch it over the network.
Except Linux users don't use four platforms on the desktop; they use (nearly entirely) one. Those using x86-64 are almost all clued in enough to grab the correct architecture. And even if they weren't, you could get around this simply by making your installer a shell script that chooses the right dpkg based on the output of uname -m. Hardly overly complex.
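A sketch of that installer, with hypothetical package names -- the only moving part is a case statement mapping the kernel's machine name onto Debian architecture names:

```shell
#!/bin/sh
# Hypothetical installer stub: choose the right .deb for this machine.
# Package file names are made up for illustration.
pick_deb() {
    case "$1" in
        x86_64)  echo "myapp_1.0_amd64.deb" ;;
        i?86)    echo "myapp_1.0_i386.deb"  ;;
        aarch64) echo "myapp_1.0_arm64.deb" ;;
        *)       echo "no package for architecture: $1" >&2; return 1 ;;
    esac
}

deb=$(pick_deb "$(uname -m)") || exit 1
echo "installing $deb"
# A real installer would now run: dpkg -i "$deb"
```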
I guess that's fine, as long as nobody has ambitions for Linux adoption to grow beyond the relatively small already-clued-in demographic. There's nothing wrong with wanting to stay small.
FWIW, I used Debian/PPC at home for years. But, then, I was also among the small set of people who actually ran NeXTSTEP on a "gecko" PA-RISC machine and a SPARC laptop made by Tadpole. That's the story behind my perspective, anyway.
This may be the Achilles' heel of the freenix world: since almost anything can be worked around with some scripting, people actively argue against putting facilities like this in the right place in the system's architecture. Add 1 to the Cathedral's score.
... Except why exactly are the kernel and libc the right place to put this?
Remember that kernel code runs outside memory protection. Moving complexity out of the kernel is almost always a win when there are no pressing performance reasons to do otherwise. This code would run only once each time a program is started, so performance is certainly not a good reason to put it in the kernel.
What I really want to reiterate here is that using FatELF would in absolutely no way make shipping software to multiple Linux platforms easier. The reason it is hard is that when someone says Linux, pretty much the only guarantee you have about the system is that it runs a Linux kernel, and even that can potentially be so old or so strange that you can't trust anything about it. All FatELF gives us is a way to stuff multiple binaries into a single file, and we can already do that without much fuss. It does not in any way give us true "Universal Binaries", because to get those we would have to either agree on a common subset of libraries a Linux system always ships (and agree on indefinite binary compatibility for those libraries), or ship a meaningful portion of the entire platform in every binary.
If you want what FatELF gives, you can get it with a 20-line shell script concatenated onto a bundle of binaries, and it could be argued that that is the cleaner place to put it, considering it's an ugly hack either way. People look at FatELF and see something that isn't there.
The architectural location that I was alluding to is some fat binary format itself, because this has the greatest impact on customer and user experience. How you parcel out the supporting code between kernel and userland would flow from that invariant...if I were Linus for a day.
Or, perhaps even better, adopt a small, architecture-neutral IR like LLVM-BC and a workstation of any architecture could choose to compile it just-in-time at launch, or at install time.
Either way, the greatest tragedy here is small thinking. I would love to see Linux rise to the level where users don't need to know what an instruction-set architecture is.