
I know we're probably ten years or more out from it (because it'd break way too much software ... I have an application I want to use but can't, due to a 16-bit installer ... in 2016), but I really hope Microsoft eventually puts out a 64-bit-only operating system. It would be lovely to be rid of SysWOW64, Program Files (x86), the shadow registry, the ~3 GB of space wasted on 32-bit code, etc.
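To make the WOW64 plumbing concrete: on 64-bit Windows, a 32-bit process that opens paths under System32 is transparently handed the 32-bit copies in SysWOW64 instead. The function below is a hypothetical, simplified sketch of that redirection rule, not a real Windows API:

```python
def redirect_path(path: str, is_32bit_process: bool) -> str:
    """Hypothetical sketch of WOW64 filesystem redirection.

    A 32-bit process asking for C:\\Windows\\System32 is silently
    given C:\\Windows\\SysWOW64, which holds the 32-bit libraries.
    """
    prefix = r"C:\Windows\System32"
    if is_32bit_process and path.lower().startswith(prefix.lower()):
        return r"C:\Windows\SysWOW64" + path[len(prefix):]
    return path

# A 32-bit process gets redirected:
print(redirect_path(r"C:\Windows\System32\kernel32.dll", True))
# -> C:\Windows\SysWOW64\kernel32.dll
# A 64-bit process sees the real System32:
print(redirect_path(r"C:\Windows\System32\kernel32.dll", False))
# -> C:\Windows\System32\kernel32.dll
```

The registry works similarly, with 32-bit writes shadowed into a separate subtree.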

I intentionally leave off the 32-bit compatibility with my FreeBSD systems, and it's lovely. Of course, it's so much easier there with everything being open source.



Since at least Windows Server 2008 R2, in Server Core mode, you can uninstall 32-bit compatibility:

https://msdn.microsoft.com/en-us/library/windows/desktop/dd3...

Additionally, Nano Server does not include it at all.

Not sure whether it's possible to remove it from the desktop edition yet, but I'm doubtful.


Not yet. There are still way too many programs that are 32-bit only.


I feel just the opposite. I have a pile of the most powerful general-purpose commodity hardware ever built, and it's a damning indictment of the industry that it can no longer run software that used to work fine on tiny machines.


I disagree. Things that made sense on a tiny machine, like the original 8086's address wraparound at 1 MB, led to hacks like the A20 gate [1]. Or timing loops that depended on a processor running at 4.77 MHz, which made the turbo button necessary. Compatibility is a balancing game: you end up keeping bugs around that would otherwise be fixed in a newer architecture. I'm more of the opinion that we're much better off explicitly emulating those quirks, so they run in their own context and it's a lot easier to mitigate damage. While compatibility is a positive thing, improvement is a much more positive one.

[1] https://en.wikipedia.org/wiki/A20_line#A20_gate
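To make the wraparound concrete: a real-mode address is segment × 16 + offset, and the 8086's 20 address lines meant anything past 0xFFFFF wrapped back to zero. The A20 gate existed purely to reproduce that wraparound on later CPUs with wider buses. A small sketch of the arithmetic:

```python
def real_mode_phys(segment: int, offset: int, a20_enabled: bool) -> int:
    """Physical address for segment:offset in x86 real mode.

    With A20 disabled, the top address line is forced low,
    masking the result to 20 bits -- the 8086's behavior.
    """
    addr = (segment << 4) + offset    # can produce a 21-bit result
    if not a20_enabled:
        addr &= 0xFFFFF               # wrap at 1 MB, like an 8086
    return addr

# FFFF:0010 wraps to 0 on an 8086...
print(hex(real_mode_phys(0xFFFF, 0x0010, a20_enabled=False)))  # 0x0
# ...but reaches 0x100000 (the start of the HMA) with A20 enabled.
print(hex(real_mode_phys(0xFFFF, 0x0010, a20_enabled=True)))   # 0x100000
```

Software that relied on the wrap (and software, like HIMEM.SYS, that relied on it *not* happening) is exactly the kind of bug-compatibility the parent is describing.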


I agree that virtualization is better than support for old instruction sets in new cores, but users should expect that virtualization to be ever-present and reliable, rather than mutely accepting it when tools randomly stop working.


> and it's a damning indictment of the industry that it can no longer run software that used to work fine on tiny machines

On the contrary, I think it's an indictment of the industry that we still spend hardware effort on running ancient software, and continue to burden the most widespread architecture in the world with 1970s nonsense, when virtualization is the obvious solution.

Plenty of people want to run games for the 6502 architecture on their PCs, yet Intel would be out of their mind if they added a 6502 to the die. Why don't we treat the Intel 8086 the same way?
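The 6502 comparison holds precisely because software emulation is so cheap compared with silicon: interpreting an instruction is a few lines of code. A minimal sketch below handles a single real 6502 opcode (0xA9, LDA immediate); the surrounding state handling is simplified for illustration:

```python
def step(memory: bytes, pc: int) -> dict:
    """Interpret one 6502 instruction (only LDA #imm, opcode 0xA9).

    Returns the new accumulator, flags, and program counter.
    A real emulator would dispatch on all 151 documented opcodes
    (and, for accuracy, the undocumented ones too).
    """
    opcode = memory[pc]
    if opcode == 0xA9:                # LDA #imm: load accumulator
        a = memory[pc + 1]
        return {
            "a": a,
            "z": a == 0,              # zero flag
            "n": bool(a & 0x80),      # negative flag (bit 7)
            "pc": pc + 2,             # two-byte instruction
        }
    raise NotImplementedError(f"opcode {opcode:#04x}")

state = step(bytes([0xA9, 0x80]), 0)  # LDA #$80
print(state)  # accumulator 0x80, N set, Z clear
```

Nobody asks for a 6502 in hardware because this is good enough; the argument is that an 8086 mode deserves the same treatment.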


Why? Most of the older environments can be perfectly replicated in virtualized form...


Not perfectly enough for some uses.

For example, AFAIK there's still no emulator that can correctly run the 8088 MPH demo.


There's also no PC manufactured in the past two decades that can run it. Real mode has nothing to do with the challenges necessary to emulate that demo.

(There's also the fact that the 8088 MPH demo is not a particularly compelling use case, even if the hardware were present.)


If they wanted to, they should have done it in Windows 10. They can't now, because they promised to support every device it could be installed on for that device's lifetime, and some of those devices are 32-bit.


Nano Server is it: no 32-bit support, and no other legacy either.



