Nice nostalgic piece. I have a lot of fond "favorite chip" memories from that era.
Not to be too pedantic, but I would contend that at the time it was pretty clear to enthusiasts what the differences were. Everyone in the industry was paying attention to 486s and the cost of a genuine Intel chip. The FDIV bug was on the evening news for weeks. AMD-and-Cyrix-vs-Intel debates were common.
I agree that it is not obvious now that Pentium came after 486, but at the time, it was clear.
My dad managed some key school district functions across the Mojave Desert, and we had a little place near US-395 from which we could go fishing, hiking, or travel to go skiing. CBS News radio was our constant companion on long road trips across the entire West, even when cassettes and CDs abounded.
News would keep us informed and Mystery Theater would keep us entertained on the drive back home, exhausted, on Sunday evenings.
He's old now, but last year he drove from southern California to Wyoming to fish. He said the fishing wasn't any good when he got there, so he turned around and came home before the next day dawned. I wonder what he will listen to this spring.
0.1 GB per full-attention layer, and "The model has 60 transformer layers: 45 GatedDeltaNet (linear attention) + 15 standard full attention." So, 15 × 0.1 GB = 1.5 GB.
It's not a downgrade to security for any password length:
- If the password is so short that knowing its length makes brute-forcing noticeably faster, it is so short that the total time taken would be very short regardless.
- In all other cases, it removes such a small fraction of time needed (on the scale of removing one age-of-the-universe from a process that would otherwise take thousands of ages-of-the-universe) that it doesn't change any infeasible timescale to a feasible one.
So either the information isn't needed, or it won't help. So not a security decrease.
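A rough sketch of the argument above: with a 95-symbol printable-ASCII alphabet (an assumed charset, not something the comment specifies), knowing the exact length only lets an attacker skip the shorter-length candidates, which are a sliver of the total space.

```python
# Assumes a 95-character printable-ASCII alphabet (an assumption for
# illustration; any alphabet size gives a similar ratio).
ALPHABET = 95

def total_space(max_len: int) -> int:
    """Number of candidate passwords of every length from 1 up to max_len."""
    return sum(ALPHABET ** k for k in range(1, max_len + 1))

def fraction_saved(n: int) -> float:
    """Share of the work an attacker skips by knowing the length is exactly n."""
    return 1 - ALPHABET ** n / total_space(n)

for n in (8, 12, 16):
    print(n, fraction_saved(n))  # roughly 1% of the work, at every length
```

The longest length dominates the sum geometrically, so knowing the length shaves only about 1/ALPHABET of the work no matter how long the password is, which is the "one age-of-the-universe out of thousands" point above.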
I noticed that, too. However, I will say that having a couple weeks to watch Microsoft through the lens of the original post, I am inclined to adopt it as my current model for Microsoft's actual agenda.
As a result, I do not currently think that Microsoft is consumer-oriented. They have reinforced my opinion by making anti-consumer changes to Xbox and then saying they were pro-gamer. Seems like a pattern.
Maybe they will prove me wrong; I am sunsetting my final host that's running their software soon.
I am not convinced that Microsoft has all of a sudden decided to try again to become a consumer-oriented company based on something Pavan Davuluri says.
It reads like a user who tried Wayland again last week, found the same issues, and wrote a piece trying to summarize why they remain sad after 17 years of waiting for Wayland to address them.
In X11, the problem was the X server. Now, X11's design philosophy was hopelessly broken and needed to be replaced, but it wasn't replaced. As you correctly point out, there is no "Wayland"; Wayland is a methodology, a description of how one might implement the technologies necessary to replace X11.
This has led to hopeless fracturing and replication of effort. Every WM is forced to become an entire compositor and partial desktop environment, which they inevitably fail at. In turn application developers cannot rely on protocol extensions which represent necessary desktop program behavior being available or working consistently.
This manifests in users feeling the ecosystem is forever broken, because for them, on their machine, some part of it is.
There is no longer one central broken component to be fixed. There are hundreds of scattered, slightly broken components.
I maintain Red Hat backed it as part of a play to make it harder to develop competing distros that aren’t basically identical to Red Hat’s product.
Their actions on systemd, Wayland, plus gnome and associated tech, sure look like classic “fire and motion”. Everyone else has to play catch-up, and they steer enough incompatible-with-alternatives default choices that it’s a ton of work and may involve serious compromises to resist just doing whatever they do.
Wayland is far more aligned with the Unix philosophy than Xorg ever was. Xorg was a giant, monolithic, do everything app.
The Unix philosophy is fragmentation into tiny pieces, each doing one thing and hoping everyone else conforms to the same interfaces. Piping commands between processes and hoping for the best. That's exactly how Wayland works, although not in plain text because that would be a step too far even for Wayland.
Some stuff should not follow the Unix philosophy, PID 1 and the compositor are chief examples of things that should not. It is better to have everything centralized for these processes.
In X you have the server, the window manager, the compositing manager, and the clients, all coupled by a very flexible protocol. This seems nicely split and aligned with the Unix philosophy to me. It also works very well, so I do not think this should be monolithic.
This is quite wrong? Some features are blocked from being implemented because Wayland refused to define a protocol for everyone to implement. Window positioning is a recent example of how progress can get blocked for many years because of Wayland.
This is the same cop-out people use to talk about "Linux."
"No, Linux isn't bad, your distro/DE is bad, if you used XYZ then you wouldn't have this problem." And then you waste your time switching to XYZ and you just find new problems in XYZ that you didn't have in your original distro.
I'm genuinely tired of this in the Linux community. You can't use the "Wayland" label only for the good stuff like "Wayland is good for security!" and "Wayland is the future" and then every time someone complains about Wayland, it is "no, that's not true Wayland, because Wayland isn't real."
But that's what we signed up for in the Linux world. Linux systems are a smorgasbord of different components by design, and that means being specific. I'm using KDE Plasma 6; that's a different experience than someone using Cosmic or Sway.
Furthermore, Wayland is, first and foremost, a protocol, not standalone software like the Linux kernel. Wayland is no more than an API transmitted over a wire protocol. So properly criticizing Wayland means criticizing the abstraction this API creates and the constraints it introduces.
Could you briefly explain in simple terms, why I as a user would care about any of that? I want stuff to work. With Wayland, it largely doesn't. I don't terribly care about the semantics of it.
They are in a completely different league when you account for the other half of the story you missed:
"The company claims 5C supercharging capability, with a 10% to 80% charge completing in about 11 minutes."
Assume your worst case of 350 miles, 80% of that is 280 miles. Getting to 280 miles of no-exaggeration-real-world range in 11 minutes is actually game changing.
An 11 minute break after each chunk of 280 real miles of continuous driving does not feel like an interruption on a road-trip. 33 minutes every 200 miles definitely does.
11 minutes once per week to cover 5 days of 30 real miles of each-way commute is a forgettable amount of time.
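A quick back-of-the-envelope check of the numbers above (the 350-mile full range is the parent's worst case; the 11-minute 10%-to-80% time is the company's claim):

```python
# 350 mi full range and 11 min for 10% -> 80% are figures from the thread,
# not measured values.
FULL_RANGE_MI = 350
CHARGE_MIN = 11

range_at_80 = 0.80 * FULL_RANGE_MI           # miles available when you pull out
range_added = (0.80 - 0.10) * FULL_RANGE_MI  # miles each stop actually adds

# Charging overhead amortized over the miles each stop buys you:
overhead_min_per_100mi = CHARGE_MIN / range_added * 100

print(round(range_at_80), round(range_added), round(overhead_min_per_100mi, 1))
```

One nit the sketch surfaces: each 10%-to-80% stop adds about 245 miles rather than 280, since you arrive with 10% in the pack; the amortized overhead is still only about 4.5 minutes of charging per 100 miles driven.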
True and false. You don't need anywhere near a million samples to get a good approximation for your normal distribution. Far fewer than 100 is sufficient (and 14 is a fine place to start if you are really constrained on data and need to get to 90-10).
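An illustrative sketch of that point: even small samples pin down a normal distribution's mean and spread reasonably well. The sample sizes 14 and 100 echo the comment; the "true" parameters here are made up for the demonstration.

```python
# Made-up true parameters; the point is how close small-sample estimates land.
import random
import statistics

random.seed(0)  # fixed seed so the sketch is repeatable
TRUE_MU, TRUE_SIGMA = 10.0, 2.0

for n in (14, 100, 10_000):
    xs = [random.gauss(TRUE_MU, TRUE_SIGMA) for _ in range(n)]
    print(n, round(statistics.mean(xs), 2), round(statistics.stdev(xs), 2))
```

The estimation error of the mean shrinks like sigma / sqrt(n), so going from 100 samples to a million buys you only a 100x tighter estimate for a 10,000x increase in data, which is why "far fewer than 100" is often enough for a rough fit.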