It's not the deficit itself, it's the quantitative easing that is used to pay for most of the deficit. If the US dollar weren't a reserve currency, printing more money would have a much larger inflationary impact.
If you're using a US English keyboard layout, it's the default and you won't have to deal with changing it.
If it does become a problem, the most likely reason is that some Raspberry Pi images have defaulted to British English keyboard layouts. Otherwise you may be sailing through life unburdened by what can be a major pain to anyone anywhere else in the world, like a resident of Arizona wondering why the rest of the world keeps messing with their clocks.
Hah, I submitted the same story here just a few hours before you posted that. I don't know whether that's actually an example of this particular problem or not, though, since I'm not sure whether they have a website that can display the tickets.
I don't see any evidence that this is a user-driven change.
For years now, often multiple times with the same vendor, I've been installing some vendor's software, using it to complete a purchase that I had started in a web interface, then uninstalling the software, all so I could take advantage of an unrealistically good promotion. I'm not talking about the type of savings you might see in an exceptionally good holiday promotion, which eats into most, if not all, of the margin on the transaction. I'm talking about the type of promotion that would be used to promote a credit card, bank account, or gambling platform-- the kind that costs months' worth of income from a customer but is worthwhile because the customer will be milked for years to come.
This appears to be more related to modern security features that lock the vendor out of your computer, but lock you out of your phone, shifting which interface gives the vendor the advantage in future transactions.
Who cares about usability any more? It's more important to make interfaces that show off their modernism by hiding interactive elements behind a flat design. Some users have figured out that they can work around this with keyboard navigation, but you can defeat them by throwing an accessibility overlay on it, which makes keyboard navigation difficult, too.
tl;dr: The lie here is the assumption that the US has, or has ever had, a free market for wired internet service providers.
The article initially does a good job of describing the situation, but gets a bit confused when it gets to the history of the US, especially this line: "This is what happens when you let natural monopolies operate without oversight." What it's discussing is not natural monopolies; it's discussing public utilities, which are granted monopolies expressly through regulation, not despite it. Also, the US has a lot of oversight of wired ISPs. The prices are almost always approved by regulators.
A good example of a natural monopoly is Google search. It's pretty common for people to get frustrated by it, and look for other search engines. There's also multiple companies trying to compete with it. Normally this would mean that users would migrate to the competitors, but Google's search algorithms have been so good that practically every user has stayed with Google.
Natural monopolies are still easily disrupted if the naturally-occurring barrier changes. For example, Internet Explorer had a natural monopoly, due to Microsoft's "embrace and extend" strategy giving it many capabilities that other web browsers didn't have. When the browser market quickly migrated from a feature-first market to a security-first market, Internet Explorer was quickly overtaken by Chrome and Firefox. There's a reasonable chance the same thing will happen with Google Search, as the market for its search algorithm is overtaken by the market for LLM-based web searches, which Google is pretty bad at.
Anyway, the reason Comcast or Charter is the only one that provides cable internet in your area isn't because it's too expensive for anyone else to deploy cables. At the margins they operate, it would be well worthwhile to invest in a parallel infrastructure, but it's downright prohibited almost everywhere in the US. In fact, they may own the rights to lay cable, despite having never laid any. This is the case where I live, for the phone company, which plays by similar rules.
Fixed-wireless internet providers are starting to provide some competition, as backhauls have improved enough that cellular providers can compete with wired internet providers. T-Mobile is currently offering $20/mo fixed-wireless add-on plans, with a five-year price guarantee. To compete with the fixed-wireless market, Comcast has launched a service called NOW Internet, which starts at $30/mo with a similar price guarantee and no add-on requirement.
Speaking of "starting at", a large source of high prices is the common use of FUD to pressure users into paying for more than they need, or can even use. Very few households peak at more than even 40 Mbps (https://www.wsj.com/graphics/faster-internet-not-worth-it/), and the starting price of almost every provider is above that, but most customers have been talked into higher-tier plans.
The only web hosts that regularly provide data faster than that are video game distributors, so if you are in the type of household that would like to download game updates in minutes, instead of tens of minutes, while also watching multiple 4K video streams, then comparing other plans may be worthwhile. Otherwise, stick with the absolute cheapest plan available from all the providers that serve your area. (And, if you're big on multiplayer gaming, picking the ISP with the lowest latency will be beneficial, but all plans from a given service will have the same latency.)
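To put rough numbers on the "minutes vs tens of minutes" point, here's a back-of-the-envelope sketch (the 10 GB update and the two speeds are made-up examples, not anyone's actual plan):

```python
def download_minutes(size_gb, speed_mbps):
    # gigabytes -> gigabits (x8) -> megabits (x1000), divided by
    # megabits/second, converted from seconds to minutes
    return size_gb * 8 * 1000 / speed_mbps / 60

# A hypothetical 10 GB game update:
print(round(download_minutes(10, 40), 1))   # at 40 Mbps  -> 33.3 minutes
print(round(download_minutes(10, 400), 1))  # at 400 Mbps -> 3.3 minutes
```

So a 10x price-tier jump only matters if you actually saturate it, which, per the WSJ data above, most households never do.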
> The only web hosts that regularly provide data faster than [40Mbps] are video game distributors
No? I've been trying to download my MyMiniFactory library[0] and I'm currently getting 25MBps over 5 downloads. A single download will easily do 15MBps.
[0] Which sucks, even at high speed - they have no API, no bulk download, and you're limited to 6 items at a time. I have to click through 1000+ items with easily 5000+ sub-items and individually download each one.
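For what it's worth, the concurrency cap is scriptable with a semaphore; here's a minimal sketch of the pattern (the `fetch` body is a placeholder, since as noted there's no API, and the real download step would have to drive the website itself):

```python
import asyncio

async def fetch(item, sem):
    async with sem:
        # placeholder for the real per-item download logic
        await asyncio.sleep(0)
        return item

async def download_all(items, limit=5):
    # stay under the site's 6-concurrent-items cap
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(fetch(i, sem) for i in items))

results = asyncio.run(download_all(range(10)))
```

It doesn't remove the clicking problem, but it does keep a fixed number of downloads in flight, which matches the ~5-download throughput you're seeing.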
Aren't NPUs designed to run only small models? From what I've seen, most NPUs don't have the architecture to share workloads with a GPU or CPU any better than a GPU or CPU can share workloads with each other. (One exception being NPU instructions that are executed by the CPU, e.g. RISC-V cores with IME instructions being called NPUs, which speed up operations already happening on the CPU.)
You can share workloads between a GPU, CPU, and NPU, but it needs to be proportionally parceled out ahead of time; it's not the kind of thing that's easy to automate. Also, the GPU is generally orders of magnitude faster than the CPU or NPU, so the gains would be minimal, or completely nullified by the overhead of moving data around.
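By "proportionally parceled out ahead of time" I mean something like this sketch -- statically splitting rows of work by each device's measured throughput (the device names and throughput numbers here are made up for illustration):

```python
def split_rows(n_rows, throughputs):
    # parcel rows out proportionally to each device's throughput
    total = sum(throughputs.values())
    shares = {dev: n_rows * t // total for dev, t in throughputs.items()}
    # hand any rounding leftover to the fastest device
    leftover = n_rows - sum(shares.values())
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += leftover
    return shares

# If the GPU is ~10x the CPU and ~100x the NPU, the split is lopsided:
split_rows(1000, {"gpu": 90, "cpu": 9, "npu": 1})
# -> {"gpu": 900, "cpu": 90, "npu": 10}
```

Which is the point: when one device gets 90% of the work anyway, the coordination and data-movement overhead can easily eat the other 10%.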
The largest advantage of splitting workloads is often to take advantage of dedicated RAM, e.g. stable diffusion workloads on a system with low VRAM but plenty of system RAM may move the latent image from VRAM to system RAM and perform the VAE decode there, instead of on the GPU. With unified memory, that isn't needed.
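The placement decision itself is simple; a sketch of the logic, with made-up numbers (the 48x latent-to-image blow-up factor is a rough assumption for illustration, not a measured value):

```python
def pick_vae_device(latent_bytes, vram_free_bytes, sysram_free_bytes):
    # The decoded image is much larger than the latent (upscaled
    # resolution, more channels), so decode wherever the OUTPUT fits.
    decoded_bytes = latent_bytes * 48  # rough expansion factor, assumed
    if decoded_bytes <= vram_free_bytes:
        return "gpu"
    if decoded_bytes <= sysram_free_bytes:
        return "cpu"
    raise MemoryError("decoded image does not fit in VRAM or system RAM")

# ~1 MB latent: fits in 100 MB of free VRAM, but with only 10 MB free
# the decode gets pushed to system RAM instead.
pick_vae_device(1_000_000, 100_000_000, 8_000_000_000)  # -> "gpu"
pick_vae_device(1_000_000, 10_000_000, 8_000_000_000)   # -> "cpu"
```

On a unified-memory machine the two pools are the same, so the first branch always wins and the copy never happens.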