Broadly agree but, as with most things, the devil is in the details!
- Xcode. A really rough IDE that has a hard time at scale, choking on package refreshes, many targets, and more. It has a special entitlement, so you can't even binary-patch it if you want to fix it!
- Build systems. Cargo is _much_ easier to work with than SPM.
- Macro support. Codegen is still largely done outside of the macro system, which should tell you something about how usable macros actually are.
- Linter / formatter support. Yeah, it exists; last I checked it's just a good bit worse.
- Performance. There are MANY performance cliffs in Swift; most can be fixed by a sufficiently determined compiler developer, but at that point we've kinda departed from talking about the language as-is.
- Type inference time. Swift's bidirectional type inference causes a ton of choking on complex expressions, which is a real problem with its number one use case, SwiftUI.
- An exacerbating factor on the above, imports are all implicitly module-scoped, meaning that changing a single file means recomputing the types for all files in the module. And because SPM and Xcode have such a rough time with multiple targets, that usually means that a single change can lead to recompiling all Swift files.
- Weirdness around classes and structs? I understand they had to do it for Obj-C compatibility, but it would've been much cleaner if, from the start, they'd had something replacing class, like a fully-sugared `final class Box<T>` covering all uses of class.
I agree that for the most part it _could_ be an easier Rust, but between bidirectional type inference without a cut operator and poor tooling, I struggle to find where it's actually easier in cases where you can't just use TypeScript and dodge all the non-typecheck compilation headaches entirely.
I think the table at the end of the article is more telling.
- Worldwide sales -10% YoY
- China sales -26% YoY
And when you cross-compare Porsche saying they sold more EV powertrains than their gas equivalents against China's newfound foothold as the market leader in consumer electric cars (BYD, NIO, Xiaomi, etc...),
then I think you see an early indication not just of electric car dominance, but of the (very potential) rise of China as the premier automotive superpower.
> Then I think you see an early indication not just of electric car dominance, but of the (very potential) rise of China as the premier automotive superpower.
It’s done, man. Americans are stuck on ICE engines because they’ve been told they’re “car enthusiasts,” while the Chinese have been developing EV technology for years. Meanwhile, European makers are stuck not knowing what to do: make Americans happy or compete with the Chinese. The result: nothing has been done properly. And let’s be real, “car enthusiasts” are going to disappear in one or two generations. Practicality beats enthusiasm for 95% of car use.
German cars have lost their technological edge. They can't even build their own infotainment systems anymore. They're paying billions to China to do it for them.
I can't overstate how catastrophically stupid this is. Paying what they consider smaller competitors real cash to build core software, instead of developing that capability in-house or acquiring a few startups with decent engineering talent.
This isn't just a bad decision. It reveals a completely dysfunctional decision-making process and a total absence of technical ambition.
To the people who say "but Porsche/Mercedes/etc. has this design": the luxury segment doesn't come from nowhere. This is the same reason British luxury cars are essentially gone. It will take some time, but EU-built cars will be in constant decline.
What's even more fun, they don't want to protect their own market the way the Chinese did.
A lot of Americans believed the guy they wanted to believe in, because they didn't want to believe the people they didn't want to believe in.
You're assuming that modern politics across most of the World has something to do with rational, logical thought. Russia, China, Europe, the US, the Middle East - they are all in a quagmire of irrational fractures between the public and the political classes who want power/control for benefit of themselves rather than for the benefit of that public.
It's not unique to the US; it just looks like they're speedrunning it from the outside.
> At over 2,500 t/s, Cerebras has set a world record for LLM inference speed on the 400B parameter Llama 4 Maverick model, the largest and most powerful in the Llama 4 family.
This is incorrect. The unreleased Llama 4 Behemoth is the largest and most powerful in the Llama 4 family.
As for the speed record, it seems important to keep it in context. That comparison is only for performance on 1 query, but it is well known that people run potentially hundreds of queries in parallel to get their money out of the hardware. If you aggregate the tokens per second across all simultaneous queries to get the total throughput for comparison, I wonder if it will still look so competitive in absolute performance.
Also, Cerebras is the company that not only was saying their hardware was not useful for inference until some time last year, but even partnered with Qualcomm with the claim that Qualcomm’s accelerators had a 10x price-performance improvement over their own hardware:
Their hardware does inference in FP16, so they need ~20 of their WSE-3 chips to run this model. Each one costs ~$2 million, so that is $40 million. The DGX B200 that they used for their comparison costs ~$500,000:
You only need 1 DGX B200 to run Llama 4 Maverick. You could buy ~80 of them for the price it costs to buy enough Cerebras hardware to run Llama 4 Maverick.
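The cost comparison above is back-of-the-envelope; a sketch, with all prices being the estimates quoted in this comment rather than vendor figures:

```python
# Rough cost comparison: Cerebras WSE-3 cluster vs. DGX B200 (all prices estimates).
cerebras_chips = 20              # ~chips needed to hold Llama 4 Maverick in FP16
cerebras_chip_cost = 2_000_000   # ~$2M per wafer-scale chip
dgx_b200_cost = 500_000          # ~$500K per DGX B200 system

cerebras_total = cerebras_chips * cerebras_chip_cost
dgx_equivalents = cerebras_total // dgx_b200_cost

print(cerebras_total)   # 40000000 -> $40M for the Cerebras setup
print(dgx_equivalents)  # 80 DGX B200 systems for the same money
```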
Their latencies are impressive, but beyond a certain point, throughput is what counts and they don’t really talk about their throughput numbers. I suspect the cost to performance ratio is terrible for throughput numbers. It certainly is terrible for latency numbers. That is what they are not telling people.
Finally, I have trouble getting excited about Cerebras. SRAM scaling is dead, so short of figuring out how to 3D-stack their wafer-scale chips during fabrication at TSMC, or designing round chips, they have a dead-end product, since it relies on using an entire wafer to throw SRAM at problems. Nvidia, using DRAM, is far less reliant on SRAM and can use more silicon for compute, which is still shrinking.
They're pretty much only known for "Oh Yeah" which was used in "Ferris Bueller's Day Off", but their albums are full of fabulous stereophonic productions.
I like how people in the comments are keen to change the world, but I, more realistically, only focus on gaming the system so I can actually save myself a couple bucks right away.
I set the pickup and destination, exit the app, open another ride app, and wait a few minutes for Uber to notify me that the price went down.
I only give it my initials (instead of my full name) and a phone number, not even my gender. I rarely rate drivers positively; if it's not a negative experience, I skip reviewing, so they don't know I "like" the service.
When it takes more than a minute to find a ride, I cancel the ride and choose the "others" option, as this is de facto the option for "I'll just take a cab," so I get put on the "churn risk" list.
I use a virtual card that I sometimes leave empty so the payment fails after the ride; before the next ride I readjust the virtual card's limit and pay the last bill, so I get added to the "poor and miserable riders" list.
A top of the line Zen core is a powerful CPU with wide SIMD (AVX-512 is 16 lanes of 32 bit quantities), significant superscalar parallelism (capable of issuing approximately 4 SIMD operations per clock), and a high clock rate (over 5GHz). There isn't a lot of confusion about what constitutes a "core," though multithreading can inflate the "thread" count. See [1] for a detailed analysis of the Zen 5 line.
A single Granite Ridge core has peak 32 bit multiply-add performance of about 730 GFLOPS.
Nvidia, by contrast, uses the marketing term "core" to refer to a single SIMD lane. Their GPUs are organized as 32 SIMD lanes grouped into each "warp," and 4 warps grouped into a Streaming Multiprocessor (SM). CPU and GPU architectures can't be directly compared, but just going by peak floating point performance, the most comparable granularity to a CPU core is the SM. A warp is in some ways more powerful than a CPU core (generally wider SIMD, larger register file, more local SRAM, better latency hiding) but in other ways less (much less superscalar parallelism, lower clock, around 2.5GHz). A 4090 has 128 SMs, which is a lot and goes a long way to explaining why a GPU has so much throughput. A 1080, by contrast, has 20 SMs - still a goodly number but not mind-meltingly bigger than a high end CPU. See the Nvidia Ada whitepaper [2] for an extremely detailed breakdown of 4090 specs (among other things).
A single Nvidia 4090 "core" has peak 32 bit multiply-add performance of about 5 GFLOPS, while an SM has 640 GFLOPS.
I don't know anybody who counts tensor cores by core count, as the capacity of a "core" varies pretty widely by generation. It's almost certainly best just to compare TFLOPS - also a bit of a slippery concept, as that depends on the precision and also whether the application can make use of the sparsity feature.
I'll also note that not all GPU vendors follow Nvidia's lead in counting individual SIMD lanes as "cores." Apple Silicon, by contrast, uses "core" to refer to a grouping of 128 SIMD lanes, similar to an Nvidia SM. A top of the line M2 Ultra contains 76 such cores, for 9728 SIMD lanes. I found Philip Turner's Metal benchmarks [3] useful for understanding the quantitative similarities and differences between Apple, AMD, and Nvidia GPUs.
People who sit in their cars an hour every morning, stressed by traffic and pollution before they even set foot in the office, who then attend back-to-back meaningless meetings and email and slack, and then get back in the goddamn car again for another hour if they're lucky, think everyone who doesn't perform this unhealthy and destructive ritual is a slacker.
More than 250W continuous is pointless in many countries because that's the limit for the most common e-bike class (i.e. even if the system can do more, you'd want to de-rate it to 250W).
Bosch makes excellent motors, but they're ripping people off on accessories (a dumb charger costs 89 EUR in the "compact" variant that can do 2A or around 80W and weighs 600g, needing over 10 hours to charge a large battery), and enabling this with aggressive DRM (which also means updates can only be done by a repair shop). They're also trying to make your bike a subscription, of course.
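The "over 10 hours" claim checks out roughly; a sketch assuming a nominal 36V system, a large ~800Wh pack, and ~90% charge efficiency (all assumptions on my part):

```python
charger_current_a = 2
battery_voltage_v = 36                 # nominal e-bike system voltage (assumed)
charger_power_w = charger_current_a * battery_voltage_v  # 72W, marketed as ~80W

battery_wh = 800                       # large pack size (assumed)
charge_efficiency = 0.9                # typical charger/battery losses (assumed)

hours = battery_wh / (charger_power_w * charge_efficiency)
print(round(hours, 1))  # roughly 12 hours for a full charge
```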
So even if all DJI does is become competitive, that's already a win. Bonus points if they actually let you tune (some) motor parameters yourself. The hard part seems to be reliability (a bike that gets mud, water, and washing, together with constant vibration, is a pretty harsh environment), and many Chinese motor brands have a less-than-stellar reputation there. Because this is not something easily testable, building a reputation (and gathering the experience needed to actually build good products) takes time.
The best advice I can give you is to use bigserial for B-tree friendly primary keys and consider a string-encoded UUID as one of your external record locator options. Consider other simple options like PNR-style (airline booking) locators first, especially if nontechnical users will quote them. It may even be OK if they’re reused every few years. Do not mix PK types within the schema for a service or application, especially a line-of-business application. Use UUIDv7 only as an identifier for data that is inherently timecoded, otherwise it leaks information (even if timeshifted). Do not use hashids - they have no cryptographic qualities and are less friendly to everyday humans than the integers they represent; you may as well just use the sequence ID. As for the encoding, do not use base64 or other hyphenated alphabets, nor any identifier scheme that can produce a leading ‘0’ (zero) or ‘+’ (plus) when encoded (for the day your stuff is pasted via Excel).
Generally, the principles of separation of concerns and mechanical sympathy should be top of mind when designing a lasting and purposeful database schema.
Finally, since folks often say “I like stripe’s typed random IDs” in these kind of threads: Stripe are lying when they say their IDs are random. They have some random parts but when analyzed in sets, a large chunk of the binary layout is clearly metadata, including embedded timestamps, shard and reference keys, and versioning, in varying combinations depending on the service. I estimate they typically have 48-64 bits of randomness. That’s still plenty for most systems; you can do the same. Personally I am very fond of base58-encoded AES-encrypted bigserial+HMAC locators with a leading type prefix and a trailing metadata digit, and you can in a pinch even do this inside the database with plv8.
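As a sketch of the "type prefix + base58" shape (without the AES/HMAC layer, which you'd apply to the raw integer before encoding; the `cus` prefix and alphabet choice here are just the conventional Bitcoin-style ones, not anything Stripe-specific):

```python
# Base58 deliberately drops 0, O, I, l to avoid confusable characters, and
# contains no '+' or '-', so encoded IDs survive being pasted into Excel.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def encode_id(prefix: str, n: int) -> str:
    """Encode a bigserial-style integer as a typed, human-quotable locator."""
    if n == 0:
        return prefix + "_" + ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 58)
        digits.append(ALPHABET[rem])
    return prefix + "_" + "".join(reversed(digits))

print(encode_id("cus", 123456789))  # cus_BukQL
```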
diesel engine mechanic here. ive been following the tesla "truck" thing for about a year now and it seems pretty miserable. i dont have the sources, but from several blogs and trade rags (trucking news, professional driver, etc..) ive read the situation seems like it wasnt very well thought out.
- first callout was OTR longhauls. Musk clearly didnt want his trucks stuck as lot-tenders where they would work brilliantly, he wanted publicity on the road.
- longhauls cancelled because obvious infrastructure limitations. batteries and motors failing more frequently due to the load and the range. reports of using tesla consumer vehicle components in the drivetrain. switch to regional routes
- regional routes failing between AZ/CA due to thermal and performance issues. range issues.
- professional drivers hate these trucks. worse than international (truck brand). center-seating makes visibility, toll booths and logbook checks a chore. side mirrors are static and too high.
Musk needs to come back to earth. OTR (over the road, long hauls of 1800 miles) is a non-starter and will never work with current technology. professional drivers can not lose 30 minutes every 400 miles on these routes to charge, they will miss all their drop times.
make the tesla semi truck a lot tender. massive power to realign and arrange heavy trailers all day long in a parking lot or warehouse lot. bonus points: make the tesla semi truck a driverless lot tender since lot-tenders dont need a CDL.
EDIT: If you want to see something I think the actual trucking industry is getting excited about, check out Edison Motors. They're running a hybrid diesel/electric design that would work wonderfully for things like clean-air city driving where the batteries are getting topped up at every stop and speeds are under 50mph, with a small diesel generator for when the batteries need a charge. this is similar to how Great Western Rail in the UK runs their trains (just with batteries) and still supports a traditional truck drivetrain. It's designed to be punished from what i can see... the primary application is as a logging rig.
First of all your percentage of ownership is unrealistic. I joined in November 2019 and got a grant of a few thousand RSUs that fully vested before I left, and that I still have most of, plus I bought some shares in a few rounds of our ESPP when that became available -- as of today I have just under 5,000 shares. HashiCorp has nearly 200 million shares issued, so I own a hair over .0025% of the company. Really early employees got relatively big blocks of options but nobody I knew well there, even employees there long enough to be in that category (and there were very few of them still around by December 2021), was looking at "fuck-you money" just from the IPO.
Second, the current price isn't the whole story for employees. I had RSUs because of when I joined so the story might have been different for earlier employees who had options, but I don't think it differs in ways that matter for this discussion. As background for others:
* On IPO day in December 2021, 10% of our vested RSUs were "unlocked" -- a bit of an unusual deal where we could sell those shares immediately (or at any later time). Note "vested" there -- if you had joined the day before the IPO and not vested any RSUs yet, nothing unlocked for you. (Most of the time, as I understand it, you don't have any unlocked shares as an employee when your company IPOs -- you get to watch the stock price do whatever it does, usually go down a lot, for six months to a year.)
* At a later date, if some criteria were met (which were both a report of quarterly earnings coming out and some specific financial metrics I forget), an additional tranche of vested shares (I think an additional 15%) unlocked -- I believe this was targeted at June 2022 and did happen on schedule.
* After 1 year, everything vested unlocked.
At the moment of the IPO the price was $80, but it initially climbed into the $90's pretty fast. At one point, during intraday trading, it actually (very briefly) broke just above $100.
So, if you were aware ahead of time that the normal trajectory of stock post-IPO is down, and if you put in the right kind and size of limit orders, and if you were lucky enough to not overestimate the limit and end up not selling anything at all, then you could sell enough shares while it was up to cover the taxes on all of it and potentially make a little money over that. I was that lucky, and managed to hit all of those conditions while selling almost all of my unlocked shares (I even managed to sell a small block of shares at $100), plus my entire first post-IPO vesting block, and ended up with enough to cover the taxes on the whole ball of already-vested shares, plus a few grand left over. Since then, I haven't sold any shares except for what was automatically sold at each of my RSU vesting events.
For RSUs not yet vested at the IPO, the IPO price didn't matter because they sold a tranche of each new vesting block at market price to cover the taxes on them when they vested -- you could end up owing additional taxes but only, as I understand it, if the share price rose between vesting and sale of the remaining shares in the block, so you would inherently have the funds to pay the taxes on the difference. (And if the price fell in that time, you could correspondingly claim a loss to reduce your taxes owed.)
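The sell-to-cover mechanics described above can be sketched with made-up numbers (the 40% withholding rate and prices here are illustrative, not what HashiCorp actually used):

```python
import math

def sell_to_cover(vested_shares: int, vest_price: float,
                  withholding_rate: float = 0.40) -> tuple[int, int]:
    """At vest, sell just enough shares at the vest price to cover withholding.
    Returns (shares_sold, shares_kept). Rounding up is an assumption."""
    tax_due = vested_shares * vest_price * withholding_rate
    shares_sold = math.ceil(tax_due / vest_price)
    return shares_sold, vested_shares - shares_sold

print(sell_to_cover(100, 80.0))  # (40, 60): 40 sold for taxes, 60 kept
```

If the price then falls between vest and any later sale, the kept shares are worth less, but the withholding already happened at the vest price, which is exactly the trap the sales rep in the next paragraph fell into.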
There were a fair number of people who held onto all their shares till it was way down, though, and had to sell a lot to cover their tax bill in early 2022 -- I think if you waited that long you had to sell pretty much all your unlocked shares because the price was well down by tax time (it bottomed out under $30 in early March 2022, then rose for awhile till it was back up over $55 right before tax day, so again, if you were lucky and bet on the timing right, you didn't end up too bad off, but waiting till the day before April 15 was not something I bet a lot of people felt comfortable doing while they were watching the price slide below $50 in late February). I even warned one of the sales reps I worked with, while the price was still up, about the big tax bill he should prepare for, and he was certain I was wrong and that he would only be taxed when he sold, and only on the sale price. (He was of course wrong, but I tried...)
The June unlock was pretty much irrelevant for me because by that point the share price was down under $30 -- it spent the whole month of June after the first week under $35. The highest it went between June 30, 2022 and today, was $44.34. The entire last year it's only made it above $35 on three days, and only closed above $35 on one of them. I figured long-term the company was likely to eventually either become profitable, or get bought, and in either case the price would bump back up.
I was thinking about cutting my losses and cashing out entirely when it dropped below $30 after the June layoffs, and again in November when it was below $20, and then yet again when I left the company in January of this year, but the analyst consensus seemed to be around $32-34 through all of that so I held on -- kinda glad I did now instead of selling at the bottom.
Geohot? I know enough people at OpenAI to have heard four people's reactions at the time he started claiming 1T parameters based on timing the per-token latency in the ChatGPT web UI.
In general, not someone you wanna be citing with lengthy platitudes; he's an influencer who speaks engineer, and he's burned out of every community he's been in, acrimoniously.
I am a consultant now so it's a new company every few months.
There are groups of people you always make nice with.
* Security people. The kind with poorly fitting blazers who let you into the building. Learn these people's names; Starbucks cards are your friends.
* Cleaning people. Be nice, be polite, and again, learn names. Your area will be spotless. It's worth staying late every now and again just to get to know these folks.
* Accounting: Make some friends here. Get coffee, go to lunch, talk to them about non work shit, ask about their job, show interest. If you pick the right ones they are gonna grab you when layoffs are coming or corp money is flowing (hit your boss up for extra money times).
* IT. The folks who hand out laptops and manage email. Be nice to these people. Watch how quickly they rip bullshit off your computer or waive some security nonsense. Be first in line for every upgrade possible.
* Sysadmins. These are the most important ones. Not just because "root," but because a good SA knows how to code but never says it out loud. A good sysadmin will tell you which dark corners have the bodies, and whether it's just a closet or a whole fucking cemetery. If you learn to build into their platform (hint: for them, containers are how they isolate your shitty software in most cases), then you're going to get a LOT more leeway. This is the one group of people who will ask you for favors, and you should do them.
I wonder if the tribes have enough autonomy to build transmission lines quickly. Just the Navajo Nation could build enough solar/wind and transmission lines within their reservation and probably connect to the grids in Colorado, New Mexico, Utah, and Arizona. The US is incredibly slow at building transmission lines; it takes decades.
And, if they have enough autonomy to import Chinese panels (50% cheaper), a network of these nations can blanket the entire country with renewables.
I can't speak to the 2% figure, but as for the power sources, Bitcoin is somewhat unique, as it is ostensibly the most price-sensitive, most location-agnostic, and most interruptible instance of large-scale power consumption. Mining is done strictly for profit, so it is only performed at any scale where it is profitable.
Ironically, this nature can actually fortify the electric grid in some areas, such as Texas. Bitcoin mining businesses have discovered that the unreliability of the existing grid can be mitigated through vertical integration - they create renewable power generation facilities, and when the cost of electricity on the grid is low (because demand is low and supply is high), they use their own renewably-sourced energy for next to nothing.
When grid conditions deteriorate in Texas' deregulated energy market, wholesale electricity prices surge, as those are times when demand approaches or exceeds supply.
When that happens, the electricity being generated by these vertically integrated companies is worth more being sold to the grid than it's worth being used to mine bitcoin, so the miners all shut off (within milliseconds, as this is all automated), and the power that location generates starts getting sold to the grid, which increases supply, helping to lower the electricity prices, and to keep the lights on for everyday people.
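The dispatch decision described above is simple price arbitrage; a minimal sketch with made-up prices (real operations automate this against live ERCOT price feeds):

```python
def dispatch(grid_price_per_mwh: float, mining_revenue_per_mwh: float) -> str:
    """Decide whether a vertically integrated miner mines or sells to the grid.
    Sell power whenever the grid pays more than mining earns per MWh."""
    if grid_price_per_mwh > mining_revenue_per_mwh:
        return "sell_to_grid"
    return "mine"

print(dispatch(grid_price_per_mwh=45, mining_revenue_per_mwh=120))    # mine
print(dispatch(grid_price_per_mwh=3000, mining_revenue_per_mwh=120))  # sell_to_grid
```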
It's not a magic bullet that fixes the entire grid, but there is a growing body of evidence saying that it helps grid reliability in Texas more than it hurts, and these vertical integrations are overwhelmingly done with renewable energy sources.
I'm sure the location-agnostic aspect of Bitcoin mining does lend itself to deployment in places where power is plentiful but there is little local demand and transporting that power far away is cost prohibitive, though I don't have a specific example of that.
That's one of the reasons I self-host. I just don't trust the cloud providers regarding clarity and transparency... even if my self-hosted solution was far less reliable, less secure and less performant (I imagine it's not ;), I probably wouldn't change.
I personally use immich[1], a very complete solution with iOS / Android App, Server-Component and Sync / Backup option.
2c: if you need PostgreSQL elsewhere in your app anyway, then store your event data in PostgreSQL + FOSS reporting tools (apache superset, metabase, etc) until you hit ~2TB. After that, decide if you need 2TB online or just need daily/hourly summaries - if so, stick with PostgreSQL forever[1]. I have one client with 10TB+ and 1500 events per sec @ 600 bytes/rec (80GB/day before indexing), 2 days of detail online and the rest summarized and details moved to S3 where they can still query via Athena SQL[2]. They're paying <$2K for everything, including a reporting portal for their clients. AWS RDS multi-AZ with auto-failover (db.m7g.2xlarge) serving both inserts and reporting queries at <2% load. One engineer spends <5 hours per MONTH maintaining everything, in part because the business team builds their own charts/graphs.
Sure, with proprietary tools you get a dozen charts "out of the box" but with pgsql, your data is one place, there's one system to learn, one system to keep online/replicate/backup/restore, one system to secure, one system to scale, one vendor (vendor-equivalent) to manage and millions of engineers who know the system. Building a dozen charts takes an hour in systems like preset or metabase, and non-technical people can do it.
Note: I'm biased, but over 2 decades I've seen databases and reporting systems come & go, and good ol' PostgreSQL just gets better every year.
[1] if you really need it, there are PostgreSQL-compatible systems for additional scaling: Aurora for another 3-5x, TimescaleDB for 10x, CitusDB for 10x+. With each, there are tradeoffs for being slightly non-standard, so I don't recommend using them until you really need to.
[2] customer reporting dashboards require sub-second response, which is provided by PostgreSQL queries to indexed summary tables; Athena delivers in 1-2 sec via parallel scans.
But most importantly, there's the attractive power that companies doing on-premise infrastructure have over the best talent.