Besides my DA/Algo classes in College, I've never used C seriously. And you know, it's semantics like this that really make me go WTF lol....
From strtok man page... "The first time that strtok() is called, str should be specified; subsequent calls, wishing to obtain further tokens from the same string, should pass a null pointer instead."
Really?? a null pointer.. This is valid code:
char str[] = "C is fucking weird, ok? I said it, sue me.";
char *result = strtok(str, ",");
char *res = strtok(NULL, ",");
You have to understand the context, and the time period. Memory and CPU cycles were precious. All computers being 24/7 networked wasn't a thing, so security wasn't much of a concern. API design tended to reflect that.
Not mentioned in my initial comment, but yeah, I'm viscerally aware of the effect that the time period, and the resources available then, had on API design in C and other languages of that era.
The null pointer in place of the operand here just seemed like a really good quirk to point out
It's like this because the 1970s C programmer, typically a single individual, is expected to maintain absolute knowledge of the full context of everything at all times. So these functions (the old non-re-entrant C functions) just assume you - that solo programmer - will definitely know which string you're currently tokenising and would never have say, a sub-routine which also needs to tokenize strings.
All of this was designed before C11, which means that, hilariously, it was always Undefined Behaviour to write multi-threaded code in C. The language had no memory ordering rules yet, and if you write a data race (how could you not, in multi-threaded code?) then the "Sequentially Consistent if Data Race Free" guarantee (SC/DRF) does not apply, and in C all bets are off once you lose Sequential Consistency.† So in that world this was enough: absolute mastery, a single individual keeping track of everything. Did it work? Not very well, but hey, it was cheap.
† This is common and you should assume you're fucked in any concurrent language which doesn't say otherwise. In safe Rust you can't write a data race so you're always SC, in Java losing SC is actually guaranteed safe (you probably no longer understand your program, but it does have a meaning and you could reason about it) but in many languages which say nothing it's game over because it was game over in C and they can't do better.
Wow... A speculative branch-prediction path actually gets preemptively executed regardless of the branch outcome? Even when the execution has side effects??? That's quite amazing. Do modern CPUs still do speculative execution like this and just put extra safeguards around the side effects, or do they just prefetch / decode instructions nowadays?
Author here: This is not a common problem. I think I was told that Alpha had basically the same bug but it is a bug, for sure. Speculative execution causing problematic side effects is a deal killer.
Speculative execution, however, can cause less problematic side effects. For instance, a speculatively executed load or prefetch will usually actually prefetch which will pollute the cache, TLB, etc., and reveal side-band information, but that is a performance problem and perhaps a subtle security flaw, not a correctness bug like this was.
I remember reading this many years ago, it was great.
In the last week Raymond Chen on his The Old New Thing mentioned the concept of delay slots on some CPUs.
It sounds like a similar thing, just formalized so it's not a bug: the instruction after a branch always executes, regardless of whether the branch is taken.
I think that is correct, but yes, a delay-slot instruction that is always executed is easy to reason about. Speculative execution that gets real? shudder
Two things. First, does anyone else feel like 2017 was not 9 years ago and instead feels like it was just yesterday? I use a 2017 iMac running macOS 13.7.8. It appears my hardware will not support any newer version of macOS. For the most part, I haven't been too discouraged by this, as I prefer older macOS designs over the newer ones.
However, this is the second time in 2 days I've actually hit a wall in the Apple eco-system due to an older OS.
Last night I tried to build Ghostty to hack on a feature... it needs the Xcode 26 SDK, which isn't available with Xcode 14 (the latest version I'm able to install).
Now today, attempting to try this app out, I can't launch it due to being on too old of an OS.
It's really a shame because this iMac from 2017 is quite the capable machine. Absolutely no reason to upgrade it (from a hardware / performance standpoint).
The absolute newest Mac in my home is a 2017 and is limited to 13.7.8, also. It's still a beast, and I've never really thought of it as "old." The macOS (and iOS) ecosystem, though, is brutal on us "slightly older" hardware owners. We get dropped so quickly, by both Apple and by 3rd party developers.
Windows developers would think nothing of keeping their applications running on Windows 7 (16 years old) or Windows 10 (11 years old), but my 9 year old Mac is somehow ancient.
Subtle bugs always creep in, in increasing numbers, for Windows applications that keep getting software releases; we tend not to notice because most of us run actually supported versions most of the time, and even when we don't, it's only for a year.
I see people on YouTube trying to build "modern desktop" experiences on Windows 7 and 8, and it takes some serious doing, given all the incompatibilities with things like browsers: dialogs about missing features, and crashes to desktop more often than things working.
So much so that there are dedicated forks of Chrome and Firefox for exactly this purpose.
Not the design per se (though you're right that there's a lot of SwiftUI usage here that's only available on newer macOS versions), but mainly because it uses the new @Observable observation macro, which is only available on macOS 14+.
Who knows if this is the reason Trump is putting this trade war in place and raising tariffs, but this paper is a very interesting, thought-provoking perspective on the position the US is in with regard to the global economy [1].
That Trump brought the author into his political circle, though, could be evidence that this is the ultimate goal.
The paper is quite lengthy; early on, however, Stephen explains the idea of the Triffin Dilemma: a country that acts as the world's reserve currency creates enormous demand for its currency for things beyond goods, and is thus at a disadvantage that exacerbates its trade deficit. This is implicit for a country in whose currency most global trade is settled, not to mention the benefits of issuing the world's reserve currency as a store of value or investment.
I've wondered since the tariffs were announced how much impact they can actually have, but beside that point, what is a reserve-currency country to do? Give up its reserve-currency status? There are significant downsides to that as well...
Well, isn't that the whole point? At a fundamental level, investment profit is payment for the risk you take. No risk equals no profit. There are "safe" investments currently: you can get paid roughly 4% a year to hold Treasuries right now, considered a "risk-free" investment (which, sure, can be argued on the merits).
But at the end of the day the only way to profit from an investment is taking some risk. It all comes down to pricing that risk.
Does this mean it's impossible in Zig to do strictly stack-based recursion, and that by the mere inclusion of a recursive function you're implicitly getting heap allocations alongside it?
You can put a big buffer on the stack, and use this buffer to break your cycles. At some point you'll run out of this buffer and be forced to handle failure, rather than triggering a stack overflow segfault.
So it will be the same thing but with more (error handling) steps.
This annoyance can be avoided by avoiding recursion. Where recursion is useful, it can be done, you just have to handle failure properly, and then you'll have safety against stack overflow.
Wait, so how do I write mutually recursive functions, say for a parser? Do I have to manually do the recursion myself, and stick everything in one big uber-function?
What is the value proposition for these form libraries? Is it scale? Is it the custom builder? How complex are people's HTML forms these days from a UX perspective?
I was browsing the code, and noticed this forms library was using Supabase, presumably a paid service if this OSS library takes off. I just can't seem to grasp why a custom form building library needs a 3rd party, managed Database included. Scale maybe?
These are genuine questions as I'm woefully unaware of the state of HTML forms / Frontend in 2025
There's a few reasons. The biggest one, IMO, is that it lets non-technical users change things quickly without having to go through the engineering team. Obviously there are limits to that, but in many cases, a product or marketing team wants to modify a form or test a few variations without having to put it into a backlog, wait for engineers to size it, wait for an upcoming sprint, then wait another two weeks for it to get completed and deployed. (Even in more nimble organizations, cutting out the handoff to engineering saves time, eliminates communication issues, and frees up the engineering team to do more valuable work.)
On the technical side, these form builders can actually save a decent amount of development effort. Sure, it's easy to build a basic HTML form, but once you start factoring in things like validation, animations, transitions, conditional routing, error handling, localization, accessibility, and tricky UI like date pickers and fancy dropdowns, making a really polished form is actually a lot of work. You either have to cobble together a bunch of third-party libraries and try to make them play nicely together, or you end up building your own reusable, extensible, modular form library.
It's one of those projects that sounds simple, but scope creep is almost inevitable. Instead of spending your time building things that actually make money, you're spending time on your form library because suddenly you have to show different questions on the next screen based on previous responses. Or you have to handle right-to-left languages like Arabic, and it's not working in Safari on iOS. Or your predecessor failed to do any due diligence before deciding to use a datepicker widget that was maintained by some random guy at a web agency in the Midwest that went out of business five years ago, and now you have to fork it because there's a bug that's impacting your company's biggest client.
Or, instead of all that, you could just pay Typeform a fraction of the salary for one engineer and never have to think about those things ever again.