The Tower of London arguably qualifies as a fort built to protect its inhabitants from the city. In its original form, its most impressive and formidable defenses faced London.
Cool article, but I think the write-up no longer matches the actual code. Snippets in the article use `*p->p` a lot. The `*p` is a parser struct defined above as
So there's the missing `p`, even though it's no longer an int. So I presume the member variable was once known as `pos` but got renamed at some point. Some of the snippets did not get updated to match.
The numbers in the headline seem odd. They imply that each (fake|fraudulent) worker only nets $5000 per year for Kim. I know the system has some inefficiencies where people behind the scenes are helping the "employee" with the work and there are cost of living expenses, taxes etc. but that seems like a pretty low take.
This might include people working in lumber camps in places like Siberia, "mercenaries" in Ukraine, people in NK-managed restaurants in China, Laos, etc., or similar efforts that have been reported on, where the average revenue per worker is likely a lot lower.
I had the same thought - I guess there's additional overhead in paying the in-country proxy and probably also a lot of churn (being found out and fired, and then taking a long time to find another position).
And the reason they were modeled after the dollar bill size is because there were already many types of systems for storing and organizing them. That came in handy for the census.
The old BBC Connections series has a segment with James Burke using the old census tabulators.
Of course since the old syntax is merely deprecated and not removed, going forward you now have to know the old, bad form and the new, good form in order to read code. Backwards compatibility is a strength but also a one-way complexity ratchet.
Regular variadic arguments aren't used very often in C++, with the exception of printf-like functions. They're not rare enough for the majority of C++ programmers not to know about them, but they're definitely much rarer than their use in Python. The main reason people know about them at all is printf.
The "new" C-compatible form has been supported since the first ISO-standardized version of C++, if not longer. There hasn't been a good reason to use the "old" form for a very long time, which means the amount of C++ code using the deprecated form is very low.
Being deprecated means that most compilers and linters will likely add a warning or code-fix suggestion. So any maintained project that was accidentally using the C-incompatible form will quickly fix it. There's no good reason not to.
As for projects that for some reason target an ancient pre-ISO-standard C++ version, they wouldn't have upgraded to a newer standard anyway. So even if the new standard removed the old form completely, it wouldn't have helped those projects.
So no, you don't need to know the old form to read C++ code. And in the very unlikely case you encounter it, the way of accessing variadic arguments is the same for both forms, through the special va_list/va_arg calls. So if you only know the "new" form you should have a pretty good idea of what's going on. You might look up in a reference what the deal is with the missing comma, but other than that it shouldn't be a major problem for reading code. This is hardly going to be the biggest obstacle when dealing with code bases that old.
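For concreteness, here's a minimal sketch of the two spellings (the function name and values are made up). The deprecated pre-standard form differs only in the missing comma before the ellipsis; the va_list/va_arg access is identical either way:

```cpp
#include <cstdarg>

// Deprecated pre-standard spelling: no comma before the ellipsis.
//   int sum(int count ...) { ... }
// Standard C-compatible spelling: comma before the ellipsis.
int sum(int count, ...) {
    va_list args;
    va_start(args, count);          // access is identical in both forms
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(args, int);
    va_end(args);
    return total;                   // sum(3, 1, 2, 3) == 6
}
```

So a reader who only knows the comma form sees the same body either way; only the parameter list looks odd.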
The “new” form has been valid since the original 1998 C++ standard, where it was added for compatibility with C. “You now have to know” has therefore already been the case for the past 27 years. Back then the old pre-standard form was kept for backwards compatibility, and is only now being deprecated.
The old-style variadics are rarely seen in C++ these days, never mind this particular edge case. If you're working in a vaguely modern version of C++ this largely won't impact you. You can almost certainly ignore this and you'll be fine.
Unless you have a massive legacy code base that is never updated, C++ has become much simpler over time. At a lot of companies we made a point of slowly refactoring old code to a more recent C++ standard (often a couple of versions behind the bleeding edge), and it always made the code base smaller, safer, and more maintainable. It wasn't much work to do this either.
If two template spellings trip you up, C++ is not your biggest problem. The joke is that each 'cleanup' sands off a tiny rough edge while the committee leaves the old strata in place, so the language keeps accreting aliases and exceptions instead of dropping dead weight.
Several times now, C++ enthusiasts and indeed the committee have been told the way forward is the "subset of a superset": that is, Step 1, add a few new things to C++, and then Step 2, remove old things to make the smaller, better language they want.
Once they've obtained permission to do Step 1 they can add whatever they want, and in a few years for them it's time to repeat "Subset of a superset" again and get permission for Step 1 again. There is no Step 2, it's embarrassing that this keeps working.
I think Rust has shown a way to remove deprecated interfaces while retaining backwards compatibility: automated tooling to migrate to the next version, plus giving deprecated interfaces a few versions to stick around at the source level.
If you're talking about editions, this isn't how they work at all; every edition continues to be supported forever. (The part about automated migration tooling is true, and nice.)
There've been a few cases where code was unsound and should never have compiled, but did due to compiler bugs, and then they fixed the bugs and the code stopped compiling. These were handled through deprecation warnings with timelines at least several months long (Rust releases a new version every six weeks), but usually didn't have automated migration tooling, and didn't fracture the language mostly because they were rare edge cases that most programmers didn't encounter.
Editions are still allowed to remove old syntax or even remove APIs; they only can't break ABIs. So once something is removed from an edition, the code is still there in previous editions, but such symbols don't even get linked if they're unused, supporting progressive removal. And similarly, I could see editions getting completely removed at some point in the future. E.g., rather than indefinitely maintaining editions, in 20 years have a checkpoint version of the compiler that supports the previous 20 years of editions, and going forward editions older than 10 aren't in the build (for example, assuming a meaningful maintenance burden, which is hard to predict, as is what a specific policy would look like).
Have not yet. There's nothing stopping them, though, and from talking with the std folks it seems like they will likely at some point experiment with crossing that bridge.
C++ almost never removes features because of the ABI compatibility guarantees. Programs compiled with older versions of the standard can be linked against newer versions.
This is allegedly because in the 80s companies would write software, fire the programmers, and throw the source code away once it compiled.
Fixing syntax by definition does not affect the ABI. And Rust has shown that both ABI and API compatibility can be achieved in the presence of several "versions" (editions) of the language in the same build.
Probably because like 95% of C++'s issues are self-inflicted and don't need to be addressed if you use a different language in the first place, and 1% of them are fundamentally unsolvable by any language.
Do you actually know Rust or were you just talking out of your ass? I'd like you to enumerate even thirty problems of C or C++ that Rust doesn't fix, never mind hundreds (because Rust fixes a metric shit ton of C/C++ problems!)
lol. A functioning module system that's easy to use and adopted? A package manager? A well-implemented hash table? Fast compile times? Effectively no segfaults? Effectively no memory leaks? Comparatively no race condition bugs? A benchmark and unit test framework baked into the language? Auto optimization of the layout of structs? No UB?
I don’t know what you’re counting as “3% of the issues”, but if those are the 3%, they sound like massive productivity and safety wins that haven't existed in a language with a similar performance profile to C/C++.
Different (though related) things make compiling Rust slow. In both cases the compiler can spend a lot of time working on types which you, as the programmer, weren't really thinking about. Rust cares about types which could exist based on what you wrote but which you never made. C++ doesn't care about that, but it does need to do a lot of "from scratch" work for parametrised types that Rust doesn't have to, because C++ template expansion is basically textual substitution rather than "really" having parametrised typing.
If you're comparing against Clang, the backend optimiser work is identical in both cases: it's LLVM.
People who've never measured often believe Rust's borrowck needs a lot of compiler effort but actual measurements don't agree - it's not free but it's very cheap (in terms of proportion of compiler runtime).
For most day-to-day cases, Rust will actually compile faster because the build system does good incremental builds; not perfect, but better than C++. Also, clean builds are still “perfectly” parallelized by default.
And yes, while Rust has a reputation for being slow, in my experience it’s faster for most projects you encounter in practice, because the C++ ecosystem is generally not parallelized, and even when it is, many projects have poor header hygiene that makes things slow.
Having multiple compiler vendors is a problem IMO, not a feature. It fragments the ecosystem: the code compiles fine with one compiler but not with another. The maintenance of portable Rust code is significantly easier.
I think the way forward is multiple backends (LLVM + GCC) to improve platform support, but a single unified frontend that works correctly on all platforms is a good thing.
There is a single standard committee though. There is really nothing stopping them from shipping tooling that can do the conversions for people. The number of vendors isn't really the problem here. The problem is that the committee shifts that responsibility onto the vendors of the compiler rather than owning it themselves.
Can someone explain why helium is used for these purposes, as opposed to some other noble gas? I think there's more argon (it's about 1% of the atmosphere) than helium so is helium somehow special, or is it just cheaper, despite being rarer and non-renewable?
Helium has the second highest [1] specific heat capacity (after hydrogen); it's significantly higher than that of even water. It's damn efficient at cooling or heating. With that, it's chemically inert, unlike hydrogen or ammonia. There's no reasonable substitute.
Heat capacity is irrelevant -- argon and helium have exactly the same heat capacity per liter of gas, which would be the figure of merit in this context.
Heat conductivity, on the other hand, is an order of magnitude higher for helium, compared to argon, because its atoms are moving faster due to their lower mass.
When the gas is used for cooling, heat conductivity is important because it determines the conductivity through the boundary layer near the surface, where the velocity of the flow drops to zero at the surface itself, and all the heat transport is through conduction rather than advection.
Helium has ~150 mW/(m·K) vs argon's ~18 mW/(m·K), so you can't replace it.
The only alternatives to Helium are Neon, which is 3x worse and much more expensive, and hydrogen. However, hydrogen is flammable so it's a very bad idea to use it in a fab which has extremely poisonous gases and needs a cleanroom environment. A fire would ruin your whole factory and kill your engineers.
Cool, but I don't see how it's sorting anything. It just seems to play a randomized arrangement of the slices. You can re-randomize as much as you like but there's no sort option as far as I can see.
It randomizes slices of the sample and begins to play the slices in the random order. Meanwhile it begins the bubble sort algorithm at a pace that matches the tempo, sorting the slices into their chronological order. Throughout, it only plays the unsorted slices. (I was kinda hoping it would play the sorted sample at the end.)
I actually wanted it to play them as it went, so that it would be <unsorted><sorted> each time through, with the former shrinking and the latter growing.
The idea is that it slices the Amen Break into however many slices you specify, and the list being sorted is the indices of those slices. At each step, it plays one of the slices being compared.
Because it only plays the samples being compared, it never plays the sorted chunks, so it's missing a "punchline" of sorts.
You're right. It doesn't play the sorted parts, which is strange. I expected to have a series of random-then-controlled slices with the random part getting shorter and the controlled part getting longer, but it really is just a shortening loop of random beats.
Did you play it to the end? It's absolutely sorting from smallest to largest. Unless you have a confused understanding of bubble sort, it's doing a bubble sort.
The value being sorted isn't obvious to me. It's obvious that it is sorting. I'm guessing maybe some dB level of each of the hits/notes; if that were the case, I'd expect the initial unsorted view to line up with the pattern of the waveforms, which it doesn't. Maybe it's just an unsorted list of values sorted in sync with the rhythm. It's weird, though, that each segment corresponds to a segment of the audio. I just don't see how they're linked.
It's sorting by index of the slice. Pressing "shuffle" jumbles the slices up. So it puts the slices of the break back in the correct order. You never hear the result.
Set it to 8 slices and it becomes easy to see what it's doing: look at the waveform and the now-playing highlight jumping around.
It uses the built-in one. But as discussed in the article they ran into the problem where even when you try to force using the internal mic, iOS will silently switch to the mic on a pair of AirPods if there's a pair connected.
I don't think it would work because the accelerometer updates are at too low a frequency. Apple's developer info says:
```
Before you start the delivery of accelerometer updates, specify an update frequency by assigning a value to the accelerometerUpdateInterval property. The maximum frequency at which you can request updates is hardware-dependent but is usually at least 100 Hz.
```
100Hz is way too slow. Presumably some devices go higher but according to the article the peak signal is in the 3kHz to 15kHz range.
Unfortunately, it's not just elected officials that are problematic for prediction markets. The Secretary of War, for instance, is not an elected official, nor are leaders of the armed forces, and there is definitely a prediction market for war. Multiply this by every powerful appointee and every career bureaucrat and see what kind of picture that paints.
And the transparency must be real-time and MUST include the full dox on the beneficial owner of the contract/bet, with steep jail time for falsification/fronting, etc. They can even say it's for tax purposes: if they win the bet, they should pay income tax (and be able to deduct the costs of their losing bets against that specific income type).
I want to know if a bunch of senators or DOD personnel bet on event X, and I want journalists and OSINT watchers to know it in real time. That gives everyone information while naturally eliminating most of the advantage of insider trading, since nearly everyone will pile into the same trade and the odds/payoff will come closer to reality.
Knowing who is making the bets doesn’t prevent mildly corrupt officials from driving the outcome that’s going to win them some cash.
Knowing that high-level DOD officials were betting on us invading Iran does us no good if the only reason we invaded Iran was so they could win their long-odds bet. Sure, we can try to shame them, but now they're rich and we're fighting another Middle East war.
What is the current method that exists which stops CEO/executives from short selling their own company's stock, then driving that company's value down (which is easy to accomplish)?
Why can't that same method be used to prevent or indict gov't insiders who try to do the same?
That same method is the SEC (Securities and Exchange Commission), and it is widely regarded as simultaneously ineffective and heavy-handedly overreaching.
It is an inherently hard problem to identify insider trading when the trading of securities, or in this case bets/contracts, lacks participant identification and transparency.
The same solution would be best for both — everyone can trade freely with the sole caveat that all ultimately beneficial owners are fully identified and the trades are transparently published in real-time.
Braying about the "free market" when, in the actual market, players can hide their identities and covertly manipulate it, while an underfunded agency supposedly tracks them down after the fact, is just a farce.
A solution structured so it naturally and dynamically self-corrects is far better than enforcement bolted on after the fact. And yes, there would still be enforcement, requiring transparency and proper identification.
> Knowing that high-level DOD official were betting on us invading Iran does us no good if the only reason we invaded Iran was so they could win their long-odds bet.
Of course it does, if we’re willing to do ever-so-slightly more than jerk off on TikTok about it.
well, if they weren't doing something that would've otherwise been deemed illegal, then why would they consider it self-incriminating to have to follow KYC/AML rules?
You sound like a helpful world citizen. The other problem the US has is that it is illegal for a US citizen to pay a bribe, but there's no realistic enforcement. Luckily you can help solve this. Whenever and wherever you travel, you can keep some forms with you, and every time you are pulled over you can fill them out with the police in quadruplicate so that each of you can mail them to Washington. At some later point the US can try to cross-reference and determine who didn't mail theirs, and then whether anyone was actually under US jurisdiction during the incident.
The right to a fair trial fundamentally requires the government to do 100% of the job of proving you guilty, and it shouldn't force you to generate evidence against yourself while going about perfectly legal things.
There's no one in the world for whom creating an alternate "real" identity would be easier than someone who can influence or even determine military or covert actions. It's probably even legal for them to do so.
I’m not sure I agree - look at the crypto crime that happens as hackers breach databases and are able to link crypto holdings with human identities to target them.
Public trades will make people targets and that will create weird incentives.
I doubt that would change anything, in this era of shamelessness. Being corrupt has become a virtue, and winning lots of money is cool, doesn't matter if it was from gambling on war with insider trading.
I believe prediction markets and sports betting apps are societal failures that only serve to alienate the masses and further extract wealth to the top. Let's ban them all, we're not losing anything.
The bill also prevents senior government officials from betting on prediction markets if they are participating personally in the event on which they are betting.
It's also low-ranking members of the armed forces that have a lot more information than you'd expect. If you just banned the high ranking members from prediction markets, I actually don't think very much would change. (There would just be slightly more delay.)
Very few armed forces members have enough information to make actual meaningful trades. The low ranking ones do not have enough money to make meaningful trades even if they did.
That's always been a just-so story invented to justify insider trading. If weather predictors always bet on a weather prediction market, why would anyone else? They'd be guaranteed to lose money.
The meteorologist making the observation has the ability to sway the outcome. "Those automated observations were wrong, turns out the maximum temperature today was 61F not 60F." This already happens all the time. Whether insiders are doing this to win in prediction markets is up for debate.
They do what they can. It won't matter. But the point now is to call out the grift: to make plain, at least for the rest of you outside our borders, that America died of grifters squeezing dollars out of some weird machine that we couldn't or wouldn't shut off.