Hacker Times | new | past | comments | ask | show | jobs | submit | amluto's comments

> Starlink seems to just print money.

Does it? Those satellites are individually dirt cheap compared to historical communication satellites, but Starlink requires a whole lot of them and they depreciate outrageously quickly.

Compare to my personal favorite communication medium, single-mode-fiber. SMF from 20-30 years ago still works, is compatible with most current-generation wavelengths, and can carry extremely high bandwidth per strand if users are willing to put fancy optics and muxes at the ends or can carry lower speeds at transceiver prices that would have been almost unimaginably low 20 years ago.

Starlink satellites seem to have zero or even slightly negative value after five years.


Getting fiber to a house is relatively expensive, especially for houses in more rural areas, which are Starlink's main market. A Starlink satellite costs a lot more but can serve many customers.

Let's say a Starlink satellite costs $2 million all-in. (They launch about 25 at a time, the launch costs something like $25 million, add in another million for the satellite itself and operations.) They have about 10,000 satellites in orbit currently, and about 10 million customers. That's about 1,000 customers per satellite, so a five-year cost of $2,000 per customer. That's a fair bit less than it costs to run fiber to a rural house. And Starlink is pretty much a monopoly in their main markets (terrestrial telecoms is usually at least a duopoly) so they can charge more. I pay $85/month for symmetric gigabit fiber. Starlink charges $80/month for 200Mbps, or $120/month for "max." On top of that, they can charge enormous amounts for commercial users like airliners and cruise ships.
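The arithmetic in that paragraph can be sketched as a back-of-envelope calculation (all figures are the rough estimates assumed above, not SpaceX data):

```python
# Back-of-envelope Starlink economics, using the figures assumed above.
sat_cost = 2_000_000      # all-in cost per satellite, $ (assumed)
fleet = 10_000            # satellites in orbit (approximate)
customers = 10_000_000    # subscribers (approximate)
lifetime_years = 5        # assumed satellite lifetime

customers_per_sat = customers // fleet                        # 1,000
five_year_cost_per_customer = sat_cost // customers_per_sat   # $2,000

# Fleet replacement at a five-year satellite lifetime:
sats_per_year = fleet // lifetime_years                       # 2,000/year
capex_per_year = sats_per_year * sat_cost                     # $4 billion/year

print(customers_per_sat, five_year_cost_per_customer, capex_per_year)
```
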

According to https://www.reuters.com/business/finance/spacex-generated-ab..., Starlink revenue last year was north of $8 billion. They'd need to launch 2,000 satellites per year to maintain the current fleet. If $2 million is an accurate price tag for them, then that's $4 billion/year. Pretty nice profit, and there's a lot of room for growth.


This seems generally correct, but there are some things to note.

Once fiber is installed, it’s not particularly expensive to maintain, indefinitely. That $2k/customer needs to be paid again every five years, whereas for fiber it’s much closer to being a one time cost. (To be fair, fiber still depreciates and gets damaged.) And fiber is not that expensive to install: Starlink clearly wins for truly rural areas, but for merely low-density suburban areas it’s not nearly so clear.

Starlink’s performance is not awesome compared to high quality DOCSIS or fiber deployments, so they will struggle in areas that are well served by those, which cover quite a lot of the population weighted by ability to pay, at least in developed markets. So there’s a limited total addressable market issue.

Of course, Starlink may have other valuable applications, especially military.


I don't think they're competitive anywhere a halfway decent terrestrial option is available. But there are enough places where those aren't available to get them 10 million customers and growing, which is enough.

If I were them, my big concern would be getting overtaken by the buildout of cellular connectivity. A good 5G connection could be competitive. But if their direct-to-cell stuff works out, we might see the opposite: rural cellular infrastructure stops being built out or even diminishes because it's cheaper to provide coverage by satellite.


One thing I find rather amazing about all of this is the degree to which the Bitcoin community has tried, for years, to claim that quantum computers will be anything other than a complete break.

Sure, it takes a pretty nice quantum computer or a pretty good algorithm or a degree of malice on the part of miners to break pay-to-script-hash if your wallet has the right properties, but that seems like a pretty weak excuse for the fact that the entire scheme is broken, completely, by QC.

Does there even exist a credible post-quantum proof protocol that could be used to “rescue” P2SH wallets?


The best proposal I have heard for rescuing P2SH wallets after cryptographically relevant quantum computers exist is to require vulnerable wallets to precommit to transactions a day ahead of time. The precommitment doesn't reveal the public key. When the public key must be exposed as part of the actual transaction, an attacker cannot redirect the transaction for at least one day because they don't have a valid precommitment to point to yet.

That’s kind of adorable. Would you need to pay to record a commitment? If so, how? If not, what stops someone from DoSing the whole scheme?

I don't think you're understanding how cryptography works. A commitment is basically a hash that is both binding and hiding; in this example it's probably easiest to think of it as a hash. So you hash your post-quantum public key (something like Falcon-512), sign that hash with your actual Bitcoin private key (ECDSA, discrete-log based, not quantum safe), and publish that message to the Bitcoin network. Then quantum happens at some point and Bitcoin needs to migrate, but where do funds go? Well, you reveal the post-quantum public key, and then you can prove that funds from the ECDSA key should go there. From a technical perspective, this is a complete and foolproof system. DoSing isn't really a concern if you publish to the actual Bitcoin network, and it's impossible for someone to use up the key space (2^108 combinations at least).
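A toy sketch of the commit-reveal step being described (this models only the hash commitment; the real proposal would additionally sign the commitment with the wallet's ECDSA key and publish it on-chain, which is omitted here):

```python
import hashlib
import os

# Toy commit-reveal sketch. `pq_pubkey` stands in for a post-quantum public
# key (e.g. Falcon-512); the random salt makes the commitment hiding.

def commit(pq_pubkey: bytes, salt: bytes) -> bytes:
    return hashlib.sha256(salt + pq_pubkey).digest()

def verify_reveal(commitment: bytes, pq_pubkey: bytes, salt: bytes) -> bool:
    # Binding: producing a different (salt, key) pair with the same digest
    # would require a SHA-256 collision.
    return hashlib.sha256(salt + pq_pubkey).digest() == commitment

salt = os.urandom(16)
pq_pubkey = os.urandom(897)   # Falcon-512 public keys are 897 bytes
c = commit(pq_pubkey, salt)   # publish c now; reveal pq_pubkey only later
```
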

The reason this is a dumb idea is coordination and timing. When does the cutover happen? Who decides which transactions no longer count because they were "broken" by quantum computing? The idea is broken, but not from technical fundamentals.


The DoS attack in this scenario is someone just submitting reasonable-looking but ultimately bad precommitments as fast as possible. The intuition is that precommitments must be hard to validate because, if there was an easy validation mechanism, you would have just used that mechanism as the transaction mechanism. And so all these junk random precommitments look potentially legitimate and end up being stored for later verification. So all you have to do to take down the system is fill up the available storage with junk, which (given the size of bot networks and the cost of storing something for a day) seems very doable.

If the question is storage, Bitcoin itself provides a perfectly good mechanism. I don't know the exact costs, but it'd be in the range of ~$0.45 to store a commitment. That's cheap enough to enable good users with small numbers of keys, but also expensive enough to prevent spam. It's kind of the whole point of blockchains.

As for verification being expensive, it sounds like you don't know the actual costs. It's basically a hash. Finding the pre-image of a hash is very expensive, to the point of being impossible. Verifying that pre-image + hash function = hash is extremely cheap. That's the whole point of one-way functions. Bitcoin itself runs at ~1000 EH/s (exahashes per second).

Again, this isn't a technical problem. It's a coordination problem.


Commitment validation would indeed be trivial.

This whole scheme fails if an attacker can manage to delay a transaction for a day, and if the commitment also commits to a particular transaction fee, then the user trying to rescue their funds can’t easily offer a larger transaction fee if their transaction is delayed. But if the commitment does not commit to a transaction fee, then an attacker could force the transaction fee to increase arbitrarily.

Maybe the right strategy would be commit, separately, to multiple different choices of transaction fee.


Yes, that would be a concern. You could require a proof of work to submit a precommitment, so that DoSing was at least expensive to do. You could have some sort of deposit mechanism, where a precommitment would lock down 0.1 bitcoins (from a quantum-secure wallet) until the precommitment was used. I admit I'm glad I don't have to figure out those details.

24-hour latency to make a payment? What is this, the 20th century?

This is for rescue, not for payment. Once you've moved the coins to quantum-secure wallet, the delay would no longer be needed.

...probably some people would be very inconvenienced by this. But not as inconvenienced as having the coins stolen or declared forever inaccessible.


> ...probably some people would be very inconvenienced by this. But not as inconvenienced as having the coins stolen or declared forever inaccessible.

I don't know why anyone f's around with crypto anymore. So many caveats, such a scammy ecosystem. It just doesn't seem worth the trouble to support a ransomware and money laundering tool.


> the Bitcoin community has tried, for years, to claim that quantum computers will be anything other than a complete break.

Who specifically is claiming this? Satoshi literally mentioned the need to upgrade if QC is viable on bitcointalk in 2010.


Call me crazy, but I think if bitcoin is ever broken they're more likely to move to a centralized ledger than a more secure decentralized ledger. Roughly nobody invested in bitcoin cares about the original mission, they just care about their asset prices.

And the asset prices (at least partially) depend on true believers in the mission.

On the bright side, at least we'll have a clear indicator for when quantum computers actually arrive.

The problem here is the word "will". Because they don't exist.

If Bitcoin is broken then your bank encryption and everything else is broken also.

As far as I know, quantum computers still can't even honestly factor 7x3=21, so you are good. And it's iffy how honest the 5x3=15 demonstration was, either.

https://hackertimes.com/item?id=45082587

Bitcoin uses 256-bit encryption, it's a universe away from 5x3=15.


You are assuming that progress on factoring will be smooth, but this is unlikely to be true. The scaling challenges of quantum computers are very front-loaded. I know this sounds crazy, but there is a sense in which the step from 15 to 21 is larger than the step from 21 to 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139 (the RSA100 challenge number).

Consider the neutral atom proposal from TFA. They say they need tens of thousands of qubits to attack 256 bit keys. Existing machines have demonstrated six thousand atom qubits [1]. Since the size is ~halfway there, why haven't the existing machines broken 128 bit keys yet? Basically: because they need to improve gate fidelity and do system integration to combine together various pieces that have so far only been demonstrated separately and solve some other problems. These dense block codes have minimum sizes and minimum qubit qualities you must satisfy in order for the code to function. In that kind of situation, gradual improvement can take you surprisingly suddenly from "the dense code isn't working yet so I can't factor 21" to "the dense code is working great now, so I can factor RSA100". Probably things won't play out quite like that... but if your job is to be prepared for quantum attacks then you really need to worry about those kinds of scenarios.

[1]: https://www.nature.com/articles/s41586-025-09641-4


1) yes, everything is affected, but everything else is being migrated to PQC as we speak

2) "256-bit encryption" has different meanings in different contexts. "256-bit security" generally refers to a cryptosystem for which an attack takes roughly 2^256 operations. This is true for AES-256 (symmetric encryption) assuming classical adversaries. It is not true for elliptic curve-based algorithms, even though the standard curves are "256-bit curves"; that refers to the size of the group, and consequently to the size of the private key. The best general attacks use Pollard's rho algorithm, which takes roughly 2^128 operations, i.e., 256-bit curves have 128-bit security.
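The square-root speedup behind that 128-bit figure can be demonstrated with baby-step giant-step, a generic discrete-log attack in the same class as Pollard's rho (both take ~sqrt(n) group operations). A toy sketch over a small prime field; the numbers are made up for illustration, nothing Bitcoin-specific:

```python
import math

def bsgs(g: int, h: int, p: int) -> int:
    """Find x with g^x = h (mod p) in O(sqrt(p)) group operations."""
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^0 .. g^(m-1)
    factor = pow(g, -m, p)                       # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):                           # giant steps
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * factor) % p
    raise ValueError("log not found")

p, g = 1_000_003, 2            # small prime; ~1000 steps instead of ~10^6
x_secret = 123_456
h = pow(g, x_secret, p)        # "public key"
x_found = bsgs(g, h, p)        # recovered "private key"
```
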

In the context of quantum attackers, AES-256 is still fine, although theoretically QCs halve the security; it's not that big of a deal in practice, and ultimately AES-128 is still fine, because doing 2^64 "quantum operations" is presumed to be difficult in practice due to parallelization issues etc.

The elliptic curve signatures used in Bitcoin are attacked using Shor's algorithm, where the big deal is that it is asymptotically polynomial (about O(n^3)), meaning that factoring a 256-bit number is only 256^3/4^3 = 262144x more difficult than factoring 15. This is a big difference from "standard" exponential complexity, where the difficulty grows by factors of 2^n. (And let's ignore that elliptic curve signatures don't rely on factoring; the problem is essentially the same because Shor's algorithm solves both, as both are hidden subgroup problems.)
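The polynomial-vs-exponential contrast above works out as follows (a rough scaling sketch; the O(n^3) and square-root cost models are the asymptotic estimates from the comment, with constants ignored):

```python
# Shor's algorithm scales ~O(n^3) in the bit length n, versus ~O(2^(n/2))
# for the best generic classical attacks on an n-bit elliptic curve group.

def shor_ops(n: int) -> int:       # polynomial cost model
    return n ** 3

def classical_ops(n: int) -> int:  # generic square-root attack cost model
    return 2 ** (n // 2)

# Going from a 4-bit toy number (15) to 256 bits:
shor_ratio = shor_ops(256) // shor_ops(4)            # 262144x harder
classical_ratio = classical_ops(256) // classical_ops(4)  # 2^126x harder
```
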

the analysis is more complex but most of it is essentially in that paper and explains it nicely.


If your bank’s encryption is broken in the future, then, to recover, you will need to change your password, and that’s all. Bitcoin does not have that luxury.

Also, your bank can switch to securing TLS with post-quantum key exchange algorithms with little difficulty and with no particular scalability or re-architecting challenges.

As for “256-bit”, the best known quantum attack against symmetric ciphers is Grover’s algorithm, and Grover’s algorithm will never break a targeted 256-bit symmetric key in the lifetime of the universe even if run by a hypothetical alien civilization with a Dyson sphere. (It might plausibly break one of many targeted keys in a multi-key attack run by advanced aliens, but this won’t steal your money and it could be easily mitigated by moving to 384 or 512 bits.)
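The "lifetime of the universe" claim is easy to check: Grover's algorithm needs ~2^128 serial iterations against a 256-bit key. A sketch assuming an absurdly generous 10^18 iterations per second (a made-up rate, far beyond anything demonstrated):

```python
# Grover against a single 256-bit symmetric key: ~2^128 serial iterations.
SECONDS_PER_YEAR = 3.15e7
iterations = 2.0 ** 128        # ~3.4e38
rate = 1e18                    # assumed iterations/second, wildly optimistic

years = iterations / rate / SECONDS_PER_YEAR   # ~1.1e13 years
universe_age_years = 1.4e10
multiples_of_universe_age = years / universe_age_years  # hundreds of times
```
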


Your bank doesn’t depend only on cryptography. It would still be a lot of effort to simply make a transfer from a bank account. A quantum computer will not magically give you the password behind a hash you don’t have. TLS is moving to post-quantum as we speak.

For cryptocurrency, you have all the data you need to break the whole system ready in your hands, as you will be able to produce private keys from the public keys of wallets. Cryptocurrency depends only on cryptography.


In Bitcoin's case, public keys are only revealed during a transaction.

And every transaction completely spends the source keypairs' funds.

So the only attack vector a quantum computer could use is:

1. Observing newly broadcast/unconfirmed transactions

2. Deriving the private key(s) from the public key(s)

3. Creating and broadcasting its own transaction using the stolen keypairs before the original transaction confirms (presumably with a higher fee to win the confirmation race).

Please correct me if I'm wrong.

EDIT: correction: every transaction completely spends any selected UTXO of an associated keypair, not all of the "source keypairs' funds". Thus the attack vector also includes being able to steal from any keypair that has ever made a transaction and also has UTXOs.
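The steps above hinge on the fact that an unspent output commits only to a hash of the public key. A toy sketch: Bitcoin's P2PKH actually uses HASH160(pubkey) = RIPEMD160(SHA256(pubkey)), but RIPEMD-160 is unavailable in some OpenSSL builds, so double SHA-256 stands in here; the structure of the argument is unchanged:

```python
import hashlib
import os

def address_hash(pubkey: bytes) -> bytes:
    # Stand-in for Bitcoin's HASH160 (RIPEMD160 over SHA256).
    return hashlib.sha256(hashlib.sha256(pubkey).digest()).digest()

pubkey = os.urandom(33)       # stand-in for a compressed secp256k1 pubkey
addr = address_hash(pubkey)

# Before the owner spends, the chain contains only `addr`; a quantum attacker
# would first have to invert the hash to get the pubkey that Shor's algorithm
# needs. Only at spend time (step 1 above) is `pubkey` revealed, opening the
# race-to-confirm window described in steps 2-3.
```
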


The newest transaction mechanism (taproot; P2TR) exposes the public key of the receiver as part of the transaction. If it becomes more commonly used, the supply of bitcoins with exposed public keys would start going up again. See figure 5 of https://arxiv.org/pdf/2603.28846#page=14 .

So everything basically.

All serious financial businesses already have a quantum strategy and are actively working on transitioning their cryptography to post-quantum secure algorithms.

Bitcoin doesn't use 256 bit encryption, unless you mean 256-bit hashing. The cryptographic algorithms that are mostly under quantum threat are asymmetric, e.g. digital signatures.


> All serious financial businesses already have a quantum strategy and are actively working on transitioning their cryptography to post-quantum secure algorithms.

That’s hilarious and it’s not even April 1 anymore.

Lots of serious financial businesses still use FTP or use SFTP running some unbelievably bad server implementation on a Windows machine somewhere that uses such outdated cryptography that it doesn’t even interoperate with modern OpenSSH. Operations do not necessarily score highly on the ACID scale. It’s tied together with duct tape and baling wire.

On the other hand, the system works and is really remarkably resilient to various failure modes. You would be hard pressed to cause more than severe annoyance by compromising these crappy old systems.


I work in the sector, and I'm responsible in my own organisation for our quantum strategy. Most, if not all, of the serious players are doing this. This doesn't mean we have replaced all our old crypto; far from it.

NIST has defined a timeline for post quantum readiness to be complete by 2035. Crypto migrations historically take a long time; you can't just replace your own stuff, or upgrade just a server. All the clients that interact have to upgrade as well or it all breaks.


> If Bitcoin is broken then your bank encryption and everything else is broken also.

It's a lot easier for your bank to change encryption methods than it is for Bitcoin. Presumably you mean TLS here (where else do banks use encryption? Disk encryption?). People are already deploying experiments with quantum-proof TLS.

> As far as I know quantum computers still can't even honestly factor 7x3=21, so you are good. And the 5x3=15 is iffy about how honest that was either.

This is probably the wrong way to look at it. Once you start multiplying numbers together (for real, using error corrected qubits), you are already like 85% there. Like if this was a marathon, the multiplying thing is like a km from the finish line. By the time you start seeing people there the race would already be mostly over.


I still don’t really get the argument, like okay this extremely rich theoretical attacker can obtain the private key for the cert my service uses, and somehow they’re able to sniff my traffic and could then somehow extract creds. But that doesn’t give them my 2fa which is needed to book each transaction, and as soon as these attacks are in the wild anti fraud/surveillance systems will be in much harder mode.

I don’t see QC coming as meaning bank accounts will be emptied.

disclaimer: I work at a bank on such systems


My bank definitely doesn't require 2FA on every transaction. It only requires it to log in. I guess other people have more security-conscious banks than me.

Even still, I think there is some benefit to attackers being able to passively monitor connections: getting the info necessary to conduct some other type of fraud outside of the system. Lots of frauds live or die on knowing enough about the victim's financial situation.

However it really doesn't matter, when it happens we will just switch to different encryption.


It’s turning into a bit of a grift now. So many crypto-agility “consultants” popping up with their slop graphics. Never mind the fact that even if a relevant quantum computer is built, it will still cost the user millions of dollars to break each RSA key pair…

I don't necessarily think it would cost millions per key pair. Hard to say with the technology so immature, but it seems like the sort of thing with huge upfront costs but low marginal costs. Once you have a QC, you don't have to build a new one for the next key pair.

No, it’s completely wrong. It’s a very minor refinement of a terrible yet sadly common design that merely mitigates one specific way that the terrible design can fail.

See my other comment here. By the time you call the OP’s proposed verify API you have already screwed up as a precondition of calling the API.


You’re right, but I think the commenter you’re replying to is also right.

The OP is using unreadable hex strings in a way that obscures what’s actually going on. If you turn those strings into functionally equivalent text, then the signatures are computed over:

    (serialized object, “This is a TreeRoot”)
and the verifier calls the API:

    func Verify(key Key, sig []byte, obj VerifiableObjecter) error
(I assume they meant Object, not Objecter.)

This API is wrong, full stop. Do not use this design. Sure, it might catch one specific screwup, but it will not catch subtler errors like confusing a TreeRoot that the signer trusts with a TreeRoot that means something else entirely. And it requires canonical encodings, which serves no purpose here. And it forces the verifier to deserialize unverified data, which is a big mistake.

The right solution is to have the sender sign a message, where:

(a) At the time of verification, the message is just bytes, and

(b) The message is structured such that it contains all the information needed to interpret it correctly.

So the message might be a serialization of a union where one element is “I trust this TreeRoot” and another is “I revoke this key”, etc. and the verification API verifies bytes.

If you want to get fancy and make domain separation and forward-and-backward-compatibility easier, then build a mini deserializer into the verifier that deserializes tuples of bytes, or at most UUIDs or similar. So you could sign (UUID indicating protocol v1 message type Foo, serialization of a Foo). And you make that explicit to the caller. And the verifier (a) takes bytes as input and (b) does not even try to parse them into a tuple until after verifying the signature.
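The design being advocated — sign tagged bytes, verify before parsing — can be sketched like this (the quoted API is Go, but the idea is language-independent; HMAC stands in for a real signature scheme, and the tags and message shapes are made up for illustration):

```python
import hashlib
import hmac
import json
import os

KEY = os.urandom(32)                       # stand-in for a signing key
TAG_TRUST_ROOT = b"proto-v1/trust-tree-root"   # hypothetical domain tags
TAG_REVOKE_KEY = b"proto-v1/revoke-key"

def sign(tag: bytes, payload: bytes) -> bytes:
    # The signed message is raw bytes that begin with an explicit type tag.
    msg = len(tag).to_bytes(2, "big") + tag + payload
    return hmac.new(KEY, msg, hashlib.sha256).digest() + msg

def verify(blob: bytes, expected_tag: bytes) -> bytes:
    sig, msg = blob[:32], blob[32:]
    # 1. Verify the signature over opaque bytes first...
    if not hmac.compare_digest(hmac.new(KEY, msg, hashlib.sha256).digest(), sig):
        raise ValueError("bad signature")
    # 2. ...then check the domain-separation tag...
    tlen = int.from_bytes(msg[:2], "big")
    tag, payload = msg[2:2 + tlen], msg[2 + tlen:]
    if tag != expected_tag:
        raise ValueError("wrong message type")
    # 3. ...and only now hand back the (verified) payload for deserialization.
    return payload

blob = sign(TAG_TRUST_ROOT, json.dumps({"root": "abc123"}).encode())
```

Note that a signed TreeRoot cannot be replayed as a revocation: `verify(blob, TAG_REVOKE_KEY)` fails at step 2, and no attacker-controlled bytes are ever parsed before the signature check passes.
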

P.S. Any protocol that uses the OP’s design must be quite tortured. How exactly is there a sensible protocol where you receive a message, read enough of it to figure out what type (in the protobuf sense) it contains such that there is more than one possible choice, then verify the data of that type? Are they expecting that you have a message containing a oneof and you sign only the oneof instead of the entire message? Why?


> Re: I'd be astonished if 1000s of exit strategies weren't deep planned, maybe a dozen best-outcomes planned, before a single plane bombed anything. The US knows how to exit this.

> Isn't this just wishfull thinking?

The administration could have asked their favorite LLM to plan 1000 exit strategies, kind of like how, if you asked an LLM to make up a reciprocal tariff formula, you would have gotten approximately the administration’s formula.

None of this means that the results are at all useful.


I find the whole field of radiology to be utterly baffling. There are doctors who specialize in, and hopefully understand, specific diseases and/or parts of the body. But we have radiologists who are supposed to be able to look at images, taken by quite a variety of technologies and parameters, of any part of the body, and are expected to accurately interpret the findings, possibly without any relevant context.

In my personal experience interacting with the medical system, it’s, unsurprisingly, quite common for an actual specialist to look at the same images a radiologist looked at, and see something quite different. And it’s nearly always the case that a specialist, or a reasonably careful non-specialist who is willing to read a bit of the literature or even ask a chatbot [0], will figure out that at least half of what the radiologist says is utterly irrelevant.

So I think that the degree to which ML can perform as well as a radiologist is not necessarily a great measurement for ML’s ability to assist with medical care.

[0] Carefully. Mindlessly asking a chatbot will give complete nonsense.


Irrelevant to them. A radiologist is on the hook for missing a tiny possible tumor in a scan for a blood clot.

They like to show off occasionally. We had a rectal foreign body that was described as a Phillips-head screwdriver. I was hoping to catch them out by noticing it was Pozidriv, but it was in fact a Phillips.


I'd take it further/slightly parallel direction. Medicine is at the same time a science and a weird "feel and experience" area.

On the one hand it's a science: controlled experiments, calculated dosages, all based on an understanding of low level biology, fancy imaging methods, measuring currents in people's bodies and so on.

On the other hand, there seems to be plenty of "he seems fine to me", "tests came back fine but something seems off to me so let's try another test", "doesn't seem to be responding to this drug, let's try the other one", "in my experience this drug works better than that one". It seems like a pretty big chunk of subjectivity is actually a part of the field.


> On the one hand it's a science: controlled experiments

Those experiments are so hilariously expensive these days, and the results are often not actually fully published, so good data is often unavailable.

> calculated dosages

Often calculated based, in large part, on researchers’ vibes, both when choosing dosages and when designing the experiments.

> all based on an understanding of low level biology

There are many, many drugs with partially or even almost fully unknown mechanisms.


Radiologists work best in consultation with the physicians ordering the studies. Sadly, this is less and less common as workloads increase in medicine. When I started 20 years ago there were whole teams that came through the radiology department every morning to review all of the cases on their patients. Now I go weeks without seeing another physician.

> query file size, allocate buffer once, read it into the buffer, drop some NULL's into strategic positions, maybe shuffle some bytes around for that rare escape case, and you have a whole bunch of C strings, ready to use, and with no length limits.

I have also done this, but I would argue that, even at the time, the design was very poor. A much, much better solution would have been wide pointers: pass around the length of the string separately from the pointer, much like string_view or Rust’s &str. Then you could skip the NULL-writing part.

Maybe C strings made sense on even older machines which had severely limited registers: if you have an accumulator and one register usable as a pointer, you want to minimize the number of variables involved in a computation.


At least some Thinkpads also have an actual drain: if you spill a not-especially-corrosive liquid on it, you should be okay — the liquid will drain through a channel and out a drain hole on the bottom of the laptop.

On a quick read of the paper, I see two surprising things:

1. If there’s no initializer and various conditions are met, then “the bytes have erroneous values, where each value is determined by the implementation independently of the state of the program.”

What does “independently” mean? Are we talking about all zeros? Is the implementation not permitted to use whatever arbitrary value was in memory? Why not?

2. What’s up with [[indeterminate]]? I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.


> What does “independently” mean?

It can pick whatever value it wants and doesn't have to care what the program is doing.

Also the value has to stay the same until it's 'replaced'.

> Are we talking about all zeros?

It might be, but probably won't be. What makes you bring up all zeroes?

> Is the implementation not permitted to use whatever arbitrary value was in memory? Why not?

(Edit: probably wrong, also affects other things I said) It can. What suggests it wouldn't be able to?

> 2. What’s up with [[indeterminate]]? I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.

"has a value that happens to be arbitary" would be the default without [[indeterminate]]. Well, it can also error out if the compiler wants to do that.


> It can. What suggests it wouldn't be able to?

"Whatever value was in memory" would be depending on the (former?) state of the program, wouldn't it?


If that's what they're going for, it's way too much weight to hang on a single vague word like that. Trying to define "state of the program" in a detailed way sounds nightmarish. Let's say I'm the implementation. If I go get fresh (but not zeroed) memory from the OS to put my stack on, the garbage in there isn't state of the program, right? If I then run a function and the function exits, is the garbage now state of the program, or is it outside the state of the program? If I want a fixed init value per address, is that allowed as a hardening feature or disallowed as being based on allocation patterns? Does the as-if rule apply, so I'm fine if the program can't know for sure where I got my arbitrary byte values from?

And would that mean there's still no way to say "Don't waste time initializing it, but don't do any UB shenanigans either. (Basically, pretend it was initialized by a random number generator.)"


> Let's say I'm the implementation. If I go get fresh (but not zeroed) memory from the OS to put my stack on, the garbage in there isn't state of the program, right?

I'd argue that once you get the memory it's now part of the state of your program, which precludes it from being involved in whatever value you end up reading from the variable(s) corresponding to that memory.

> If I want a fixed init value per address, is that allowed as a hardening feature or disallowed as being based on allocation patterns?

I'd guess that that specific implementation would be disallowed, but as I'm an internet nobody I'd take that with an appropriately-sized grain of salt.

> And would that mean there's still no way to say "Don't waste time initializing it, but don't do any UB shenanigans either. (Basically, pretend it was initialized by a random number generator.)"

I feel like you'd need something like LLVM's `freeze` intrinsic for that kind of functionality.


> What does “independently” mean?

It means what it says on the tin. Whatever value ends up being used must not depend on the state of the program.

> Are we talking about all zeros?

All zeros is an option, but the intent is to allow the implementation to pick other values as it sees fit:

> Note that we do not want to mandate that the specific value actually be zero (like P2723R1 does), since we consider it valuable to allow implementations to use different “poison” values in different build modes. Different choices are conceivable here. A fixed value is more predictable, but also prevents useful debugging hints, and poses a greater risk of being deliberately relied upon by programmers.

> Is the implementation not permitted to use whatever arbitrary value was in memory?

No, because the value in such a case can depend on the state of the program.

> Why not?

Doing so would defeat the purpose of the change, which is to turn nasal-demons-on-mistake into something with less dire consequences:

> In other words, it is still an "wrong" to read an uninitialized value, but if you do read it and the implementation does not otherwise stop you, you get some specific value. In general, implementations must exhibit the defined behaviour, at least up until a diagnostic is issued (if ever). There is no risk of running into the consequences associated with undefined behaviour (e.g. executing instructions not reflected in the source code, time-travel optimisations) when executing erroneous behaviour.

> What’s up with [[indeterminate]]?

The idea is to provide a way to opt into the old full-UB behavior if you can't afford the cost of the new behavior.

> I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.

I believe the spelling matches how the term was used in previous standards. For example, from the C++23 standard [0] (italics in original):

> When storage for an object with automatic or dynamic storage duration is obtained, the object has an indeterminate value, and if no initialization is performed for the object, that object retains an indeterminate value until that value is replaced.

[0]: https://open-std.org/JTC1/SC22/WG21/docs/papers/2023/n4950.p...


> Doing so would defeat the purpose of the change, which is to turn nasal-demons-on-mistake into something with less dire consequences

What nasal demons?

UB is permitted to format your disk, execute arbitrary code, etc. But there’s lots of room between deterministic values and UB. For example, taking a value that does depend on the previous state of the program and calling it the “erroneous” value would give a non-UB, won’t format your hard disk solution. And it even makes quite a lot of performance sense: the value that was already in the register or at that address in memory is available for free! The difference from C++23 would be that using that value would merely be erroneous and not UB.

And I think the word “indeterminate” should have been reserved for that sort of behavior.


> What nasal demons?

Those that result from the pre-C++26 behavior where use of an indeterminate value is UB.

> But there’s lots of room between deterministic values and UB.

That's a fair point. I do think I made a mistake in how I represented the authors' decision, as it seems the authors intentionally wanted the predictability of fixed values (italics added):

> Reading an uninitialized value is never intended and a definitive sign that the code is not written correctly and needs to be fixed. At the same time, we do give this code well-defined behaviour, and if the situation has not been diagnosed, we want the program to be stable and predictable. This is what we call erroneous behaviour.

> And I think the word “indeterminate” should have been reserved for that sort of behavior.

Perhaps, but that'd be a departure from how the word has been/is used in the standard so there would probably be some resistance against redefining it.


But they do have constant height in the sense that, unless you resize the window horizontally, the height doesn’t change.

For what it’s worth, modern browsers can render absurdly large plain HTML+CSS documents fairly well except perhaps for a slow initial load as long as the contents are boring enough. Chat messages are pretty boring.

I have a diagnostic webpage that is a few million lines long. I could get fancy and optimize it, but it more or less just works, even on mobile.


Exactly, browsers can render it fast. It's likely a re-rendering issue in React. So the real solution is just preventing the messages from getting rendered too often instead of some sort of virtual paging.
