Hacker Times new | past | comments | ask | show | jobs | submit | jof's comments

SSL was developed by Netscape in the 90s and evolved into TLS. Netscape Navigator essentially evolved into Mozilla.

"They've" been at it from the beginning, so it somehow seems understandable that Mozilla has a lot of "SSL" momentum or carryover.


Actually, we wrote this many years ago and left Mozilla, and nobody is really updating it other than adding new configs. It's not super useful anymore :)

At the time it made sense to us because you couldn't have good SSL configuration everywhere (it was not well supported), so we had trade-offs and created tiers of configs. TLS was barely coming out, so SSL was still the name of the game.

Nowadays, just use the latest TLS defaults and you're golden.


It seems to me like the underlying issue was ignoring HTTP semantics and making a state-changing link like a logout link a plain <a> (HTTP GET) and not something like a form submission (HTTP POST).

Having intuition for the foundational layers of our tools saves so much time and future headaches.
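For concreteness, here's a minimal sketch of that idea using only Python's stdlib `http.server` (the route and cookie names are hypothetical): the logout endpoint refuses GET, so prefetchers and crawlers following a link can't change state, and the session is only cleared on POST.

```python
from http.server import BaseHTTPRequestHandler

# Hypothetical sketch: logout only mutates state on POST, never on GET.
class LogoutHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/logout":
            self.send_response(405)             # GET is "safe": refuse to mutate
            self.send_header("Allow", "POST")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()

    def do_POST(self):
        if self.path == "/logout":
            self.send_response(303)             # redirect after the state change
            self.send_header("Set-Cookie", "session=; Max-Age=0")  # clear cookie
            self.send_header("Location", "/")
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):               # keep the sketch quiet
        pass
```

A prefetching `<a href="/logout">` would just see the 405 and nothing would change; only a form submit (POST) actually ends the session.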


Author of the post here,

There was no form submission, I'm not sure where you got that. There was also no POST. Though yes, I agree that under core HTTP semantics you wouldn't want to change state on a GET, and that should include not calling `Set-Cookie`. And yet the reality is that nearly every application - and many popular libraries like auth0 - does in fact set and clear cookies on `GET`.

The issue here was that the `Link` component in NextJs

- does preloading by default (which is a bad idea exactly for the above reason of reality being different from theory)

- doesn't do preloading by default when running on the dev server (so you don't see the error until it's deployed)

- because it does preloading directly in javascript, it can't possibly follow the HTTP semantic of not actually applying cookies until later when the cached route is used

Everything else was the wild goose chase bits.

Also I asked claude to criticize the article as a web forum might before publishing, and this is definitely the tone it gave :D

Oh, also, I'm pretty sure I got the part wrong where I was talking about the preload attribute in HTML, but so far no one's noticed. I should correct that.


> There was no form submission, I'm not sure where you got that. There was also no POST.

OP was saying the logout function should have been behind a form submission / POST.


Ah, yes, I agree that would have been technically correct, but like I said, it's just not how a lot of the web works. auth0-nextjs seems to react to `GET` by default (though it might also work with `POST`, and you certainly can override things)


So OP was correct that a proper use of the foundational layer of HTTP would have saved time, yours in particular, right?

Also, I didn't get your "Claude predicted your tone" smiley thing. OP's tone seemed polite and clear. Your tone, on the other hand, seemed defensive and dismissive. Even after realizing that you initially misunderstood what OP said, you added an "I mean" and a "but like I said" to reinforce that you were right, even while misreading what OP said (rather than just acknowledging you got it wrong on the first reading).

I would go even further and speculate that you were so predisposed to get a dismissive tone from a web forum (your previous Claude test suggests that) that you took a perfectly fine comment and misread it in a way that felt like the "wrong tone" to you, even misunderstanding what the post said. All of that to confirm your predisposition.


I think I let my context collapse a little here. The article had gotten really good feedback when I passed it around in the various communities I'm in, but I hadn't written it with the broader hackersphere in mind. I did post the story here, but I didn't really think it would get traction. I should have done some double-checking and added caveats and context beforehand.

My comment about Claude was simply intended to giggle at how much it has us pegged, not to call out the op directly.


It's well beyond "technically correct", especially for the web. The "safe methods" section is quite explicit about that:

https://www.rfc-editor.org/rfc/rfc7231#section-4.2.1

https://www.rfc-editor.org/rfc/rfc9110#name-safe-methods

By electing to mutate state on GET, one subscribes to a world of hurt.

It is totally how the web works, both as defined by HTTP and in practice. Surely one can pile a dozen workarounds to circumvent the GET safety definition, but then it's just flat out simpler to have it be a POST or DELETE and work as intended.

That a lot of people are doing it a certain - broken - way certainly does not mean they are right.


That would also have been practically correct, sparing you this bug and the many hours of debugging, and making you resilient to byzantine/adversarial technologies (NextJS reimplementing prefetching itself and making debugging very difficult)


"because it does preloading directly in javascript, it can't possibly follow the HTTP semantic of not actually applying cookies until later when the cached route is used"

I may be wrong, but I don't think using JavaScript vs using the standard HTML <link> element to prefetch makes a difference here. I don't see anything in the HTML specs about preload or prefetch delaying cookie setting to sometime after the resource is actually loaded (although admittedly I find this bit of the spec somewhat hard to read, as it's dense with references to other parts of the spec). I tried it out, and both Firefox and Chrome set the cookies for preloaded and prefetched links when the resource is loaded, even if the resource is never actually used.


> Also I asked claude to criticize the article as a web forum might before publishing, and this is definitely the tone it gave :D

Come on.


This is a very good example where the HTML extensions that alex proposed here:

https://www.youtube.com/watch?v=inRB6ull5WQ

(TLDW: allow buttons to make HTTP requests; allow buttons & forms to issue PUT, PATCH & DELETE; allow buttons, forms & links to target elements in the DOM by id instead of only iframes)

would improve the web platform. You could have a stand-alone logout button that issues a DELETE to /session or whatever. Nice and clean.


I mean, it should just be a submit for a form with a /logout POST action. It’s standard and what web devs have been doing for decades


Yeah, the problem is that it requires a form, which has layout implications w/o styling and POST is not idempotent, whereas a logout operation typically is idempotent. Being able to issue a DELETE to a URL like /session from an element that doesn't have layout implications would be ideal.


A button doesn't have to be inside a form, though. You could have an empty form as a neighbour to the button (or anywhere else inside the page body), and associate the button with it.

  <button form="logout-form" ...>logout</button>
  <form id="logout-form"></form>
No layout implications that way, barring any nth-child css (solvable by putting the form somewhere else). Note that the button's form attribute matches the form's id, not its name. Doesn't solve the form being limited to GET/POST, but styling concerns are at least handled.


Doable, but rarely used, inconvenient and awkward. Alex proposes allowing buttons to be stand-alone hypermedia controls, which also allows multiple buttons located within a form to perform different actions (e.g. save vs. cancel)


Oh for sure, standalone elements would definitely be better, I just wanted to point out that there's a way around needing to do silly stuff like <form class=blabla>

Though in my experience, it's great in frameworks like svelte. Define your forms at the top of the component, and you can see at a glance what native actions the component can do, and where it posts to.


> POST is not idempotent

It certainly can be, there’s just no _requirement_ that it is idempotent. There’s no problem with having an idempotent POST operation.

I'm also not a fan of a DELETE /session option, the client shouldn't have to care about the concept of "session", it's the server's problem. But I'm not a fan of resource-based endpoints in general, I tend to prefer task-based endpoints, so /logout makes more sense to me.

Also, even if it was possible to trigger DELETE in html it probably would be implemented as a form. Not really a problem, making a form element inline is trivial, and probably needed in various parts of an app (any button that changes some state)


right, but the browser doesn't know that and so it has to treat the operation as if it were not idempotent (i.e. warn on a resubmit)

Your second paragraph indicates that you do not like the REST pattern of the web, which is fine, but I hope you can appreciate that some of us would like the web to make it possible to abide by that pattern

the last point is addressed in Alex's proposal to allow buttons to function as stand-alone hypermedia controls


Genuine question: how do you believe one should learn these semantics? It's something I've been pondering myself recently, because I agree with you that the foundational knowledge for our work in any tech stack is usually the most important for understanding higher abstractions. But with so much to know, it feels impossible to 'know it all', so to speak, especially if you wear more than one specialized hat. And even if you're only trying to learn the foundations, how do you know what those foundations are if you're not already immersed in that stack?

This is mostly just my personal ramblings, but I'd be curious about other people's viewpoints on this.


I remember many years ago when I used to read print magazines about programming and web development.

One of those magazines told a story about a web site that had lost a lot of data. What had happened? Well, somehow they had this page that

1. Required no authentication at all, and

2. Was using links like

  <a href="/path/to/file?action=delete">Delete file</a>
And so the Google web crawler had come across this page and happily visited each and every one of those links.

That’s when I learned about the importance of using forms with POST requests for certain actions instead of using links that send GET requests.

And then some years later someone told me about this thing called HATEOAS and about RESTful APIs and that actually there are different HTTP verbs you can use other than just GET and POST. Like for example

  DELETE /path/to/file
As for your question about how someone is supposed to learn that these days?

Ideally, whatever web development tutorials or courses or books they are using would at some point tell them about the different HTTP verbs that exist, how and when to use each of them, and crucially the bad consequences of using GET for anything that has side effects, like logging out a session or deleting a file.


This can be complex sometimes, but in the case of HTTP methods specifically, it's hard to imagine how one could avoid learning about them.

You learn HTML (and see a mention of the "POST" method); or read an HTTP primer (and see a reference to methods); or open the browser inspection window and see the prominent "Method" column; or see a reference in some other place. You get interested and want to look it up - say, Wikipedia is often a good start [0] for the generic part. And the second sentence of the description says it all:

> GET: The GET method requests that the target resource transfer a representation of its state. GET requests should only retrieve data and should have no other effect.

[0] https://en.wikipedia.org/wiki/HTTP#Request_methods


MDN, no matter how highly rated, is still insanely underrated.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods

Also the many HTTP RFCs, this one in particular covers semantics:

https://www.rfc-editor.org/rfc/rfc9110.html

As the age old wisdom says... RTFM :P

HTTP is awesome, I'm in love with it. Beautiful piece of work.


IMO it's very understandable to not know about this sort of thing starting out. Everybody was new once, and it's much easier to get motivated to build cool stuff than to read about all the fine details of all of the technologies we're using. I say, go ahead and take the shortcuts and build some cool things in a maybe sloppy way as long as the traffic and stakes aren't too high. Then, once you've got something cool built, take some time every now and then to seek out and read about more of the details of some of the systems and tools you're using.


While it may not be quite the same answer you're looking for, I'd suggest the OWASP, and at least their top 10 for sure. Learning about SSRF may not have stopped this behavior (it's coming from the authenticated browser), but if you're doing CSRF checks you won't get logged out by random links on other peoples sites, and that whatever logged you out was a legitimate action.


Personally, I think it comes from experience and learning. I read the comment and an old HN story popped into my head.

That was where I "learnt" side effects of not using verbs properly. It stuck to me from then.

https://hackertimes.com/item?id=16964907


It requires slowing down. Unheard of.


Exactly. And ditching the "move fast and break things" mindset. Learn your craft and embrace the learning process. Always be curious about how the stuff below your layer works, fundamentally. Recurse on searching for the seminal works that defined those layers.

This seems appropriately relevant today: https://hackertimes.com/item?id=41208627

We (the industry) have built up so many layers upon layers and frameworks designed to make things easier that it just seems to attract newcomers to software engineering with this mindset that all it takes is to start with the sample-app for a high level framework, hack on it with trial and error until it does something they want, and then take to social media with proclamations of "Look! I built a thing! You can hire me to build your thing now!"


A book is always a good start.


The RFCs are often fairly well written and not so hard to digest.


Follow the Ruby on Rails getting started guide and build a toy Rails web app. It has conventional http semantics baked in, you'll learn a lot.


To be fair, <a> tags can't send out non-GET requests. Which yes, can be interpreted as "logout controls should be buttons in forms, not links", but I would really like native htmx-like attributes for interactive HTML elements.


What’s stopping you from using htmx with a form? Hx-post works as a form element attribute just fine.


Nothing. I just think it logically makes sense as a native feature.


Make an LED blink.

Then, connect an RGB LED and experiment with PWM signal generation.

Then, experiment with network programming, accepting a UDP packet to the ESP32 that sets the color of the RGB LED.


Alternatively, you can go high-level immediately - instead of accepting a UDP packet to set the color, run a webserver on it with functionality to change the RGB LED color that you can access from any browser. Modern microcontrollers have enough resources to just spend them like that.


Are we talking lightweight servers like minihttp and shizaru, mid-level lighttpd, or big-ass Apache and nginx?


We're talking lighter than minihttp and shizaru. Instead of a separate process on an OS, you'd use a library that allows your firmware to respond to HTTP requests, while still running all the relatively complex UI/UX code on the user's computer or phone, with the microcontroller only handling the physical functionality. It reduces the need to have and manage physical buttons/lights/screens/etc on the device itself, and you can do it without writing a separate app to generate custom command & control messages. Compared to the grandparent post's example of encoding RGB light control in a UDP packet, the UDP version would probably need three times the amount of code on both the sending and receiving side, whereas HTTP-based RGB control can probably be done in ten or so lines of your own code.
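As a rough illustration of that "ten or so lines" claim, here's a hedged sketch of the device-side parsing in Python (the `/rgb` path and parameter names are made up; on an actual ESP32 you'd run much the same code under MicroPython and feed the result to the PWM channels driving the LED):

```python
from urllib.parse import urlparse, parse_qs

# Pull clamped r/g/b values out of the first line of an HTTP request,
# e.g. "GET /rgb?r=255&g=64&b=0 HTTP/1.1".
def parse_rgb(request_line):
    path = request_line.split()[1]                    # "/rgb?r=255&g=64&b=0"
    qs = parse_qs(urlparse(path).query)
    return tuple(min(255, max(0, int(qs[k][0]))) for k in ("r", "g", "b"))
```

Everything else (the socket accept loop, writing back a tiny HTML page) is a handful more lines, which is the point: the browser supplies all the UI.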


Oh, that's pretty neat. Basically trimming the server down to the routing itself.

    1. accept HTTP
    2. check for valid endpoint
    3. if yes, do thing and exit
    4. if no, 'error'
Client side has fancy UI for essentially templating per-HTTP request or command sent to the device.

I guess the only issue I see is you'd need some sort of firewalling or security. Otherwise any rando could fire HTTP requests at the thing and make it do stuff.

How would you structure this on the device's side? If a webserver's too big, then I imagine an init system is also too big.


You generally don't structure it as having a webserver, you'd structure it as an app (you run a single app on the device, there's no separate OS involved) that can react to HTTP requests - i.e. my mental model is that you don't run a webserver on the device, but instead that the device becomes a webserver.

You can structure the on-device app as 'slaved' to the web requests, where it simply waits for requests in a loop and only does stuff in response to a request - for example, take a measurement from some sensors and send them back with some surrounding HTML.

Authentication/authorization is an issue, but it has all the same issues and solutions as webapps - login+session cookies; or whitelisting IP ranges; or TLS client certificates; etc.


Pretty barebones but working web servers are possible. Example: https://randomnerdtutorials.com/esp32-web-server-arduino-ide...


ugh.. arduino.

Better to start with ESP-IDF, there's a pretty full featured well documented web server, and a lot more.

https://github.com/espressif/esp-idf/tree/master/examples/pr...


Just install ESPHome on it :)


It seems to have gone out of favour for some reason I don’t understand, but subscribing to an MQTT topic from an ESP is easy and performant.


Could it also be, with the advent of Satellite SOS, that Apple is starting to explore non-conventional mobile RF protocols, and that Qualcomm's product offerings are only really geared towards generic, standards-based mobile networks?


Satellite SOS is based on the same bands used by 5G, and it is now part of 5G NR Release 17. Qualcomm modems since the Snapdragon X60 have had support for these bands. Even before Qualcomm officially announced Snapdragon Satellite with the X70 modem, Huawei was offering satellite communication with the Snapdragon X65 in China, which is the same modem found in the iPhone 14 series.


The GPS P(Y) and Military codes exist to (hopefully) prevent spoofing.


They aren't. A spoofer doesn't need to know what the signal means or be able to decrypt it; it can just retransmit a signal received in a different place at higher power. The only way to distinguish it from the real one is timing, but that requires an atomic clock, which at $15,000 is too expensive for most applications.

But military-grade GPS receivers use virtual beam forming to achieve very high attenuation of the spoofing signal, so they are extremely hard to spoof: they always receive the real signal as the stronger one.


Yeah I doubt the engineers forgot about replay attacks when designing their military GPS.


They aren't very easy to avoid. Say you capture the signal from a satellite that is not visible to the receiver, or that is being jammed out; unless the receiver has an extremely high precision clock, you can just delay the rebroadcast and spoof away.


Everything involved in GPS requires all the nodes (both the senders and the receivers) to have "extremely high precision clocks."

That's the whole idea, really: you, the receiver, have a clock, and a map of where the various GPS satellites will be around the earth at given times. You "hear" the current time announced from three satellites (along with their station IDs), and compare those times to your clock to figure out the flight time of the data, and thus the distance to the satellites. Then you take the satellites' known positions on the map at the current time, plus the flight times, and triangulate your own position.

If one of the three times you've received is a "lie", then its relative time will correspond to an impossible distance for that GPS satellite to be relative to where the map says it should be (e.g. over the horizon relative to you), and relative to where the other two satellites that you heard from are. (Theoretically you could receive such signals—using reflectors, like HAMs do—but GPS discounts this possibility and just considers it invalid data.)
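As a toy illustration of the geometry described above, here's a 2-D worked example (real GPS solves in 3-D plus the receiver clock bias, which is why it needs a fourth satellite; positions and distances here are made up):

```python
# Given three known transmitter positions and the distances implied by
# signal flight times, intersect the circles. Subtracting the circle
# equations pairwise leaves a 2x2 linear system in (x, y).
def trilaterate(p1, d1, p2, d2, p3, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

If one of the distances is a "lie", the circles simply fail to meet at a consistent point, which is the inconsistency a receiver can check for.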


The vast majority of consumer GPS receivers take their clock from a quartz crystal. The accuracy will be somewhere between 10 parts per million (ppm) and 30 ppm. 5-15 minutes per year of drift - sounds pretty precise, right?

Except that when you're measuring the time of flight of signals travelling at close to the speed of light, 10 ppm of clock slew gives you about 3,000 m/s of range error.

That's why GPS receivers actually need to see 4 satellites to get an accurate fix; receivers actually calculate position in four dimensions - x, y, z and time.
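A quick back-of-the-envelope check of that 10 ppm figure:

```python
# A clock off by 10 parts per million, applied to signals travelling at
# the speed of light, accumulates range error at roughly 3 km for every
# second of elapsed time.
C = 299_792_458                 # speed of light, m/s
CLOCK_ERROR = 10e-6             # 10 ppm of drift
range_error_rate = CLOCK_ERROR * C   # metres of range error per second
print(round(range_error_rate))       # ≈ 2998 m/s
```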

Anyway, consequences:

1. The GPS receiver in your phone doesn't have an 'extremely high precision clock' by the standards of high precision clocks.

2. You could mount a replay attack against a receiver introducing error at up to 3km per second in such a way that it won't be readily detectable over other errors in the system.

3. Due to practical issues involved with such a replay attack, it'd probably be possible to crash a drone or misdirect it by a few hundred meters; but incredibly difficult to misdirect it to a distant country or anything like that.


GPS receivers do not have clocks. Atomic clocks are expensive and large; there is no way you get one in every device.

Only the satellites have atomic clocks. The receiver gets the time from the satellites. It basically compares the time delay between the satellites to determine position and time.


I didn't say they have atomic clocks. They do have clocks, though. Like most computers do. And they are high-precision, and are low-drift enough to predict the locations of satellites as long as they have been re-synchronized within the last few days or so.

Which, as you say, also happens by just observing the time signatures from the satellites. You need four visible satellites to determine your own time, though, whereas you only need three for position, so time isn't re-synched as often as position is calculated. The internal clock in the receiver allows the receiver to carry on tracking with only three time sources for a while.

But, to be clear on the topic of the parent discussion: I believe JDAM missiles (the ones that actually do use GPS) do have either an atomic clock source [more recently], or [formerly] have at least a high-precision monotonic clock source with low drift that is synchronized at point-of-launch by the clock on the bomber, which also has an HPC that was calibrated at its launch by a real atomic clock. They don't need to rely on external time-sync.

And modern ICBMs? Well, unless your jammer/spoofer can keep up with them, or is itself a satellite, you're only going to be able to affect them when they're on their descent course and making final adjustments. And, like this article says (https://www.technologyreview.com/s/423363/how-cruise-missile...), ICBMs have redundant aiming systems based on computer vision applied to either visual-spectrum or radar-based sensors.


Your final point conflates ICBMs with cruise missiles, which are very different things. I don't believe ICBMs use TERCOM or visual matching.


Nor do they use GPS, for that matter. ICBMs are completely inertial.


Yeah, I'm not terribly worried about military drones. But as a civilian pilot I worry a lot about whether the non-military-grade GPS in my airplane is telling me the truth.


Hopefully GPS isn't your sole source of information.

When I'm out scrambling I make sure that I carry multiple navigational aids so that I can cross-check. I also pay attention to terrain features before trips as well as during so that I can locate myself, or in the worst case make my way to the handrails that I've identified on my map.


> Hopefully GPS isn't your sole source of information.

Nowadays, in modern airplanes, it often is your only source of positional information when you're in instrument conditions (which is, of course, when it matters most).

> I carry multiple navigational aids

Yeah? Like what?


Compass, Altimeter, topographic maps, notes about the route that I've made. Even a watch can aid navigation if you can calculate your speed and see any terrain features.

If anything doesn't jibe with what I expect or what I'm seeing, I try to understand why.


Sure, when you're VFR it's pretty easy to notice that your GPS is flying you into a mountain. When you're IFR, not so much.


Maybe there is a market for civilian version of DSMAC?


Why don't planes just carry a good INS? In today's world, it shouldn't be that bad.


Money. INS is still very expensive relative to GPS, and GPS is pretty frickin' reliable 99.99% of the time.


Would commodity six-axis sensors have enough precision to do a reasonable job at INS? Buy a pack of iPhones and use some kind of quorum protocol to reduce the risk of one going bad?


> Would commodity six axis sensors have enough precision to do a reasonable job at INS?

I don't know, but my guess would be no. If it were that easy someone would have done it already.

But who knows? Maybe there's an untapped opportunity here. Why don't you buy one of these:

https://vetco.net/products/9-axis-inertial-navigation-module...

and take it with you the next time you fly (or drive) and see how accurate it is?


No, they won't, not at all.


Atomic clocks are $2000 now. SA.45, specifically. And they're smaller.


Great! That is a big difference. Cheap enough for every jet aircraft, most turboprops, and some of the most expensive guided munitions (cruise missiles, or nukes like B.61 Mod 12 for sure).


This probably leads to a great user experience.

However, if this catches on, SMS sniffing over the air is going to really pick up! :p SMS messages are often carried over GSM control channels, generally unencrypted over the air.

Even when they are encrypted, it's only A5/1 (already broken).


Just have the login form submit a token that is associated with the SMS token so you can verify the person sitting in front of the login form is the person who also got the SMS code. Similar to common CSRF protection techniques.

For example, the SMS contains a short token. The login form has a (non-visible) 128-bit random guid. When the form is submitted, both tokens are sent to the server and the server verifies that they are both correct.

It doesn't matter how secure the SMS is, it's only one part of the secret. If it's intercepted, the attacker won't be able to guess the guid. Alternatively, if someone is at a login form and trying to guess the short code, just limit each guid to a small number of attempts before expiring.
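A hedged sketch of the scheme described above (the storage, token sizes, and attempt limit are illustrative, not any real API):

```python
import secrets

# The server binds a short SMS code to an unguessable hidden form token
# and only accepts the pair together, with a small attempt budget.
PENDING = {}   # form_token -> (sms_code, attempts_left)

def start_login():
    form_token = secrets.token_hex(16)            # 128-bit hidden form value
    sms_code = f"{secrets.randbelow(10**6):06d}"  # short code sent by SMS
    PENDING[form_token] = (sms_code, 3)
    return form_token, sms_code

def verify(form_token, submitted_code):
    if form_token not in PENDING:
        return False
    sms_code, attempts = PENDING[form_token]
    ok = secrets.compare_digest(sms_code, submitted_code)
    if ok or attempts <= 1:
        del PENDING[form_token]     # one-time use; expire after the budget
    else:
        PENDING[form_token] = (sms_code, attempts - 1)
    return ok
```

An attacker who intercepts the SMS still can't guess the 128-bit form token, and an attacker at the form can't brute-force the short code past the attempt budget.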


Recall that Visa and Chase are major investors. It's rather difficult to "eat their lunch" with them on your board.


I work for a company in this space, and we are in the same boat (major credit card compan[y/ies] investors & on the board). And there is a lot of "we're sticking it to the credit card companies" talk here, which I find hilarious. We pose exactly 0 threat to the credit card industry. If we work out and do extraordinarily well--guess who makes a bunch of money? Credit card companies! If we go bust, we haven't hurt them in the least.

It's funny how we trash-talk the credit cards. Credit card companies are going NOWHERE. If they ditch their plastic cards, that's one thing, but extending credit to people will be a business for a very, very long time. All of the mobile-payments companies (mine included) that I am aware of are still supporting credit cards--they may do fancy footwork to lower their own effective rates, and give some of that savings to their customers, but ultimately, credit card companies are still getting their cut.


As one of the volunteers operating the existing Market St. WiFi and the housing project WiFi, I can say that this is definitely being discussed and evaluated, but I don't have a lot of hope for the Department of Technology making anything happen.

They have deeply-rooted relationships with large telcos and utilities that have granted them good deals on easements to lay their existing fiber and copper paths. If the city started offering competing services for money, it would throw those relationships in jeopardy.

It's really too bad, as a lot of the dark fiber resources are already in place to build a decent backbone that could support radial paths out different neighborhoods. However, there's very little technical clue (if you're a competent network engineer in the Bay Area, why would you work for a city? ick.) and political capital/gumption towards making this happen.

The layer 0 - 7 stuff is easy. It's layer 8 and above (money, politics, humans) that make this hard to accomplish.

If you're an SF resident, call or write your supervisor. Let your opinion be heard and demand proper infrastructure!

Fiber is becoming the new roads; how you get your product to market. Municipalities need to step up and get building, because the big utilities and ISPs sure as hell aren't.


A similar sort of thing happened in New Zealand: ISPs ruled and weren't doing much to upgrade internet speeds. Then the government stepped in, said you've got to get your act together, and forced Telecom (which has now been split into multiple companies, Chorus and Spark I believe) to build a nationwide fibre network. I believe it's meant to be finished by the end of next year, but it's actually pretty cool how most people who live centrally can get 100 Mbps (and now 200 Mbps) fibre from virtually any ISP that offers it.

People say the NZ government is pretty useless (which it still is, depending on the topic/area), but I was surprised they actually got it together and got something right for once.


Well, let's just say it's an architecture Square knows well. :)


That has less to do with it than the fact that it was the smallest ISA I could find that GCC would readily compile down to.


I would have loved it to be some old ARM ISA, to use it as a test case for Avatar[0]. On the same topic, the FIE paper may be an interesting read for MSP430 lovers[1] (but it needs source code for symbolic execution, so it doesn't directly apply here).

[0] http://www.s3.eurecom.fr/tools/avatar/

[1] https://www.usenix.org/conference/usenixsecurity13/technical...


In particular, Square's credit card readers use an MSP430 chip to encrypt the stripe data before passing it on to the phone.

Their first credit card readers were entirely analog devices, which were very easy to use to skim cards.

Hopefully the latest batches have per-device unique keys (based on some centrally-known KDF) so a compromise of one doesn't re-enable such an exploit.
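For illustration, here's a sketch of the general per-device-key idea (not Square's actual scheme; the master secret and serial format are hypothetical): each reader's key is derived from a central master secret and the device serial, so extracting one device's key reveals nothing about the others, while the backend can re-derive any key on demand.

```python
import hmac, hashlib

MASTER_SECRET = b"example-master-secret"   # held only by the backend (illustrative)

def device_key(serial: str) -> bytes:
    # HMAC-SHA256 as a simple KDF keyed by the master secret
    return hmac.new(MASTER_SECRET, serial.encode(), hashlib.sha256).digest()
```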


Just so I can be super clear here: none of the code in this challenge has anything whatsoever to do with anything Square ships. We deliberately made things less realistic to make the levels more fun, and easier to ramp up with.


> Hopefully the latest batches have per-device unique keys (based on some centrally-known KDF) so a compromise of one doesn't re-enable such an exploit.

Yes, that's how it works.


Jekyll? There's not much more than a repo and text files on the admin side, but enables super-fast blogging on light hardware.

