If you run across a great HN comment or subthread, please tell us at hn@ycombinator.com so we can add it here!

I've told this story before on HN, but my biz partner at ArenaNet, Mike O'Brien (creator of battle.net), wrote a system in Guild Wars circa 2004 that detected bitflips as part of our bug triage process, because we'd regularly get bug reports from game clients that made no sense.

Every frame (i.e. ~60FPS) Guild Wars would allocate random memory, run math-heavy computations, and compare the results with a table of known values. Around 1 out of 1000 computers would fail this test!

We'd save the test result to the registry and include the result in automated bug reports.
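The idea is simple enough to sketch. A minimal, hypothetical version of such a check (illustrative names and constants, not ArenaNet's actual code) might look like:

```python
import random

def stress_check(block_kib=64, seed=0xC0FFEE):
    """Fill a freshly allocated buffer deterministically, run a chain of
    integer math over it, and return a checksum. On healthy hardware the
    result is always the same; a bitflip in RAM or a miscomputation in
    the CPU changes it."""
    rng = random.Random(seed)
    buf = bytearray(rng.randrange(256) for _ in range(block_kib * 1024))
    checksum = 0
    for b in buf:
        checksum = (checksum * 31 + b) & 0xFFFFFFFF
    return checksum

# Recorded once on known-good hardware, shipped as a table of known values.
EXPECTED = stress_check()

def hardware_ok():
    # Run per frame; a mismatch gets saved and attached to bug reports.
    return stress_check() == EXPECTED
```

Allocating a fresh buffer each run is what gives the test coverage of different physical memory, rather than re-testing the same healthy pages.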

The common causes we discovered for the problem were:

- overclocked CPU

- bad memory wait-state configuration

- underpowered power supply

- overheating due to under-specced cooling fans or dusty intakes

These problems occurred because Guild Wars was rendering outdoor terrain, and so pushed a lot of polygons compared to many other 3d games of that era (which can clip extensively using binary-space partitioning, portals, etc. that don't work so well for outdoor stuff). So the game caused computers to run hot.

Several years later I learned that Dell computers had higher-than-reasonable analog component failure rates because Dell sourced the absolute cheapest parts for their computers; I expect that was also a cause.

And then a few more years on I learned about RowHammer attacks on memory, which was likely another cause -- the math computations we used were designed to hit a memory row quite frequently.

Sometimes I'm amazed that computers even work at all!

Incidentally, my contribution to all this was to write code to launch the browser upon test-failure, and load up a web page telling players to clean out their dusty computer fan-intakes.


In the 1990s, in the UK, my secondary school English teacher, who had Shakespearian actor vibes and wore dark tweed trousers and a plain white shirt (imagine Patrick Stewart, if you will), brought this poem to life in my class by vividly reenacting a soldier dying from mustard gas poisoning: falling onto a desk and flailing about in front of the stunned students sitting at it. I've never forgotten the closing line since.

I attached a generator with some supercaps and an inverter to a stationary bicycle a few years ago, and even though I mostly use it as a way to feel less guilty watching Youtube videos, it does give me a quite literal feel for some of the items on the lower end of the scale.

- Anything even halfway approaching a toaster or something with a heater in it is essentially impossible (yes, I know about that one video).

- A vacuum cleaner can be run for about 30 seconds every couple minutes.

- LED lights are really good, you can charge up the caps for a minute and then get some minutes of light without pedaling.

- Maybe I could keep pace with a fridge, but not for a whole day.

- I can do a 3D printer with the heated bed turned off, but you have to keep pedaling for the entire print duration, so you probably wouldn't want to do a 4 hour print. I have a benchy made on 100% human power.

- A laptop and a medium sized floor fan is what I typically run most days.

- A modern laptop alone, with the battery removed and playing a video is "too easy", as is a few LED bulbs or a CFL. An incandescent isn't difficult but why would you?

- A cellphone you could probably run in your sleep

Also gives a good perspective on how much better power plants are at this than me. All I've made in 4 years could be made by my local one in about 10 seconds, and cost a few dollars.
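That comparison can be sanity-checked with some back-of-the-envelope arithmetic; every figure below is an assumption (roughly 150 W sustained, an hour a day for 4 years, and a modest ~80 MW local plant), not a measurement from the comment:

```python
# Back-of-the-envelope arithmetic; every figure here is an assumption.
human_watts = 150                        # sustained output of a fit cyclist
hours = 1 * 365 * 4                      # one hour a day for four years
human_kwh = human_watts * hours / 1000   # ~219 kWh total

plant_watts = 80e6                       # a modest local power plant
seconds = human_kwh * 3.6e6 / plant_watts  # kWh -> joules, then / watts
print(f"~{human_kwh:.0f} kWh over 4 years; the plant makes that in ~{seconds:.0f} s")
```

With those assumed numbers, four years of daily pedaling comes to about 220 kWh, which an 80 MW plant produces in roughly ten seconds.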


Author of LFortran here. The historical answer is that both LFortran and Flang started the same year, possibly the very same month (November 2017), and for a while we didn't know about each other. After that both teams looked at the other compiler and didn't think it could do what they wanted, so continued on their current endeavor. We tried to collaborate on several fronts, but it's hard in practice, because the compiler internals are different.

I can only talk about my own motivation to continue developing and delivering LFortran. Flang is great, but on its own I do not think it will be enough to fix Fortran. What I want as a user is a compiler that is fast to compile itself (under 30s for LFortran on my Apple M4, and even that is at least 10x too long for me, but we would need to switch from C++ to C, which we might later), that is very easy to contribute to, that can compile Fortran codes as fast as possible (LLVM is unfortunately the bottleneck here, so we are also developing a custom backend that does not use LLVM that is 10x faster), that has good runtime performance (LLVM is great here), that can be interactive (runs in Jupyter notebooks), that creates lean (small) binaries, that fully runs in the browser (both the compiler and the generated code), that has various extensions that users have been asking for, etc. The list is long.

Finally, I have not seen Fortran users complaining that there is more than one compiler. On the contrary, everybody seems very excited that they will soon have several independent high-quality open source compilers. I think it is essential for a healthy language ecosystem to have many good compilers.


I remember that. A few weeks later I ran a script to count all the websites on the Internet... 324 at that time.

No problem. You might be better off moving back, yes.

My understanding of immix-style collection is that it divides the heap into blocks and lines. A block is only compacted/reused if every object in it is dead, and so if you mix lifetimes (i.e. lots of short-lived requests, medium-life sessions, long-life db connections/caches/interned symbols) then you tend to fill up blocks with a mix of short and long-lived objects as users log in and make requests.

When the requests get de-allocated the session remains (because the user closed the tab but didn't log out, for example, so the session is still valid) and so you end up with a bunch of blocks that are partially occupied by long-lived objects, and this is what drives fragmentation because live objects don't get moved/compacted/de-fragged very often. Eventually you fill up your entire heap with partially-allocated blocks and there is no single contiguous span of memory large enough to fit a new allocation and the allocator shits its pants.

So if that's what the HN backend looks like architecturally (mixed lifetimes), then you'd probably benefit from the old GC because when it collects, it copies all live objects into new memory and you get defragmentation "for free" as a byproduct. Obviously it's doing more writing so pauses can be more pronounced, but I feel like for a webapp that might be a good trade-off.

Alternatively you can allocate into dedicated arenas based on lifetime. That might be the best solution, at the expense of more engineering. Profiling and testing would tell you for sure.
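The fragmentation dynamic is easy to see in a toy model (my illustration, not a real immix heap): even a small fraction of long-lived objects scattered uniformly will pin most blocks, because a block is only reclaimable when every object in it is dead.

```python
import random

BLOCKS, SLOTS = 100, 32       # heap of 100 blocks, 32 object slots each
LONG_LIVED_FRACTION = 0.05    # 5% of allocations are sessions/caches

rng = random.Random(42)
# True marks a long-lived object; everything else dies at the next GC.
heap = [[rng.random() < LONG_LIVED_FRACTION for _ in range(SLOTS)]
        for _ in range(BLOCKS)]

# After collection, a block can be reused only if no slot survived.
free_blocks = sum(1 for block in heap if not any(block))
pinned = BLOCKS - free_blocks
print(f"{pinned}/{BLOCKS} blocks pinned by at least one survivor")
```

With these numbers only 5% of objects survive, yet roughly 1 - 0.95^32 ≈ 80% of blocks stay pinned, which is the pattern described above.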


As someone who worked on kindle power consumption many years ago: One of the (by far) biggest consumers of power is the WiFi connection. It has to wake up and respond to the AP in order to not get disconnected every x seconds.

Off the top of my head, I think 'on' average power consumption was ~700uA without wifi, and about 1.5mA+/- with Wifi. This is from over a decade ago, so my memory is fuzzy though...

Obviously, page changes used relatively large amounts of power. I don't recall the exact amounts, but it was 100s of mA for seconds.

There is also an "every x pages, do a full screen refresh (black to white)" to fix up the ghosting issue that the article writer saw.
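Plugging those (fuzzy, decade-old) current figures into a quick standby-life estimate, with an assumed 1500 mAh battery typical of e-readers of that era:

```python
def idle_days(battery_mah, idle_ma):
    """Naive battery-life estimate: capacity / average draw, in days.
    Ignores page turns, self-discharge, and conversion losses."""
    return battery_mah / idle_ma / 24

# Currents are the comment's from-memory figures; the battery capacity
# is an assumption, not a Kindle spec.
for label, ma in [("wifi off", 0.7), ("wifi on", 1.5)]:
    print(f"{label}: ~{idle_days(1500, ma):.0f} days idle")
```

Even this crude model shows why leaving WiFi on can cut standby life by half or more.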


Older HN users may recall when busy discussions had comments split across several pages. That was because the Arc [1] language HN runs on was originally hosted on top of Racket [2], and the implementation was too slow to handle giant discussions at HN scale. Around September 2024, dang et al. finished porting Arc to SBCL, and performance increased so much that even the largest discussions no longer need splitting. The server has also been unresponsive or restarting far less frequently since these changes, despite continued growth in traffic and comments:

http://hackertimes.com/item?id=41679215

[1] https://paulgraham.com/arc.html

[2] https://racket-lang.org/


I'm not sure if I'm the one to blame for this or not, but the earliest reference to ".gitkeep" I can find online is my 2010 answer on Stack Overflow: https://stackoverflow.com/a/4250082/28422

If this is all my fault, I'm sorry.


Way back in ~2008 I wrote the Newton Virus https://www.everita.com/how-the-newton-virus-was-made + https://www.youtube.com/watch?v=eh75j6OHhRc (sorry for the broken images, need to update that site). Between that and using a hidden API to take screenshots of each individual element on your desktop (from icons, to taskbar, to windows) the effect was pretty believable. One of the most fun (and frustrating) projects I ever worked on.

Specifically, Overload was made by Mike Kulas and Matt Toschlog, who were the original Descent developers. There were also major contributions from people like Dan Wentz (who worked on Descent 3) and from people who spent a lot of time playing the original game, like me and my wife (our 3 sons are all named for friends we know from Descent.)

The culture of my home island of Mallorca has a pretty deep link to slings. The ancient Greeks and Carthaginians both named us after our slingers, and later on we became a key Roman foothold in the Punic wars, partly because of the slingers, who became part of an elite unit of shock troops in the Roman Empire.

It was our weapon of choice for defence, protecting us from pirates and would-be conquerors, as well as a farming tool: shepherds used both slings and dogs to herd and protect their animals.

I find it pretty fascinating. I'm also a terrible shot with a sling; you have to try it to really understand how hard it is to aim when swinging a rock at something.


I have a Ph.D. in a field of mathematics in which complex numbers are fundamental, but I have a real philosophical problem with complex numbers. In particular, they arose historically as a tool for solving polynomial equations. Is this the shadow of something natural that we just couldn't see, or just a convenience?

As the "evidence" piles up, in further mathematics, physics, and the interactions of the two, I still never got to the point at the core where I thought complex numbers were a certain fundamental concept, or just a convenient tool for expressing and calculating a variety of things. It's more than just a coincidence, for sure, but the philosophical part of my mind is not at ease with it.

I doubt anyone could make a reply to this comment that would make me feel any better about it. Indeed, I believe real numbers to be completely natural, but far greater mathematicians than I found them objectionable only a hundred years ago, and demonstrated that mathematics is rich and nuanced even when you assume that they don't exist in the form we think of them today.


> “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? [Paraphrasing]: interrupts, 802.11ax modulation scheme, QAM, memory models, garbage collection, field effect transistors...

To a reasonable degree, yes, I can. I am also probably an outlier, and the product of various careers, with a small dose of autism sprinkled in. My first career was as a Submarine Nuclear Electronics Technician / Reactor Operator in the U.S. Navy. As part of that training curriculum, I was taught electronics theory, troubleshooting, and repair, which begins with "these are electrons" and ends with "you can now troubleshoot a VMEbus [0] Motorola 68000-based system down to the component level." I also later went back to teach at that school, and rewrote the 68000 training curriculum to use the Intel 386 (progress, eh?).

Additionally, all submariners are required to undergo an oral board before being qualified, and analogous questions like that are extremely common, e.g. "I am a drop of seawater. How do I turn the light on in your rack?" To answer that question, you end up drawing (from memory) an enormous amount of systems and connecting them together, replete with the correct valve numbers and electrical buses, as well as explaining how all of them work, and going down various rabbit holes as the board members see fit, like the throttling characteristics of a gate valve (sub-optimal). If it's written down somewhere, or can be derived, it's fair game. And like TFA's discussion about Brendan Gregg's practice of finding someone's knowledge limit, the board members will not stop until they find something you don't know - at which point you are required to find it out, and get back to them.

When I got into tech, I applied this same mindset. If I don't know something, I find out. I read docs, I read man pages, I test assumptions, I tinker, I experiment. This has served me well over the years, with seemingly random knowledge surfacing during an incident, or when troubleshooting. I usually don't remember all of it, but I remember enough to find the source docs again and refresh my memory.

0: https://en.wikipedia.org/wiki/VMEbus


Excellent. I’ll look back at it when I find time to make progress on my own pet project: a custom numeral notation.

Specification: each digit n from 0 to 9 is drawn with n+1 strokes, using only horizontal, vertical, or bottom-left-to-top-right orientations. The digits also have distinctive traits that let them be identified from just the very top or very bottom part. So 0 looks like /, for example.

I also went above base 10 and made enough glyphs to cover a sexagesimal base. The constraints on the drawing are looser there, so to go up to 20 we had a single "\". Incidentally and funnily enough, that makes the 10th glyph look like an X. Then to get up to 40 glyphs, we simply square the previous 20. And the plan to reach 60 glyphs is to have a circled variation.

I’m not versed in the art of font crafting, but I would love to find someone to collaborate with on that one.

Here is a draft with the first steps for those interested:

https://commons.wikimedia.org/wiki/File:Alternative_cipher_n...


I designed and 3D-printed my own slide rule to help me play Balatro!

Balatro is a roguelike survival game where you need to multiply "chips" and "mult" together to meet a requirement each round. You get three chances to draft enough resources to survive. I designed my own slide rule to help with the mental multiplication - most of the fun of the game comes from the mechanics being slightly obscured from the player.

Since I designed this slide rule myself, I was able to make a couple of unconventional design choices that fit my needs. For instance, mine has three octaves, so it can represent numbers in the ones, thousands, or millions range; no need to track arbitrary powers of ten. Since it's a rotary rule, it wraps around. E.g. 353×24 shows on the device as 8.47, so you can read it as 8.47 thousand.
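The wrap-around reading amounts to taking the product's logarithm modulo the number of decades on the wheel. A sketch of the arithmetic (my reconstruction of the idea, not code from the linked repo):

```python
import math

def rotary_read(a, b, decades=3):
    """Position on a circular three-decade log scale: multiply via logs,
    then wrap modulo the wheel's span. The user supplies the band
    (ones / thousands / millions) from context."""
    return 10 ** ((math.log10(a) + math.log10(b)) % decades)

print(f"{rotary_read(353, 24):.2f}")  # 353*24 = 8472, reads as 8.47 (thousand)
```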

Holding a physical object in my hands while playing helps more than I thought it would. Should I take a card that increases chips by 600 or increases mult by 1.3×? Do I need to take a card to clear the blind in the short term, or do I have enough resources to draft a slower card that will scale better over time? Even just looking at how densely packed the marks are on the "Chips" side vs the "Mult" side of the device gives a visceral physical sense of what my build needs to focus on.

Pictures and .STL: https://www.printables.com/model/1026662-jimbos-rotary-slide...

Github repository: https://github.com/gcr/balatro-slide-rule

The actual plotting code used Marimo notebooks, which host Python in your browser via WASM. Take a look here: https://marimo.app/l/4i15d7

I entered it in Printables’ educational tools competition but the other entries were cooler. Maybe HN might like it. :-)


An excellent question, and a totally reasonable thing not to know (congrats on being one of today's lucky 10,000!)

I'm speaking from the perspective of US coins because that's what I specialize in but this generally applies to coins all over the world as well:

Prior to (and including) 1964, US 10c, 25c, 50c, and (when they were made) $1 coins were made of 90% silver. We made A LOT of these, so in terms of outright rarity, most are not rare. Today they're referred to as "junk silver" because in terms of collectibility they're junk, but the 90% silver content means there's some inherent precious metal value (as of this moment on Jan 30, 2026, they have approximately 60x their face value in silver content, e.g. $6, $15, $30, and $60 in silver respectively.)

So that's their basal value that fluctuates with the silver market. But the next layer is actual rarity / collectibility -- if a given coin is desirable enough that it surpasses its metal content, you get a different set of values.

Now to your actual question: Do they get smelted/melted down? The answer is...sometimes. They trade somewhat like financial instruments, based on the assumption that you could melt them down (and there's a cost to doing so), so that's how people value the various silver coins. In reality, there's usually enough demand from people who want to hold physical silver in various forms that they don't actually need to be melted down.
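The melt-value arithmetic behind that multiplier is straightforward. A sketch (0.715 ozt of silver per $1 of face value is the conventional junk-silver figure after circulation wear; the spot price here is an assumed example, not a quote):

```python
OZT_PER_DOLLAR_FACE = 0.715   # conventional junk-silver content after wear
SPOT = 84.0                   # assumed silver price, $/troy oz

def melt_value(face_dollars, spot_per_ozt=SPOT):
    # Face value -> silver content -> dollar value at spot.
    return face_dollars * OZT_PER_DOLLAR_FACE * spot_per_ozt

for face in (0.10, 0.25, 0.50, 1.00):
    m = melt_value(face)
    print(f"${face:.2f} face -> ${m:.2f} melt ({m / face:.0f}x face)")
```

At the assumed spot price, every denomination works out to the same roughly 60x multiple of face value, which is why junk silver trades by face value rather than coin by coin.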

There's obviously a lot more to it, but that's the 5c version ;)


I had the privilege of working with Don back at JPL at the time he invented the rocker bogie. (I wrote the software for the first prototype with a computer on board.) Not only was he brilliant, he was also a really nice guy. I didn't appreciate at the time how rare that combination of traits is among humans.

To my astonishment, it turns out Don doesn't have a Wikipedia page (though the rocker bogie suspension does).


Winamp, it really whips the LLMa's ass

That is true of the press, weld, and paint stages, which give you a chassis and nothing else. It is absolutely not lights-out for "final assembly," which despite the name is where a massive amount of the car comes together.

Robots are great at the bulk movement required for sticking sheet metal into huge stamps, as well as repeatably welding the output of those stamps together. Early paint stages happen by dipping the whole chassis, and later stages obviously benefit highly from environmental control (the paint section is usually restricted to certain staff).

But with this big painted chassis you still need to mount the engine/transmission, the brake and suspension assemblies need installing, lots of connectors need plugging in for ABS, and supporting all the connectors that will need plugging in is a lot of cabling that needs routing around the chassis. These tasks are very difficult for robots, so they tend to be done by people with mechanical assists, e.g. a special hoisting system that takes the weight of the engine/trans while the operators (usually two on a stage like this; this all happens on a rolling assembly line) drag it into place and do the bolting.

The trim line is also huge: insert all the floppy roof liners; install the squishy plastic dashboard, the seats, carpets, and plastic door trim; plug in all your speakers and infotainment stuff. Again, the output of the automated stages is literally just the shell of a car, and robots are extremely bad at precisely clipping together soft-touch plastics or connecting tiny cables. Windshield install happens here too; again, it's mechanically assisted for worker ergonomics but far from automated.

Each of these subassemblies can also be very complex and require lots of manual work, but that usually happens at OEM factories, not at the assembly factory. Automation in these staffed areas is mostly the AGVs, which follow lines on the floor to automatically deliver kanban boxes, QR-tagged (the origin of the QR code, fun fact) to ensure JIT delivery of the parts needed for each pitch.

It is far from lights-out even in the most modern assembly plant, and I think it will be a long time until that is true. Consider the amount of poka-yoking that goes into things like connector design so there is an audible "click" when something is properly inserted: making a robot able to perform that task at anywhere near the quality of even a young child will take vast amounts of advancement in artificial intelligence and sensing. These are not particularly skilled jobs, but the robotics skill required is an order of magnitude more than we can accomplish with today's technology.


Just because I don't often get a chance to talk about this, I'll mention that there was a malfunction/accident/bug that caused what you might call spoofed signals to go out around Long Island and New York. Really interesting case where it seems that an FAA system wasn't handling magnetic declination correctly, which led to it generating false TIS-B targets that were rotated 13 degrees from real aircraft positions, from the radar antenna point of view: https://x.com/lemonodor/status/1508505542423064578

(TIS-B is a system that broadcasts ADS-B-like signals for aircraft that are being tracked by radar but either don't have ADS-B Out or otherwise might not be picked up by other aircraft with ADS-B In, e.g. maybe they're at a low altitude.)
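To get a feel for the failure mode, here is a toy sketch (my illustration, with assumed flat-plane range/bearing geometry and the roughly 13° declination of the New York area): mishandling magnetic vs. true bearings rotates every target around the radar site.

```python
import math

def target_xy(range_nm, bearing_deg):
    """Radar target position relative to the antenna: (east, north),
    with bearing measured clockwise from north."""
    rad = math.radians(bearing_deg)
    return (range_nm * math.sin(rad), range_nm * math.cos(rad))

declination = 13.0            # approx. NYC-area magnetic declination, degrees
real = target_xy(10.0, 90.0)  # aircraft 10 nm due east of the antenna
# If the system applies the bearing in the wrong reference (magnetic vs.
# true), the broadcast target is rotated about the antenna:
spoofed = target_xy(10.0, 90.0 + declination)

offset = math.hypot(real[0] - spoofed[0], real[1] - spoofed[1])
print(f"false target displaced ~{offset:.1f} nm")
```

At 10 nm range a 13° rotation displaces the false target by over 2 nm, which is easily enough to show a phantom aircraft well away from the real one.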

There have been a couple other incidents with the TIS-B system. E.g. this apparent test near Dallas in 2022 that generated dozens of false targets in an interesting pattern: https://x.com/lemonodor/status/1481712428932997122 There was a similar incident around LAX several months later.


For what it’s worth, Philo Farnsworth and John Logie Baird were friendly with each other. I was lucky to know Philo’s wife Pem very well in the last part of her life, and she spoke highly of Baird as a person.

David Sarnoff and RCA were an entirely different matter, of course…


Back in 2011 (!) I went to a wedding in Denia, a medium-sized town on the Mediterranean coast of Spain.

The day after the wedding we went to a restaurant by the sea to have some hangover paella, part of the wedding celebrations. Weddings in Spain are usually 2- or 3-day affairs. Anyway, since we were travelling back to Madrid later that day, we left our luggage in the trunk of the car, not visible from the outside. We locked the doors and off we went for paella.

Or so we thought: some bad guys were jamming the car key frequencies, so the car didn’t actually lock. They hit the jackpot with my bag: my Canon IXUS camera (I loved that camera), my Kindle 3G, my MacBook Pro and my iPad… with 3G.

When we found out later that day we went to the local Guardia Civil and told them the story. I opened “Find My” on my phone and told them exactly where the bad guys were, all the way in Valencia already.

You should have seen the face of the two-days-shy-from-retiring officer when I told him that my iPad was connected to the internet and broadcasting its location continuously. Remember this was 2011.

So they sent a police car to check out the area and found a suspiciously hot car. They noted it down and did some old-fashioned policing for the rest of the summer. Two months later I got a call: they had found the thieves and had waited for them to continue stealing with the same MO, until they had a large enough stash that they could be charged with a more serious crime.

They had found my bag, my MacBook and my iPad. The smaller items had already been sold on the black market.

It still is one of my favourite hacker stories. I went to court as a witness and retold the whole thing. The look on the judge’s face was also priceless.


Old scanners were SCSI, which made me wonder if you could use them as boot devices, if you could stuff the scanner driver and OCR software into the BIOS. Might be easier now that we have UEFI.

Same here. Farmer now, former network engineer and software project lead, but I stopped programming almost 20 years ago.

Now I build all sorts of apps for my farm and the organizations I volunteer for. I can pound out an app for tracking sample locations for our forage association's soil sample truck, another for moisture monitoring, or a fleet task/calendar/maintenance app in hours, and iterate on them when I think of features.

And git was brand new when I left the industry, so I only started using it recently to any extent, and holy hell, is it ever awesome!

I'm finally able to build all the ideas I come up with when I'm sitting in a tractor and the GPS is steering.

Seriously exciting. I have a hard time getting enough sleep because I hammer away on new ideas I can't tear myself away from.


Something that may be interesting for readers of this thread: this project was possible only once I started telling Opus that it needed to keep a file with all the implementation notes, accumulating everything we discovered during the development process. The file also had clear instructions to be kept updated, and to be processed ASAP after context compaction. This kinda enabled Opus to do such a big coding task in a reasonable amount of time without losing track. Check the file IMPLEMENTATION_NOTES.md in the GitHub repo for more info.

They have come a very long way since the late 1990s when I was working there as a sysadmin and the data center was a couple of racks plus a tape robot in a back room of the Presidio office with an alarmingly slanted floor. The tape robot vendor had to come out and recalibrate the tape drives more often than I might have wanted.

I found out my crimson-bellied conure is laying an egg today! She's nesting in some towels now, chirping away while she works on laying it.

Having an egg is relatively hard on parrots. I've given her lots of food and warmth to prepare. She is comically hungry -- she's usually not such a big eater, but she's happy today to be scarfing down her apple slices, fruit pellets, and safflower seeds.

She usually sleeps at the bottom of her cage, beneath a towel I put down for her. It's already unusual for parrots! But tonight she has made quite a nest with her towel: It's folded in half like usual, but she has nuzzled her way between the fold, so she has the towel underneath and on top of her. It's super cute.

I'm treating her with delicacy but she is determined to be a wild child of a bird. She's still flying around during the day and moving around plenty. I don't think I would be so confident if I had an egg like that inside me.

She has a stone perch that she likes to nibble on when she's working on an egg. I've wondered if it is some innate need to nourish herself with calcium, or if it's stress relief :)

So that's my night. Sitting outside of the metaphorical delivery ward with a metaphorical cigar, making sure she lays this egg that isn't even fertile to begin with! Birds :)


To deploy a 2nd-hand Cray-1 at UQ, we had to raise the ex-IBM 3033 floor; it turned out the bend radius for Fluorinert was NOT the same as for a water-cooled machine. We also installed a voltage re-generator, which is basically a huge spinning mass: you convert Australian volts to DC, spin the machine, and take off re-generated high-frequency volts for the Cray, as well as 110 V at the right Hz for boring stuff alongside. The main bit ran off something like 400 Hz power; for some reason the CPU needed faster mains volts going in.

The Fluorinert tank has a ball valve, like a toilet cistern. We hung a plastic lobster in ours, because we called the Cray "Yabbie" (a Queensland freshwater crayfish).

That re-generator's circuit breakers were... touchy. The installation engineer nearly wet his trousers flipping one on; the spark-bang was immense. Brown-trouser moment.

The front-end access was via Unisys X11 Unix terminals. They were built like a brick shithouse (to use the Australianism) but were nice machines. I did the acceptance testing; it included running up X11 and compiling and running the largest Conway's Game of Life design I could find on the net. Seemed to run well.

We got the machine as a tax offset for a large Boeing purchase by Australian defence. At end of life, one of the operators got the love-seat and turned it into a wardrobe in his bedroom.

Another, more boring Cray got installed at the Department of Primary Industries (Qld government) to do crop and weather modelling. The post-Cray-1 stuff was... more ordinary. The circular compute unit was a moment in time.

(I think I've posted most of this to HN before)


I worked on Finder/TimeMachine/Spotlight/iOS at Apple from 2000-2007. I worked closely with Bas Ording, Stephen Lemay, Marcel van Os, Imran Chaudry, Don Lindsey and Greg Christie. I have no experience with any of the designers who arrived in the post-Steve era. During my time, Jony Ive didn't figure prominently in the UI design, although echoes of his industrial design appeared in various ways in the graphic design of the widgets. Kevin Tiene and Scott Forstall had more influence, for better or worse: extreme skeuomorphism, for example.

The UX group would present work to Steve J. every Thursday, and Steve quickly passed judgement, often harshly and without a lot of feedback, leading to even longer meetings afterward to try and determine course corrections. Steve J. and Bas were on the same wavelength, and a lot of what Bas would show had been worked on directly with Steve beforehand. Other things would be presented for the first time, and Steve could be pretty harsh. Don, Greg, Scott, and Kevin would push back and get abused, but they took the abuse and could make inroads.

Here is my snapshot of Stephen from the time. He presented the UI ideas for the initial tabbed-window interface in Safari. He had multiple design ideas, and Steve dismissed them quickly and harshly. My recollection is that Steve said something like: "No. Next. Worse. Next. Even worse. Next. No. Why don't you come back next week with something better?" Stephen didn't push back or say much, just went "OK," and that was that. I think Greg was the team manager at the time and pushed Steve for more input, and maybe got some. This was my general observation of how Stephen was over 20 years ago.

I am skeptical and doubtful about Stephen's ability to make a change unless he is facilitated greatly by someone else or has somehow changed drastically. The fact that he has been on the team while the general opinion of Apple UX quality has degraded to the current point of the Tahoe disaster is telling. Several team members paid dearly in emotional abuse under Steve and decided to leave rather than deal with the environment after Steve's death. Stephen is an SJ-era original and should have been able to push hard against what many of us perceive as very poor decisions. He either agreed with those decisions, or did not and chose to go with the flow and enjoy the benefits of working at Apple. This is fine, I guess. Many people are just fine going with the flow and not rocking the boat. It may be even easier when you have Apple-level comp and benefits.

My opinion: unless Stephen gets a very strong push from other forces, I don't see that he has the will or fortitude to make the changes that he himself has approved in one way or another. Who will push him? Tim Cook, Craig Federighi, Eddy Cue, Phil Schiller? The perceived mess of Tahoe happened on the watch of all of these Apple leaders.

