Hacker News | past | comments | ask | show | jobs | submit | kiproping's comments

What do you currently use for JSON and batch? I was doing some analysis, and my results show that gpt-oss-120b (non-batch via OpenRouter) is currently the best for my use case, better than the gemini-flash models (batch on Google). How is your experience?

Everything I do is JSON, and of course you want that JSON in a specific format so that you can process it further.
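Not from any poster's pipeline, but the "JSON in a specific format" requirement usually comes down to a validation step like this minimal sketch (the `parse_model_json` helper and the field names are hypothetical):

```python
import json

def parse_model_json(raw: str, required: set) -> dict:
    """Parse a model's JSON reply and verify the fields we plan to process.

    Raises ValueError on malformed JSON or missing keys, so bad replies can
    be retried instead of silently corrupting downstream processing.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model did not return valid JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError(f"expected a JSON object, got {type(data).__name__}")
    missing = required - data.keys()
    if missing:
        raise ValueError(f"reply is missing fields: {sorted(missing)}")
    return data

# Hypothetical reply and field names, just to illustrate the check:
reply = '{"label": "positive", "score": 0.93}'
row = parse_model_json(reply, {"label", "score"})
```

Rejecting bad replies loudly (rather than defaulting missing fields) makes it easy to re-queue just the failed items in a batch run.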

Excellent resource. Small bug to report: the table here is broken (BANTU NEGROIDS section) https://britannica11.org/article/01-0358-africa/africa#secti.... It's quite fascinating, as an African, to read what they thought about Africans.

Thanks, nice catch. The tables can be tricky and I appreciate the heads-up on this markup leak. It will be corrected shortly.

From my casual glance, I can see only a few images of particular spots and no timeline that lets you go back in history. Seems pretty rudimentary, like the 15 images you get from EOSDA LandViewer, from which you can only download a very low-resolution thumbnail. Did you find the data helpful?

> Did you find the data helpful?

No.

It's frankly hilarious that they think they can seriously put the words "SAR imagery from the world's largest SAR satellite constellation" on their homepage.

If money were being charged for it, some might call it "false advertising".

It looks to me more like a VERY limited subset of images from the satellite constellation in question.

Either that, or the constellation in question is minuscule.

Either way, something doesn't add up.


I’m all for stamping out bait and switch, but "SAR imagery from the world's largest SAR satellite constellation" does not imply that you will get all the imagery they have. Same as if I describe a liquid as “water from the Atlantic”: it need not be a particularly impressive amount of water.

> Either way, something doesn't add up.

They are in the business of selling a particular type of data. They are not incentivised to give away their product for free. What you see here is the “first hit is free” kind of sample.


> They are not incentivised to give away their product for free. What you see here is the “first hit is free” kind of sample.

This is exactly what "bait and switch" means.

May I remind you that their website states "No registration. No paywall. Download and start working."


Instead of recompiling the source and installing it again, is there a way to monkey-patch the already-installed package? It seems like a few lines of code.

That would be incredibly complicated and crash-prone to do.

With nice distros like Gentoo, you can just drop the .patch file into an appropriate folder and it'll be applied with every reinstall/upgrade.
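For anyone curious, the Gentoo mechanism is user patches under /etc/portage/patches; a rough sketch (the package "app-misc/foo" and the patch file name are stand-ins):

```shell
# On Gentoo, user patches go under /etc/portage/patches/<category>/<package>/.
# Portage applies every *.patch file found there during the build.
mkdir -p /etc/portage/patches/app-misc/foo
cp my-fix.patch /etc/portage/patches/app-misc/foo/

# Rebuild the package once; the patch is also reapplied on future upgrades.
emerge --oneshot app-misc/foo
```

A versioned directory like app-misc/foo-1.2.3 can be used instead if the patch should only apply to one release.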

I'd rather imagine a software packaging/distribution regime whereby dropping a patch(1)-compatible patch file into "an appropriate folder" meant that it would instantly take effect the very next time you run the program.

You can recompile and not install it anywhere; just run the binary you compiled yourself.

There's Merlin, and then there's BirdNET too: https://birdnet.cornell.edu/. Both are by Cornell.


I've been using BirdNET, but it seems to want an internet connection to do the identification, and sometimes that is dicey when there is a bird that I want to identify. (Also, birds seem to shut up around the time you get the app open.)

I'm going to give Merlin a try - the app has UI to download the network for offline use.


Requiring an internet connection for a nature app is absurd. As annoying as it is, I get why a big tech company like Google fails at this sort of thing: many of their employees probably never leave a city, so the products always work well for them. But a nature app has no excuse; normal usage will get blocked by that all the time.


That's what Merlin is for, but it's a ~450MB install. BirdNET is only a ~30MB install, and birds are everywhere, so what's wrong with having an online option for most people, who spend most of their time within range of a cell tower?


BirdNET-Pi, or run the model locally with birdnet-go, with an option to push observations back to Cornell:

https://www.birdweather.com/birdnetpi

https://github.com/tphakala/birdnet-go


tphakala recently also published a CLI tool to process recordings; it can also use other models, such as Google's Perch.

https://github.com/tphakala/birda


I recently read about arXiv, its history, and all the mini-dramas around it: https://www.wired.com/story/inside-arxiv-most-transformative....

I wonder if Ginsparg is finally retiring and relinquishing access.


Wow, this is a great article! (other archive link - https://archive.is/XVCi7 )

I didn't realize arXiv was started in 1991. And then I wondered why I had never heard of it while I was at Cornell from 1997-2001. Apparently it only assumed the arXiv name in 1999.

I like that it was a bunch of shell scripts :)

Long before arXiv became critical infrastructure for scientific research, it was a collection of shell scripts running on Ginsparg’s NeXT machine.

Interesting connections:

As an undergrad at Harvard, he was classmates with Bill Gates and Steve Ballmer; his older brother was a graduate student at Stanford studying with Terry Winograd, an AI pioneer.

On the move to the web in the early '90s:

He also occasionally consulted with a programmer at the European Organization for Nuclear Research (CERN) named Tim Berners-Lee

And then there was a 1994 move to Perl, and 2022 move to Python ...

Although my favorite/primary language is Python, I can't help but wonder if "rewrite in Python" is mainly a social issue, i.e. maybe they don't know how to hire Perl programmers and want to move to the cloud. I guess rewrites are often an incomplete transmission of knowledge about the codebase.


Another tidbit: https://arxiv.org/abs/1706.04188

FAQ 1: Why did you create arXiv if journals already existed? Has it developed as you had expected?

Answer: Conventional journals did not start coming online until the mid to late 1990s. I originally envisioned it as an expedient hack, a quick-and-dirty email-transponder written in csh to provide short-term access to electronic versions of preprints until the existing paper distribution system could catch up, within about three months.

So it was in csh on a NeXT. Tim Berners-Lee also developed the web on a NeXT!


John Carmack and John Romero also developed the original Doom on NeXT


Is this the same Chitwan with the man-eating tigers? I read about it in Jim Corbett's book Man-Eaters of Kumaon.


Exactly


Apparently, the main reason there is interest in this is because China is doing it too.


The US tried to kill China's space industry. Look how that went.


They are probably trying to make it look like they got hit by a clever attack, rather than SQLi or an XSS.

