Hacker News | buildbot's comments

You can even train in 4 & 8 bits with newer microscaled formats! From https://arxiv.org/pdf/2310.10537 to gpt-oss being trained (partially) natively in MXFP4 - https://huggingface.co/blog/RakshitAralimatti/learn-ai-with-...

To Nemotron 3 Super, which had 25T tokens of NVFP4-native pretraining! https://docs.nvidia.com/nemotron/0.1.0/nemotron/super3/pretr...
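As a rough illustration (my own sketch, not the reference code from either link), MX-style formats like MXFP4 give each small block of values one shared power-of-two scale and round each element to a tiny FP4 (E2M1) grid:

```python
import math

# Hypothetical sketch of MX-style block quantization: each block shares
# one power-of-two scale; each element is rounded to the nearest FP4
# (E2M1) magnitude. The grid and scale rule follow the OCP Microscaling
# spec's E2M1 element format (largest representable magnitude: 6.0).
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    amax = max(abs(v) for v in block)
    if amax == 0.0:
        return [0.0] * len(block), 1.0
    # Shared scale: largest element's exponent minus E2M1's max exponent (2),
    # so the biggest value lands inside the representable FP4 range.
    scale = 2.0 ** (math.floor(math.log2(amax)) - 2)
    quantized = []
    for v in block:
        mag = min(abs(v) / scale, 6.0)  # clamp into the FP4 range
        nearest = min(FP4_MAGNITUDES, key=lambda g: abs(g - mag))
        quantized.append(math.copysign(nearest, v))
    return quantized, scale

def dequantize_block(quantized, scale):
    return [q * scale for q in quantized]
```

With 4 bits per element plus one shared 8-bit exponent per block of 32, storage works out to roughly 4.25 bits per value, which is what makes native 4-bit pretraining attractive.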


This is a thing! For example, https://arxiv.org/abs/2511.06516

that's brilliant, I wonder why we haven't seen much use of it to do very heavy quantization

Also there are the Phase One Achromatic backs. Which Lightroom does not even support :(

I need to fix Phase One support in Filmulator. LibRaw has some additional processing steps required that I didn't manage to figure out last time I worked on it.

Lightroom doesn't even process IQ4 150 (RGB) files correctly either; there's back calibration that is missing, resulting in a bunch of amp glow(?) in the lower right corner.

Capture One with the same back is fine. The back went to Japan to get repaired only a few months ago and has a brand-new main controller board/calibration...


I've got the IQ180 and IQ4 150MP working… it doesn't like the IQ3 100MP though.

Yep, a physical print is a totally different experience too, compared to an image on a screen.

With good printing software like ImagePrint RED/Black (NB: very expensive and overkill for most) you can actually see the effect different papers, settings, and lighting will have before printing. Very fun!


Every professor has their own style; most of the ones I had were very open that office hours were a pretty great way to get help/more targeted hints on what to study. This isn't, in my opinion, a problem. Their goal, in theory, is to educate you as best as possible via classes, homework, and office hours. Students who take the time and effort to attend office hours clearly want to at least appear to be putting in extra effort, so why wouldn't professors put more effort into helping them learn? I doubt that they are directly giving away test answers.

In an astrophysics class I had in college, the professor called on a student to solve a problem; he got it wrong, and the professor said "if you would come to my office hours you would know how to solve this." The student's response was something along the lines of "sorry, my parents are crackheads so I need to work two jobs to pay for school."

Plot twist - the student's dad was the professor.

I think once they start assigning homework in kindergarten, "doing all the class work during class" is a goal that won't be reached.


With all empathy, that sucks and isn't fair - but should office hours be removed because one student couldn't attend?

Many of the professors I have worked with and respect have different methods for helping these students - for example, sending them an email after class offering explicit direct help and advice, or connecting them with a better job or a research position.


No, office hours shouldn't be removed. Perhaps professors should just not reward people who come to office hours beyond the extra instruction that is given. E.g. no special knowledge communicated solely in office hours ("this question is going to be on the test next week"), and no special treatment ("this student got the wrong answer, but I know from office hours they're trying really hard, so I'll give them some extra points").

Fully agree, a hint should be at most about which subjects to focus on - and ideally that’s something they said in class too.

For many profs the goal is to spend as little time as possible with students, so that they can spend more time on their research careers than on lectures or office hours afterward. For that they employ students who have passed the exams. Teaching is merely a necessary annoyance to them.

The problem with targeted exam hints on what to study is that it can create situations where a student who understood the material better overall and put in more effort studying all the material equally, scores lower than someone who simply happened to get hints from the professor. If your actual goal is to educate, you shouldn't give exam hints, and especially not one-on-one.

For sure - I think my definition of an exam hint is more like "study these subjects," or "yes, data structures of some kind will be tested, but don't worry about sorting algorithms."

Not specific hints or answers, that’s obvious favoritism


It’s LLM slop and very shallow, in my opinion.

What's shallow about the research? It all seems to check out?

Thank you.

Every time I complain about this kind of useless AI slop I get downvoted to hell and get dozens of comments saying "it doesn't look AI at all", so I don't even bother anymore. It's incredibly sad, I expected much more from this community... But it looks like it'll soon be dead like the rest of the internet.


The blogpost?

ctrl-f for "This isn't" and note how many instances of this pattern there are:

> This isn't X. It's Y.


I don't see a single occurrence in the article of the word "isn't".

> That means the lock-in isn’t just product strategy. It’s also architecture.

> And that omission isn’t some harmless simplification. It’s the entire trick.

It isn't just once. It's—twice. ;)


Also stuff like this:

>That’s not exotic. That’s just model parallelism with extra suffering.

>That’s not product magic. That’s a checkbox.

What really triggers my internal AI slop detector is this:

>Their renders. Their prototype shots. Their exploded views. Their spec sheet.

>Nobody asked what silicon was inside. Nobody asked how 120B on LPDDR5X was supposed to work. Nobody spent

>No cloud. No GPU. No subscriptions.

>wrong class of chip, wrong power envelope, wrong everything

>The visual geometry matches. The licensing model matches. The China-based semiconductor ecosystem match

>Real researchers. Real papers. Real contributions.

LLMs love to overuse this pattern.
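For what it's worth, this contrast-pair tic is easy to count mechanically. A rough heuristic sketch (the regex is my own invention, not a real slop detector -- expect false positives and misses):

```python
import re

# Counts "That's not X. That's Y." / "It isn't X. It's Y." style
# contrast pairs, the pattern the comments above flag as LLM-typical.
CONTRAST_PAIR = re.compile(
    r"\b(?:This|That|It)(?:'s not| isn't| is not)\b[^.!?]*[.!?]\s+"
    r"(?:This|That|It)(?:'s| is)\b",
    re.IGNORECASE,
)

def count_contrast_pairs(text):
    return len(CONTRAST_PAIR.findall(text))
```

Running it over the quoted lines above would flag the "That's not exotic. That's just model parallelism" pair while ignoring ordinary prose.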


This also smells of an autoregressive model trying to make a point that TiinyAI simply forked another repo and claimed it as their own invention, before realizing mid-paragraph it's by the same people:

>So no, TiinyAI did not “launch” PowerInfer. SJTU researchers did.

>TiinyAI’s GitHub repo is a fork of the original PowerInfer repository. At least one of the original academic authors appears tied to the code history. So there is clearly some real overlap between the research world and the product world.


Oof, thanks! (I'm going to blame it on my Android Chrome "find in page" tool not working as expected, and I apologize)

It’s a play on a Taylor Swift lyric, I think - “Cause, darling, I'm a nightmare dressed like a daydream“ (Blank Space)

Just the sort of clever word play an algorithm would come up with!

Very well could be.

I asked my little Claude Code API tool; it answered 42, then it (the API) decided to run bash and get a real random number?

>cs gib random number

Here's a random number for you:

42

Just kidding — let me actually generate a proper random one: Your random number is: 14,861

Want a different range, more numbers, or something specific? Just say the word!


It picks 42 as the default integer value any time it writes sample programs. I guess it comes from being trained using code written by thousands upon thousands of Douglas Adams fans.

Basically every ML script I see has 42 as the default seed, even before LLMs. Pretty sure it's what I used for my thesis code, haha. So it's not surprising it always picks it.
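A minimal reproducibility demo (plain stdlib, just to show why a hardcoded seed like 42 is everywhere in ML scripts):

```python
import random

# Fixing the seed makes every "random" draw reproducible run to run,
# which is why so many scripts hardcode a default seed (often 42).
random.seed(42)
first_run = [random.randint(0, 100) for _ in range(3)]

random.seed(42)  # reset to the same seed
second_run = [random.randint(0, 100) for _ in range(3)]

assert first_run == second_run  # identical sequences
```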

The x-clacks-overhead of LLMs, perhaps.

Just poor ones - how much could it cost to get a scan of the oceans once weekly or daily? 10 million dollars?

Actually, probably even cheaper: a generic scan to spot all the ships, and once that's done, you just need to get images around each last known location. You could probably use something like the Planet API.

Which to me is a really good, encouraging thing.

Overall I feel safer in a Waymo than a rideshare now, and I've only spent a few days being able to use Waymo...

