anentropic's comments

Yeah...

https://github.com/opengraviton/graviton?tab=readme-ov-file#...

the benchmarks don't show any results from actually running these larger-than-memory models, only the size difference

it all smells quite sloppy


How does this compare vs JTD (JSON Type Definition)?

Arguably the original name was the newspeak and the new name is more honest

Do you have a nice way to let it 'use the app' or receive visual feedback?

I imagine that would help the process a lot


8 GB RAM is fine for these

topping out at 512 GB storage is lame though


8 GB is plenty for the base model in this use case, but I'd still wish for a 16 GB upgrade option before I'd wish for a 1 TB upgrade option.

Ah, the Lobster guy!

I have been running across that repo for years and wondered if anything was happening with it - great to see an impressive game project built on it now.


Fantastic stuff!

FYI some code snippets are unreadable in 'light mode' ("what substrings does the regex (a|ab)+ match in the following input?")


Ah, thank you for letting me know, it's fixed now!

Do any frameworks manage to use the neural engine cores for that?

Most stuff ends up running Metal -> GPU, I thought


It's referring to the neural cores (for matrix multiplication) in the GPU itself, not the NPU.

https://creativestrategies.com/research/m5-apple-silicon-its...
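
To the original question about targeting the Neural Engine: Core ML is the usual route, since you can request the ANE when converting a model. A rough Python sketch with coremltools (the toy model, shapes, and file name here are my own illustrative assumptions, and the Core ML runtime still decides which ops actually land on the ANE):

    # Hypothetical example: asking Core ML to prefer the Apple Neural Engine.
    import coremltools as ct
    import torch

    class TinyNet(torch.nn.Module):  # stand-in model, purely illustrative
        def forward(self, x):
            return torch.relu(x @ x.T)

    traced = torch.jit.trace(TinyNet(), torch.rand(4, 4))

    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(shape=(4, 4))],
        # CPU_AND_NE requests CPU + Neural Engine; ALL would also allow the GPU.
        compute_units=ct.ComputeUnit.CPU_AND_NE,
    )
    mlmodel.save("tiny.mlpackage")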



and Claude is remote inference anyway, just an HTTP API
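
For illustration, the whole thing is one POST; a minimal Python sketch (the model id and prompt are placeholders, and you need your own API key):

    # Minimal sketch of calling the Anthropic Messages API over plain HTTP.
    import os
    import requests

    resp = requests.post(
        "https://api.anthropic.com/v1/messages",
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-sonnet-4-20250514",  # placeholder model id
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Hello!"}],
        },
    )
    resp.raise_for_status()
    print(resp.json()["content"][0]["text"])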

> 51.0% on Terminal-Bench 2.0, proving its ability to handle sophisticated, long-horizon tasks with unwavering stability

I don't know anything about TerminalBench, but on the face of it a 51% score on a test metric doesn't sound like it would guarantee 'unwavering stability' on sophisticated long-horizon tasks


51% doesn't tell you much by itself. Benchmarks like this are usually not graded on a curve and aren't calibrated so that 100% is the performance level of a qualified human. You could design a superhuman benchmark where 10% was the human level of performance.

Looking at https://www.tbench.ai/leaderboard/terminal-bench/2.0, I see that the current best score is 75%, meaning 51% is about two-thirds of SOTA (51/75 ≈ 68%).


This is interesting: TFA lists Opus at 59, which is the same as Claude Code with Opus on the page you linked here. But it has the Droid agent with Opus scoring 69, which means the CC harness loses Opus 10 points on this benchmark.

I'm reminded of https://swe-rebench.com/ where Opus actually does better without CC. (Roughly the same score, but half the cost!)


That score is on par with Gemini 3 Flash, but from scrolling through the results, these scores look much more affected by the agent used than by the model.


Gemini 3 Flash is pure rubbish. It can easily get into a loop and spout output no different from a Markov chain, repeating it over and over.


TerminalBench is maybe the worst-named benchmark. It has almost nothing to do with the terminal, just random tools' syntax. Also, most tasks aren't really agentic if the model has memorized some random tool's command-line flags.


What do you mean? It tests whether the model knows the tools and uses them.


Yeah, it's a knowledge benchmark, not an agentic benchmark.


That's like saying coding benchmarks are about memorizing the language syntax. You have to know what to call, when, and how. If you get the job done, you win.


I am saying the opposite. If a coding benchmark just tests the syntax of an esoteric language, it shouldn't be called a coding benchmark.

For a benchmark named Terminal Bench, I would assume it would require some terminal "interaction", not just producing the code and commands.

