Hacker News | karmakaze's comments

Well, you could have a virtual particle whose mass could be time-averaged.

But the 'Fusion' content went from 2 to 10.

That Surface has 16GB RAM though.

> Your new MacBook Neo. Just the way you want it[sic].

    13-inch MacBook Neo in Indigo
    A18 Pro, 6-core CPU, 5-core GPU, 16-core Neural Engine
    Apple Intelligence

    8GB unified memory
    256GB SSD storage
    U.S. English Magic Keyboard with Lock Key
    20W USB‑C Power Adapter

    Two USB-C ports, 3.5 mm headphone jack
    Support for one external display
8 GB unified memory is brand-new e-waste today. macOS 26 makes it even worse.

> 8 GB unified memory is brand-new e-waste today. macOS 26 makes it even worse.

One reason Apple can get away with 8 GB of RAM is that their SoC does real-time compression of data in RAM, and they use high-bandwidth memory; the A19 Pro's RAM bandwidth is 60 GB/s. This enables them to treat the SSD like an L3 cache.

It's been nearly five years since the M1 was released; I suspect Apple has gotten really good at their RAM-to-compression-to-SSD pipeline since then.
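To see why compressed memory stretches 8 GB so far, here's a rough sketch of the idea. macOS's actual compressor is a fast WKdm/LZ4-style codec, not zlib, so this is illustrative only: cold, repetitive pages shrink dramatically and stay in RAM, while incompressible pages are the ones worth swapping to SSD.

```python
import os
import zlib

# Illustrative only: macOS uses a fast WKdm/LZ4-style codec, not zlib,
# but the principle is the same -- compress cold pages in RAM before
# falling back to SSD swap.
PAGE_SIZE = 16 * 1024  # Apple Silicon uses 16 KiB pages

def compress_page(page: bytes) -> bytes:
    # Low compression level: the kernel favors speed over ratio.
    return zlib.compress(page, level=1)

# A "cold" page full of repetitive app state compresses very well,
# so it can stay resident in a fraction of the space...
cold_page = (b"idle UI state " * 2048)[:PAGE_SIZE]
packed = compress_page(cold_page)
print(f"cold page: {PAGE_SIZE} -> {len(packed)} bytes")

# ...while already-random data (e.g. encrypted blobs) barely shrinks;
# pages like this are the ones that end up on the SSD instead.
hot_page = os.urandom(PAGE_SIZE)
print(f"random page: {PAGE_SIZE} -> {len(compress_page(hot_page))} bytes")
```

The kernel keeps a compressed pool in RAM and only writes the least-compressible, least-used pages out, which is why a fast SSD plus high memory bandwidth can paper over a small RAM budget.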


I'll go with MKBHD's take on this[0]: great as a higher-end 'chromebook', etc. It could be an upgrade for my Surface Go 3, but it's not as portable. Definitely more useful than a tablet.

[0] https://www.youtube.com/watch?v=kBX5WH9b4M4


I'm going to give Apple the benefit of the doubt here until proven otherwise. I can't see them releasing something with a terrible user experience as it would cause a lot of reputational harm.

> I can't see them releasing something with a terrible user experience

I see you haven't upgraded to Tahoe yet!


I don't know what apps you run, but I'm typing this from an M2 Mac with 8 GB, running Tahoe. Performance is fine. It's always been fine.

We don't have a Qwen3.5-Coder to compare with, but there is a chart comparing Qwen3.5 to Qwen3 including Qwen3-Next[0].

[0] https://www.reddit.com/r/LocalLLaMA/comments/1rivckt/visuali...


I would much prefer vibe-coding to be used at the highest layers, not the substrate that we all depend on.

Chart of how these compare[0] to the Qwen3 235B-A22B, Next-80B-A3B-Thinking, 30B-A3B-Thinking, 4B, 1.7B models.

These new ones are very much punching above their weight.

[0] https://www.reddit.com/r/LocalLLaMA/comments/1rivckt/visuali...


Unsloth is one of the most well-known providers of model quantizations, if not the most well-known. The release post should of course reference the original source, but most people probably run quantized models anyway; Unsloth and bartowski are my go-tos, so the link is relevant and convenient.
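For anyone unfamiliar with what these quantization providers actually do, here's a deliberately simplified sketch: symmetric per-tensor int8 quantization with NumPy. Real GGUF "k-quants" from Unsloth or bartowski use block-wise scales and sub-4-bit formats, so treat this only as the basic idea.

```python
import numpy as np

# Simplified sketch of symmetric per-tensor int8 quantization.
# Real GGUF k-quants use per-block scales and lower bit widths.
def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0  # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(f"fp32 -> int8 is 4x smaller; max round-trip error {err:.4f}")
```

The storage win (4x here, more for 4-bit formats) comes at the cost of that rounding error, which is why different quant recipes of the same model can score noticeably differently on benchmarks.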

Likewise. You should read about the Cerebras WSE configurable colour channel mesh.


Annoyingly, there's already a framework called Vert.x for the JVM, but there's also a Vert.x for Node.js.
