Would it be accurate to say that Meta currently produces more RISC-V chips than any other vendor? The specs for those chips look extremely interesting, and they seem much more programmable than Google's TPUs. It would be cool to see Meta make them available to third parties.
8K QPS is probably quite trivial on their setup and a 10M dataset. I rarely use comparably small instances & datasets in my benchmarks, but on 100M-1B datasets on a larger dual-socket server, 100K QPS was easily achievable in 2023: https://www.unum.cloud/blog/2023-11-07-scaling-vector-search... ;)
Typically, the recipe is to keep the hot parts of the data structure in CPU caches (SRAM) and to use a lot of SIMD. At the time of those measurements, USearch used ~100 custom kernels for different data types, similarity metrics, and hardware platforms. The upcoming release of the underlying SimSIMD micro-kernels project will push that number beyond 1,000, so we should be able to squeeze out a lot more performance later this year.
Author here. Appreciate the context—just wanted to add some perspective on the 8K QPS figure: in the VectorDBBench setting we used (10M, 768d, on comparable hardware to the previous leader), we're seeing double their throughput—so it's far from trivial on that playing field.
That said, self-reported numbers only go so far—it'd be great to see USearch in more third-party benchmarks like VectorDBBench or ANN-Benchmarks. Those would make for a much more interesting comparison!
On the technical side, USearch has some impressive work, and you're right that SIMD and cache optimization are well-established techniques (definitely part of our toolbox too). Curious about your setup though—vector search has a pretty uniform compute pattern, so while 100+ custom kernels are great for adapting to different hardware (something we're also pursuing), I suspect most of the gain usually comes from a core set of techniques, especially when you're optimizing for peak QPS on a given machine and index type. Looking forward to seeing what your upcoming release brings!
Every founder probably dreams of a press release like this — complete with testimonials from the CEOs of OpenAI, Anthropic, Meta, xAI, Microsoft, CoreWeave, AWS, Google, Oracle, Dell, HPE, and Lenovo.
There aren’t many technical details about the new GPUs yet, but the notes on the Vera CPU caught my eye. NVIDIA Spatial Multithreading sounds like their take on SMT — something you don’t usually see on Arm-based designs. Native FP8 support is also notable, though it’s still unclear how it will be exposed to developers in practice.
Overall it looks like an interesting CPU, but it doesn’t feel like it’s in the same league as the rumored Apple M5 Ultra.
PTX is on the GPU side and is already supported on available models. On the CPU side, it must be some form of Arm ISA extension, I believe, like NEON-FHM or SVE-AES… I'm just not sure what the scope of those extensions would be, or how they would coexist with Arm's other extensions.
I’m currently using a mix of Zed, Sublime, and VS Code.
The biggest missing piece in Zed for my workflow right now is side-by-side diffs. There’s an open discussion about it, though it hasn’t seen much activity recently: https://github.com/zed-industries/zed/discussions/26770
Stronger support for GDB/LLDB and broader C/C++ tooling would also be a big win.
It’s pretty wild how bloated most software has become. Huge thanks to the people behind Zed and Sublime for actively pushing in the opposite direction!
> The biggest missing piece in Zed for my workflow right now is side-by-side diffs.
> It’s pretty wild how bloated most software has become.
It's a bit ironic to see those two in the same message, but I'd argue that right there is an example of why software becomes bloated. There is always someone who says "but it would be great to have X", where X might be tangentially relevant in spirit but is a whole ordeal of its own.
Diffing text, for example, requires a very different set of tools and techniques than a plain text editor would already have. That's why there are standalone products like Meld and the very good Beyond Compare, and they tend to be much better than a jack-of-all-trades editor (at least I never liked the diff UI in e.g. VSCode more than Meld's UI or BC's customization features).
Same for other tangential stuff like VCS integration; VSCode has something in there, but any special purpose app is miles ahead in ease of use and features.
In the end, the creators of an editor end up spending so much time adding what amounts to supplemental and peripheral features, instead of focusing on the best possible core product. Expectations are so high that the sky is the limit. Everyone wants their own pet sub-feature ("when will it integrate a Pomodoro timer?").
This is a sharp observation, and it goes even further: BeyondCompare easily allows one to hop into an editor at a specific location, while Total Commander, with its side-by-side view of the world, is an excellent trampoline into BeyondCompare.
In this kind of ecosystem (where visual tools strive for some Unix-like collaboration), the superpower of editors (and IDEs) is their scripting language, and in this arena it is still hard to beat Emacs (which had these capabilities maybe 40 years ago).
I don't even need that to be built into the editor – I would pay for a fast, standalone git UI that is as good as the IntelliJ one. I use Sublime Merge right now, and it's kind of OK, but definitely not on the same level.
I mostly use git from the terminal, but the goodness of the IntelliJ UI for cherry-picking changes is one of the things that has me maintaining my toolbox subscription. Also, IdeaVim is a really solid vim implementation, IMO.
If they factored out the VCS UI into a standalone non-IDE product that starts and runs a little faster than their IDEs and doesn't care about your project setup, I'd even pay a subscription for it.
> I’m currently using a mix of Zed, Sublime, and VS Code.
Can you elaborate on when you use which editor?
I'd have imagined that there's value in learning and using one editor in-depth, instead of switching around based on use-case, so I'd love to learn more about your approach.
Different user, but I prefer to use different editors for:
- project work, i.e. GUI, multiple files, with LSP integration (zed)
- one-off/drive-by edits, i.e. terminal, small, fast, don't care much about features (vim)
- non-code writing, i.e. GUI, different theme (light), good markdown support (coteditor)
I don't like big complex software, so I stay away from IDEs; ideally, I'd like to drop zed for something simpler, without AI integration, but I haven't found anything that auto-formats as well.
My workflow isn't very common. I typically have 3-5 projects open on my local machine and 2 cloud instances, one x86 and one Arm. Each project has files in many programming languages (primarily C/C++/CUDA, Python, and Rust), and the average file is easily over 1,000 LOC, sometimes over 10,000.
VS Code glitches all the time, even when I keep most extensions disabled. A few times a day, I need to restart the program, as it just starts blinking/flickering. Diff views are also painfully slow. Zed handles my typical source files with ease, but lacks functionality. Sublime comes into play when I open huge codebases and multi-gigabyte dataset files.
In my case, I use zed for almost everything, and vscodium for three things:
- search across all files: it's easier to navigate the results with the list of matching lines in the sidebar, and to traverse them with cursor up/down, giving full context
- git: side-by-side diffs, better handling of staging, and it doesn't automatically word-wrap commit messages (I prefer doing that myself)
- editing files whose indentation differs from what is configured in zed, since zed does not yet autodetect it
Not a fan of Windows either, but playing devil’s advocate here: Apple’s Finder has steadily gotten worse over the last ~16 years, at least in my experience. It increasingly struggles with basic functionality.
There seems to be a pattern where higher market cap correlates with worse ~~tech~~ fundamentals.
Why would a company be incentivized to improve the user experience in ways that aren't profitable? Especially after watching the number one tech company literally worsen UX to increase profitability.
I was just about to ask some friends about this. If I'm not mistaken, Postgres began using ICU for collation, but not for string matching yet. Is anyone here working in that direction?