The Packed SIMD extension isn't ratified and is mostly aimed at embedded processors that need to do digital signal processing.
The vector extension (rvv) is what's important for desktop and server chips, and it is already ratified.
ffmpeg, for example, already has some routines ported to rvv [0], and many projects are working on support.
The problem is that there currently isn't any consumer hardware that supports the ratified vector extension. Sipeed announced [1] that they are working on an RVA22+V SoC based on the C908 core; this will likely be the first one developers can get their hands on.
To contextualize this – because I had no idea if those 13 files constituted "a lot", "very little", or anything in between – here's a comparison of the amount of asm code for each platform:
This is a very rough comparison and doesn't take into account the C code for each platform (which seems to scale roughly, but not exactly, the same), programming style, architecture differences, etc., but it should be roughly accurate.
This is the real issue with adopting new architectures, imho. So many packages claim to support arm, for example, whilst not enabling the proper optimisation flags, rendering them unusably slow. A lot of work we take for granted has been put into optimising things for x86.
Tip-of-tree llvm autovectorisation for riscv "V" 1.0 is actually rather decent. Not as good as hand-coding, natch, but it can get to within a factor of two or better, especially on workloads that are mostly simple tiny loops over data that's contiguous or has constant stride, as in h264 encoding.
Of course, the question is somewhat moot until we have development hardware available to buy that supports the ratified 1.0 version of the vector extensions. So, another year or so I'd guess.
I thought I had read somewhere that the Packed SIMD (P) extension was still going to be developed for implementations that don't need the full-blown Vector (V) extension, for example, embedded processors for digital signal processing.
RISC-V by design and by ambition is for everything, serious compute and embedded. It's just that it's far easier to release embedded chips, and the standard for that was ready far earlier.
Serious compute simply takes far longer in terms of product cycle and the relevant standard came far later.
A lot of RISC-V use is also in places that aren't targeted at end-consumer devices.
There is not much yet in terms of a RISC-V PC. The HiFive Pro P550 should be coming out at some point, or there's the Unmatched currently.
Well, of course I don't have performance numbers or anything.
In general, the RISC-V vector unit has very much been designed to make those workloads faster. Many new AI/ML companies are using RISC-V with vectors, but usually with some of their own non-standard stuff.
Esperanto pairs vectors with a TPU. Tenstorrent has done a lot of work on software to use their chip as a dataflow engine. And so on.
So essentially, Nvidia is making a huge amount of money and all those companies want a piece of it.
I would say the RISC-V Vector extension is a better design than x86 SIMD.
RISC-V is also not done; there are efforts to standardize things like matrix operations and other features used for AI/ML workloads.
At least 4 companies are working on server-class chips, targeting "Xeon-like" performance. Of course without actual hardware to test you should take those claims with a grain of salt for the moment.
If you want a cheap, low-end single board computer you can use right now, then I'd go for either the VisionFive 2 or the Sipeed Lichee Pi 4A. Performance won't be that great, but it'll run Fedora, Debian, Ubuntu or Yocto distros pretty much out of the box. I'm working on a HiFive Unmatched vs VF2 vs Sipeed vs Raspberry Pi 4B performance comparison right now, which will appear on https://rwmj.wordpress.com
https://milkv.io/pioneer seems to be what you are looking for, but I'd only recommend it for developers who are working on getting things like gpu drivers ported.
For most development, smaller SBCs are enough, and I'd recommend waiting for hardware that supports the RVA22/23 profiles before spending a lot of money, because most current chips use the non-standard vector ISA and don't support all profile extensions, since they were created before everything was ratified.
ARM had decades to get into the server market and they are still struggling. Sure AWS' server instances are reasonably good but competitive RISC-V server hardware will be coming out over the next five years and then it is over for ARM.
I personally thought that Microsoft was going to abandon x86 for ARM in 2017, because it would let them lock the platform down even further, with applications only installable from their proprietary store and the bootloader locked to Windows only. Six years later, nothing has happened. ARM is on its deathbed.
I think there are a few reasons why there's no point currently:
* There aren't really any available for a reasonable price.
* A lot of the desktop related stuff is still in the process of being specified and ratified. If you buy something now it will probably use a load of ad-hoc features especially for booting.
* It doesn't really make any difference to running software, unless you are writing assembly. If you just want to play around with RISC-V assembly, an emulator or cheap embedded hardware is a much better option.
Sounds exciting. I haven't figured out yet how I can make use of riscv, but my hands are itching and I'm an avid debian user.
Does anyone here do interesting hobby projects with riscv hardware?
We are doing cross-compilation, and all the targets can be built on x86_64 or aarch64 servers or laptops/desktops; in the CI, we are using x86_64 machines in AWS.
It's interesting that the project officially supports riscv64, while other architectures in a similar ecosystem niche but with a vastly larger installed base, like SH4, have been unofficial ports for decades. Perhaps this is a good comparative example of how to effectively engage the community, or fail to.
There may be more hardware out there with SH4 processors, like some NAS boxes, but are they using or contributing to Debian? And when I look at a list of hardware with SuperH processors, half of the links are broken links: https://elinux.org/Processors#SuperH
SuperH seems like a niche architecture mostly supported by a single vendor and available in a few devices mostly in the Japanese market, without a huge amount of use in the Debian community.
RISC-V may not yet have as much extant hardware available, but there's tons of active development, and the open licensing model means that a lot of companies are working on RISC-V chips, which will then flow into hardware.
I'm not suggesting that SuperH would become an official port today when it is clearly dead. But the question is why it was never an official port even when it was in its prime. As for the installed base, it is absolutely terrifyingly large. They had a lot of wins in automotive, and popular consumer products like Sonos are based on it, or were for the first decade of that product line, and those products also run Linux.
Even in its prime, it simply didn't have enough interest from Debian developers for maintaining an official port.
With each release there are roll-call emails to help judge whether a port has sufficient maintenance to be included in the official release, and I generally see a single developer for SuperH/SH4: https://lists.debian.org/debian-devel/2013/10/msg00134.html
Debian doesn't sit down and do market research on amount of hardware shipped, and then allocate developers and resources based on that; they base whether there's an official port on whether there are already enough developers and infrastructure to maintain it, as builds failing on that port can block work on packages.
I am one of these DDs and also official kernel maintainer of the SuperH port. I am not responding to such roll-call messages because there is no new SuperH hardware on the horizon at the moment except for the stalled J-Core project.
Well, the current RISC-V buildds also take days to build GCC. On the other hand, the 64-bit SPARC machines have been among the fastest in Debian, yet there is no official port.
> So the answer to "why wasn't SH4 an official port" is likely "there simply weren't enough developers and infrastructure for supporting it."
I would argue that the main reason is that Renesas simply abandoned the architecture and replaced it in their own applications with ARM.
The prospective 64-bit SuperH port was dropped as well.
Yeah, I was responding to the "why wasn't it ever official even when it was reasonably popular", not why it's not now. Those threads I linked to were from 10 years ago or more.
I tried looking for any past discussion of making it an official architecture, and didn't find one, but I did find that it seems to have had somewhat lower activity, with some hurdles, and that there never seemed to be much of a push to make SuperH an official Debian architecture, while there has been an active effort to make RISC-V official, going on for several years now.
Anyhow, I'm not a DD or DM, I've made a few contributions via maintainers, so you're clearly more involved than I am. I was just trying to shed some light on the question of why SuperH never was official, and as far as I can tell it's because no one ever put enough effort in to make it official, while they have done so for RISC-V. But maybe I missed some drama somewhere.
> I'm not suggesting that SuperH would become an official port today when it is clearly dead. But the question is why it was never an official port even when it was in its prime.
Because Renesas dropped the architecture and stopped paying developers maintaining it upstream.
> As for the installed base, it is absolutely terrifyingly large. They had a lot of wins in automotive, and popular consumer products like Sonos are based on it, or were for the first decade of that product line, and those products also run Linux.
That's true. But these systems were developed before Renesas stopped supporting the architecture.
I can't speak for why SH4 was always unofficial, but from a vibes perspective today, RISC-V feels like an architecture on the up. That probably helps attract more developers, and ultimately it takes people to do the work.
Why are they "rebootstrapping"? Or does that mean they're just recompiling all the .debs again, but using the existing environment? (For context, I've bootstrapped Fedora on riscv64 from scratch three times, and I wouldn't be keen to ever have to do it again.)
The rebootstrap is done because unofficial ports' build servers usually are not owned or run by the Debian project, but by the porting teams themselves, who could be and often are random people who are not yet Debian members. So we don't necessarily want to use all of those .debs, because they might not be rebuildable, or they might have artefacts from whatever hacky bootstrap process the first build used. The bootstrap process is a QA strategy to gain more trust in the resulting binary packages.
Longer term we want all new ports to be cross-built from an existing architecture, but we don't have cross-buildds yet. The rebootstrap tool is a work-in-progress QA tool for making cross-building bootstrap for new arches a viable strategy.
Even longer term, we should do a bootstrap of all architectures from scratch using the approach of Guix, Bootstrappable Builds and Reproducible Builds; start with ~500 bytes of hand-written machine code plus all the source code and build enough to get a build server up, then go from there.
As RISC-V is now official, it needs to be rebuilt with the infrastructure that builds all official architectures.
There should be no issues, as the unofficial port is already building over 95% of the Debian package archive. Note that Debian is the biggest distribution, so this is no small feat.
In a couple of weeks, it should all be sorted, with the official port built to the same level as the unofficial port.
By built separately, they meant unofficial ports aren't built by Debian, but by random machines hosted in random places by members of the port team, who often aren't Debian members yet.
The rebootstrap being referred to here is not Helmut's automated rebootstrap tool, but the fairly manual process of rebuilding an existing unofficial port as an official port; the steps for this are on the new port docs wiki page: