
This looks like somebody re-releasing QWEN models to promote their own company. https://hackertimes.com/item?id=47217305 is the link to QWEN's repo.



If you want a chance at running a large model locally, it needs to be quantized. The unsloth account on Hugging Face maintains popular quantizations for many models, Qwen included, and I believe they developed dynamic GGUF quantization.

Take Qwen/Qwen3.5-35B-A3B for example: it's 72 GB, while unsloth/Qwen3.5-35B-A3B-GGUF offers quantizations ranging from 9 to 38 GB.
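For intuition, here's a back-of-the-envelope sketch of why quantization shrinks a model that much. The parameter count and bits-per-weight figures below are illustrative assumptions (real GGUF files carry extra metadata and mixed-precision layers), not the model's actual numbers:

```python
# Rough memory footprint of a model's weights at different precisions.
# Size in bytes = parameters * bits_per_weight / 8.

def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB (1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 35e9  # assumed parameter count, for illustration only

# Approximate effective bits per weight for common GGUF quant types.
for name, bits in [("FP16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.8), ("Q2_K", 2.6)]:
    print(f"{name:>7}: ~{approx_size_gb(N_PARAMS, bits):.0f} GB")
```

At 16 bits that's ~70 GB of weights alone, while a ~4.8-bit quant lands around 21 GB, which is roughly the spread the unsloth repo covers.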


Unsloth is one of, if not the most, well-known providers of model quantizations. The release post should of course reference the original source, but most people probably run unsloth or bartowski quantized models anyway (they're my go-tos), so linking them directly is relevant and convenient.


