Hacker Times | thanhhaimai's comments

> I would prefer something that doesn't phone home and can work offline. Opensource firmware/software and repairability are important.

I built myself a Voron, and it was an amazing learning experience. I learned how things work, and the trade-offs. I get to pick and replace the exact parts I want. I design my functional parts knowing exactly the printer's capabilities. There is something very fascinating about it. You can look at a print and tell different issues at a glance, because you have seen (and fixed) them while you built and tuned the printer. The majority of 3D printing quality issues are due to hardware construction and trade-offs, not software (slicer settings, etc.). Without building a printer from scratch, it's hard to tell the root cause.

https://vorondesign.com/voron2.4

- Fully open sourced

- Repairability and updatability. Lots of fun mods.

- No phone home / privacy issue like Bambu

I think before going down the rabbit hole, it's best to make sure you have a clear answer for this question: Do you care about the learning / tinkering / optimizing part, or do you care more about "it just works" printing?

- Many recommendations in this thread are for the "it just works" printing case. The top candidates are Bambu, Creality, and Elegoo. The quality is good for most cases.

- If you're an engineer and into tinkering like me, you would be much happier with a Voron v2. Depending on your effort, you can match Bambu's quality, or _greatly_ exceed it.

Regarding slicers, don't worry much about it. You can learn one very fast. The top ones are Cura and Orca Slicer. I use them both, and they each have pros and cons. Personally, on my Voron, a well-tuned Cura profile yields better results. But Cura is missing one important feature: it can't limit speed based on flow rate.

Another quick tip:

- Take the advertised numbers with a grain of salt. For example, many printers advertise 600 mm/s print speed. The mechanical frame may be able to handle 600 mm/s, but the hotend is usually the real limit of the build (e.g. it can't melt material fast enough, filament friction, the extruder motor's ability to quickly change speed, etc.).
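To make the hotend limit concrete, here's a back-of-the-envelope sketch (my own, not from any printer's spec sheet): volumetric flow caps your print speed regardless of how fast the frame can move.

```rust
/// Max sustainable print speed implied by the hotend's volumetric flow limit,
/// from: speed (mm/s) * layer_height (mm) * line_width (mm) = flow (mm^3/s).
fn max_speed_mm_s(flow_mm3_s: f64, layer_height_mm: f64, line_width_mm: f64) -> f64 {
    flow_mm3_s / (layer_height_mm * line_width_mm)
}
```

Assuming a typical ~15 mm³/s stock hotend with a standard 0.2 mm layer and 0.4 mm line width, that tops out below 190 mm/s, nowhere near the advertised 600 mm/s.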

Hope you have a great time!


I love the idea of building a printer, but I know that my attention span is limited on these sorts of things. As in, I’ll be reliably obsessed until it’s done and tuned, and then I’ll forget everything until the next time I want to use it.

So my big question, for someone who’s owned one a while: is the printer ever “done”?

Is there a point after which it “just works”? Or is it always going to be more like “it’s great! I just need to tweak the blah blah setting every time and retighten the frobnitz every 3 prints, no big deal really!”

I always see the quote about “if you like printERS, build, but if you like printING, just buy one” - but nobody talks about the timescale on the fiddling and whether it ever stops.

(currently own a Prusa mk3s I built as a kit and it’s been pretty solid as a tool!)


I assembled a Prusa Mk3 a few years ago, and other than swapping a nozzle that wore out (they don't last forever) it's still working fine.

I grease it about twice a year, and clean any gunk from the nozzle (takes a few seconds) every so often. I wash the print bed thoroughly about 3 or 4 times a year.

I'm interested in 3D printing, but not interested in fiddling with the printer itself. So, I have fiddled to print soft rubber filament for example, but for every experiment with something strange like that I have 50 or more routine prints in PLA or PETG.


Yep this matches my experience pretty well! It’s been a good printer and I haven’t had to do much, and I avoid fiddling and mostly use it as a tool. I’m always eyeing the newer faster printers but I don’t love the cloudiness of Bambu and Prusa’s new stuff seems nice but not quite worth the leap yet. Voron has interested me for a while though.


At least for me, the printer is never "done", but it isn't in the "I just need to tweak the blah blah setting every time" sense.

Rather, there's just always something to make better even if the current solution is fine. Say, swapping out the toolhead for a lighter or more modular one, or building another MMU because it can handle softer materials that I print maybe once a year and don't have to use the MMU for, or replacing the hotend with a higher-flow or sturdier one, or adding more lighting and cameras, or switching the motor mounts to a double-shear design to be able to dial accelerations up, etc. Right now I'm working on building a TradRack MMU while planning out a teardown and rebuild of my backup Voron 0.2.

I could stop at any point and still have a printer that's near the top end of what's accessible on the market, but the open source 3d printer community moves incredibly fast and it's nice to be able to participate in it.


I avoid buying 2D printers because they are so maintenance heavy... I use a 3D printer in a shared makerspace, where whoever has the most availability takes on maintenance issues when they arise.

The Prusa mk4's we use are extremely reliable; most problems come down to users doing dumb stuff... or at least, doing risky stuff and not monitoring the print.

I find that I usually have some /kind/ of print I'm making (say, very hollow terrain pieces for tabletop wargames), get my printer settings dialed in over the course of a few failed prints, and can then print more of that kind of thing very reliably. In other words, good printer settings are project-dependent, but usually transfer reliably across similar projects. And then I don't have to think much about the printer - it just does its thing.


> So my big question, for someone who’s owned one a while: is the printer ever “done”?

The printer is never "done" :). But there are plenty of checkpoints where it's "pretty good".

For example, here is my rough timeline:

- I sourced the parts and built it. Took around 4 weekends.

- The initial tuning took a while (about a month), but this was very fun. I tried almost all the slicers. I fixed construction issues (squaring, de-racking, belt tensioning, ...). After this step, the machine became "good enough": I could print various parts around the house and I was satisfied with the quality.

- I started pushing for speed and redid many parts of the printer. I learned about various limitations (like flow rate being the real limit for speed). This phase lasted a long time for me (about a year). I ended up replacing around 75% of the printed parts with CNC parts. During this time, the printer stayed online and printable.

- I haven't modded the printer much after that. I found my sweet spot between speed and quality. I want to mod it with a 120W hotend heater to increase the flow rate (already bought it), but it's not strictly necessary; it's more for fun. The tinkering goes on as long as you find it fun. But I wouldn't say you _need_ to tinker to _keep_ it working.

> Is there a point after which it “just works”? Or is it always going to be more like “it’s great! I just need to tweak the blah blah setting every time and retighten the frobnitz every 3 prints, no big deal really!”

After the first tuning phase, the Voron "just worked" for me. Or at least, if there was any issue, I could immediately tell what went wrong. And no retightening has been needed so far, except one time when the printed feet cracked (that was the reason I switched to CNC aluminum parts).

Edit: I built a large Voron (350mm), and it is really _heavy_ (almost full metal in my case). That's why the printed feet cracked. Besides that, maintenance is almost zero. I don't even wash the spring steel bed. Just click print and walk away.


Thanks for your insight! That’s great, seems like it’s been a fun project but not like a REQUIRED ongoing project. My first printer was a cheap i3 clone and it was more in the “never works right” category and I was hoping to avoid going back to something like that experience haha.


I bought a Creality Ender 3 V2 a little over a year ago, spent 6 months printing things and fiddling with it and finally had it with how primitive it is.

If you have the time and patience for tinkering, the Voron is great. I built a Voron 2.4 and Voron Trident this year. Both printers are designed with automatic bed leveling as a base feature and I modded them right from the start to have automatic z offset calculation as well with a fancy probe. With these 2 features I have print it and forget it operation 99% of the time with no issues. There have also been some open source multi filament/material projects that you can add to the printers since you have full control of the hardware and software.


Seconding. Voron 2.4r2 350mm here.


This is the PR with the changes in case people missed it:

https://github.com/mieciu/tau2-bench/pull/1/files


That seems so strongly directed that it feels like an attempt to reproduce a classic chatbot.


Thanks! I also updated the post with the link on the website.


Can one customer get the model to return the bill details for another customer?


Disclaimer: opinions are my own.

I'm using the Pixel Fold at the moment, and it's the best phone I've used to date. It's something I didn't know I wanted until I had it.

Quick review:

- The phone's construction feels good in hand and in the pocket. The screen is beautiful.

- When folded, functionality-wise it's like previous Pixels (besides looking better with the metal edge). I spend about 75% of my phone time in this mode. Also, no notch!

- When unfolded, you have access to much more screen real estate. I didn't realize how dramatically this improves reading documents and browsing the web. Things that were unusable (like opening Google Sheets) are now much more comfortable. You can also do split screen, where you keep 2 apps on at the same time (todo list + messages).

- The weight feels solid. The fold mechanism is solid. Battery ~50% per day with no battery saving. Camera is good as usual.

Software:

- I've mentioned this before on HN: the spam screening feature single-handedly keeps me in the Pixel ecosystem. No spam calls at all.

- Android Auto is solid

- Gemini is a pleasant surprise, especially with how easy it is to interact with the "current phone screen".

Review caveats:

- I don't game on the phone or run any CPU-intensive tasks. It's plenty fast for me so far.

- I don't use the speaker (only use bluetooth headphones)


Apart from the call screening feature, the rest is pretty much standard or even better across all Android foldables.


What’s special about spam screening on Pixel? I don’t get any spam calls or texts on my iPhone.


Depending on the information Google knows about the incoming phone call:

- If Google is confident the source is spam (e.g. a known spam center), the call is blocked outright. There is still a log entry showing that a call from that number was blocked.

- If it only suspects spam, Google will answer the call with an AI bot, something like "Hi, I'm Google Assistant on behalf of XYZ, what's the call for?". The phone shows that it's screening a call, but doesn't ring. Only after the caller gives a reason that passes the spam check does the phone ring. You can always pick up the call early if you recognize what they're talking about (from the transcript).

- If it's a known good source (contact list, doctors, ...), then it rings directly.

So far, my spam rate is 0, and it screens about 20 calls a month.
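The three-way decision described above can be sketched roughly like this (my own reading of the observed behavior, not Google's actual implementation; all names are made up):

```rust
// What to do with an incoming call, based on Google's confidence about the source.
#[derive(Debug, PartialEq)]
enum Caller {
    KnownSpam, // e.g. a known spam center
    Suspected, // unknown number that looks spammy
    Trusted,   // contact list, doctors, ...
}

#[derive(Debug, PartialEq)]
enum Action {
    BlockAndLog,   // blocked outright, but a log entry remains
    ScreenWithBot, // the bot answers first; ring only if the reason passes
    RingDirectly,  // no screening at all
}

fn screen(caller: Caller) -> Action {
    match caller {
        Caller::KnownSpam => Action::BlockAndLog,
        Caller::Suspected => Action::ScreenWithBot,
        Caller::Trusted => Action::RingDirectly,
    }
}
```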


Given how many times Google has warned me that existing or potential clients phoning me are Spam, your second bullet there sounds terrifying.

I'd be turning that 'feature' off first.


Just for those reading, Nomo Max on iOS replicates this feature.


This feature is a part of iOS 26 by default. No third-party app needed and it uses Apple’s rich data set on the backend.

Settings -> Apps -> Phone -> Screen unknown callers


Works very well in iOS 18


How? I get multiple everyday. Is there a setting I need to enable?


As of iOS 26, iPhone also has call screening. It’s been working pretty well!


Android can screen calls, but normally your phone doesn't even ring, and if it does, the call is listed as spam. They must have some kind of insider info at Google that can recognize these types of calls.

This is big because I often receive calls from unknown numbers for work, and those get through. What sucks is that people tend to hang up if you screen the call, even if it is legit.


That's been the biggest upgrade for me on the beta. I loved that about Pixel phones but hated how locked down Pixels are for an Android phone. Honestly, I'm confused why Samsung hasn't rolled out something similar. There was talk of them doing it, and Bixby has something similar, but it's complete trash. Maybe it's because Samsung owns a telco in Korea.


I love all the new AI improvements, but this is a _hard_ no for me.

Attack surface aside, it's possible that this AI thing might cancel a meeting with my CEO just so it can make time to schedule a social chat. At the moment, the benefits seem small, and the cost of a fallout is high.


I don't have this problem at all thanks to my Pixel phone. That spam screening feature alone is keeping me on the Android ecosystem. I can't recall a single spam call in the last year. And legitimate new callers (not on my contact list) can still reach me after about 5 seconds with the bot.


I'd rather see `ruff` merged with `ty` instead. `uv`, for me, is a package/project manager; it's not about code style. The only time `uv` should edit a code file is to update its dependencies (PEP 723).

On the other hand, both `ruff` and `ty` are about code style. They both edit the code, either to format it or to fix typing/lint issues. They are good candidates to be merged.


To clarify, `ruff` and `uv` aren't being merged. They remain separate tools. This is more about providing a simpler experience for users that don't want to think about their formatter as a separate tool.

The analogy would be to Cargo: `cargo fmt` just runs `rustfmt`, but you can also run `rustfmt` separately if you want.


Thank you for writing software for all of us Python day-jobbers who wish we were writing Rust instead.


Never seen someone put my feeling so succinctly


You can advocate for using Rust at work.

If you're writing microservices, the Rust ecosystem sells itself at this point.


What does Rust have over other languages that makes it better for writing microservices?


API-first or API-only backends are a sweet spot for today's Rust, IMO, and its resource footprint, reduced maintenance long-tail, and performance properties are all super competitive. It's especially hard to find another language that can compete with Rust on all three of those at once.


>reduced maintenance long-tail

I'd like to hear more about that. I'm also curious what makes Rust particularly suited to "API-first" backends. My understanding of the language is that its concurrency primitives are excellent but difficult to learn, and that it has a GC-less runtime.


> its concurrency primitives are excellent but difficult to learn

They're actually incredibly easy to learn if your software paradigm is the request-response flow.

The borrow checker might kill your productivity if you're writing large, connected, multi-threaded data structures, but that simply isn't the nature of 90% of services.

If you want to keep around global state (db connectors, in-memory caches, etc.) Arc<Mutex<T>> is a simple recipe that works for most shared objects. It's dead simple.
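A minimal, self-contained sketch of that recipe (the `count_hits` helper and cache shape are my own; plain threads stand in for concurrent request handlers, since in a real Axum/Actix service the framework clones the `Arc` into each handler for you):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Shared mutable state behind Arc<Mutex<T>>: each "handler" bumps a counter.
fn count_hits(n_handlers: usize) -> u64 {
    let cache: Arc<Mutex<HashMap<String, u64>>> = Arc::new(Mutex::new(HashMap::new()));

    let handles: Vec<_> = (0..n_handlers)
        .map(|_| {
            let cache = Arc::clone(&cache); // cheap pointer clone per "handler"
            thread::spawn(move || {
                // lock() blocks until the mutex is free; the guard unlocks on drop
                let mut guard = cache.lock().unwrap();
                *guard.entry("hits".to_string()).or_insert(0) += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let n = cache.lock().unwrap()["hits"];
    n
}
```

That's the whole pattern: clone the `Arc`, lock the `Mutex`, mutate, and let scope-based drop release the lock.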

You can think of Rust with Axum/Actix as a drop-in replacement for Go or Python/Flask, with the added benefits of (1) no GC and bare-metal performance, (2) a much lower defect rate as a consequence of the language's ergonomics, and (3) run-it-and-forget-it operation: no GC tuning, very simple scaling.

Rust has effectively made writing with the performance of C++ feel like writing Ruby, but with unparalleled low defect rates and safety on account of the type system.


>Rust has effectively made writing with the performance of C++ feel like writing Ruby, but with unparalleled low defect rates and safety on account of the type system.

This is a little overblown. Speaking VERY HAND-WAVILY, sea_orm has roughly 10x the mental overhead of ActiveRecord, but it's at least that much more performant...

But yeah, vibe-coding Rust microservices is pretty amazing lately: almost no interactions with the borrow checker, and I'm even using Cucumber specs...


You're right on that front.

I currently wouldn't recommend any Rust ORM, Diesel included. They're just not quite ready for prime time.

If you're not one to shy away from raw SQL, then SQLx is rock-solid. I actually prefer it over ORMs in general. It's type-checked at runtime or compile time against your schema, no matter how complex the query gets. It manages to do this with an incredibly simple design.

It's like an even nicer version of Java's popular jOOQ framework, which I always felt was incredibly ugly.

SQLx might be my very favorite SQL library of any language.


I will give it another look, thanks!


Isn’t there `uv tool run ruff` already for this? Or `uv run ruff` if it’s a proper dependency? I’m not sure what’s the point of a special shortcut command, unless there are plans to make it flexible so it’ll be an abstraction over formatters (unifying ruff, black, etc).


Yeah, you can definitely use `uvx ruff` (an alias for `uv tool run ruff`) to invoke Ruff. That's what I've done in my own projects historically.

The goal here is to see if users like a more streamlined experience with an opinionated default, like you have in Rust or Go: install uv, use `uv init` to create a project, use `uv run` to run your code, `uv format` to format it, etc. Maybe they won't like it! TBD.

(Ruff is installed when you invoke `uv format`, rather than bundled with the uv binary, so if you never use `uv format`, there aren't any material downsides to the experiment.)


> (Ruff is installed when you invoke `uv format`, rather than bundled with the uv binary, so if you never use `uv format`, there aren't any material downsides to the experiment.)

That’s thoughtful design and could be worth mentioning in the blog post.


Would you ever consider bundling ruff binaries with uv releases similar to uvx and uvw? It would benefit offline users and keep compatible uv/ruff versions in sync.

Perhaps even better… cargo-like commands such as uv check, uv doc, and uv test could subsume ruff, ty, and other tools that we haven’t seen yet ;)

A pyup command that installs python-build-standalone, uv, python docs, etc. would be totally clutch, as would standalone installers [0] that bundle it all together.

[0] https://forge.rust-lang.org/infra/other-installation-methods...


It's part of the mission for uv to become "cargo for python". A one stop swiss-army knife for everything you need to manage a Python project. I think it'll get a `uv test` command at some point too.

The whole point is you just install `uv` and stop thinking about the pantheon of tools.


It'll be interesting to see how the test is done. At the tox level, or the pytest level? (Or another level?) I can see all being useful and ergonomic, but tox's multi-environment setup might fit into it really well.


Is `uv format` supposed to be an alias for `ruff check`?

Stupidly, I ran `uv format` without `--check` (no harm done; I can `git diff` it), so I didn't see the changes it made. But `ruff check` still shows things that can be fixed with `ruff check --fix`. If I'm guessing correctly, the difference comes down to the fact that I have (in the submodule where all changes were made) a pyproject.toml file with ruff rules (there's also a .flake8 file; the repo is being converted). Either way, I find this a bit confusing user-side. Not sure what to expect.

I think one thing I would like is for `uv format` to print, by default, which files were changed, like `uv format --check` does (s/Would reformat/Reformatted/g). It's fine for the actual changes not to be displayed, but I think this could help with error reduction. Running it again, I can see it knows 68 files were changed. Where is that information being stored? It's pretty hard to grep out a number like that (`grep -R \<68\>`), and there are a lot of candidates (honestly, nothing that looks like a good candidate).

Also, there's a `--quiet` flag, but the output is already pretty quiet. As far as I can tell, the only difference is that quiet suppresses the warning (does `--quiet` also suppress errors?):

  uv format
  warning: `uv format` is experimental and may change without warning. Pass `--preview-features format` to disable this warning.
  36 files reformatted, 31 files left unchanged

  uv format --quiet
  36 files reformatted, 31 files left unchanged
I like the result of `--quiet`, but I have a strong preference that `uv format` match the verbosity of `uv format --check`. I can always throw information away, but I can't recover it. I have a strong bias that it is better to fail by displaying too much information than by displaying too little; the latter failure mode is more harmful, since the former is much more easily addressed by existing tools. If you're taking votes, that's mine.

Anyways, looking forward to seeing how this matures. Loved everything so far!


> Is `uv format` supposed to be an alias for `ruff check`?

I'd imagine not, since `ruff format` and `ruff check` are separate things too.


That makes some more sense. I think I just misunderstood what Charlie was saying above.

But I'll also add another suggestion/ask. I think this could be improved

  $ ruff format --help
  Run the Ruff formatter on the given files or directories
  $ uv format --help
  Format Python code in the project
I think just a little more can go a long way. When --help is the docs instead of man, I think there needs to be a bit more description. Just something like this tells users a lot more:

  $ ruff format --help
  Formats the specified files. Acts as a drop-in replacement for black. 
  $ uv format --help
  Experimental uv formatting. Alias to `ruff format`
I think man/help pages are underappreciated. I know I'm not the only one who discovers new capabilities by reading them, or by double-tabbing when I can't remember a flag name and noticing something new. Or maybe I did notice it before, but since the tool was new I focused on the main features first. Having enough information to figure out what these things do, right then and there, really speeds up usage. When the help lines don't say much, I often never explore them (unless there's some gut feeling). I know the browser exists, but when I'm using the tool I'm not in the browser.


> To clarify, `ruff` and `uv` aren't being merged.

ruff at least seems to be compiled into uv, as the format worked here without a local ruff. This is significantly more than just an interface. Whether they are managed and developed as separate tools doesn't matter.

> This is more about providing a simpler experience for users that don't want to think about their formatter as a separate tool.

Then build a separate interface, some script/binary acting as a unified interface, maybe with its separate distribution of all tools. Pushing it into uv is just adding a burden to those who don't want this.

uv and ruff are poor names anyway, this could be used to at least introduce a good name for this everything-python-tool they seem to aim for.


ruff is not compiled into uv; it's bootstrapped from an independent build, much like how `cargo fmt` is bootstrapped from a separate toolchain component (rustfmt). You can see how that works in the PR[1]. Importantly, that means that you don't experience any build-, install-, or run-time burden if you don't use this subcommand.

[1]: https://github.com/astral-sh/uv/pull/15017


This is cool. Is there a way to call ruff’s linter? Like `uv lint`, which would call `ruff check`.

To your analogy, it’d be like `cargo clippy`


You can always use `uvx ruff check` or `uv tool run ruff check`. Though honestly I find just running `ruff check` much easier.


uv ruffy sounds funny


Does it have the capability to use a different formatter than ruff?


This is about providing an opinionated default. uv will still support installing and running any formatter as before.


They are mimicking Rust's cargo, which has `cargo fmt`


> They are mimicking Rust's cargo

Cargo cargo cult?


It's not a cargo cult if it actually works.


Also `go fmt` and `dart format`.


Doesn't cargo just have a subcommand plugin system? Or is fmt actually hard-coded into the cargo code?

I prefer the plugin system. I don't like god programs like what the npm monstrosity became.


cargo has an external subcommand system, but it also has "blessed" (my word choice) external subcommands that are typically bootstrapped via Rust toolchain components. This makes them pretty analogous to what uv does here with `uv format`, in my opinion.


I think the goal is to make uv a complete package manager for Python while still giving you the option to use the parts separately.

uv is like cargo for python.

If you only need a fast type checker you can just use ty, if you just need a fast formatter and linter you can just use ruff.

Combining ruff and ty doesn't make sense if you think about it like this.


Including a formatter in a package manager doesn't make sense to me. Seems like obvious feature creep.

My understanding was that uv is for installing dependencies (e.g. like pip) with the added benefit of also installing/managing python interpreters (which can be reasonably thought of as a dependency). This makes sense. Adding more stuff doesn't make sense.


GP should have written project manager, not package manager.

Think npm / go / cargo, not apt/yum/pip.


Doesn't make it less feature creep.


One of the maintainers said in another comment that it will download the formatter (ruff) and it is not embedded. So if you don't use that feature you won't even notice: https://hackertimes.com/item?id=44978660


I’m sure you’re an old man on the verge of death who loves yelling at clouds, but enforcement and application of consistent code formatting has been considered a basic part of project management for a while now. Recent languages provide it as part of their core project-management tooling.

Given uv is openly strongly inspired by cargo and astral also has tooling for code formatting, the integration was never a question of “if”.


I remember how, in a previous job, the code formatter cost me time again and again. I already intentionally format my code as it makes sense, with the goal of improving readability. Then the damn auto-formatter comes along and destroys this by splitting a log call over 5 lines, because it has seen that the log call is longer than 80 characters. Thank you for wasting 5 lines of screen space on something that is basically a side note. That'll surely improve readability. So what do people do? They increase the line length to 200 characters to avoid this happening. Only now it no longer breaks long lines that should be broken, unless I add trailing commas everywhere, wasting more time to make the formatter behave properly.

I am not against auto-formatters in general, but they need to be flexible and semantically aware. A log call does not have the same significance as other calls. If the auto-formatter is too silly to see that, then I prefer no auto-formatter at all and keep my code well formatted myself, which I do anyway while writing it. I do it for my own sake and for anyone who comes along later. My formatting is already pretty much standard.


It’s not a package manager. It’s a project manager.


Doing a lot of Rust, there is one huge benefit of having cargo handle rustfmt: it knows the fileset you're talking about. It will not blindly format all rust files in the cwd, rather the "current" crate (current having the same definition as cargo!).

Translating this to uv, this will streamline having multiple python packages in the same directory/git repo, and leave e.g. vendored dependencies alone.

Also, since their goal really is "making cargo for Python", it will likely support package-scoped ruff config files, instead of being file- or directory-based.


I think it's good to let them experiment! Cargo (and Go?) offers this already, so why not.


But what if `ty` was also eventually merged into `uv` as well? 8-)

That's probably the vision, given all from astral.sh, but `ty` isn't ready yet.


Oh please no...

The reality is, ecosystems evolve. First, we had mypy. Then more type checkers came out: pyre, pyright, etc. Then basedpyright. The era of rust arrived and now we have `ty` and `pyrefly` being worked on heavily.

On the linter side we saw flake8, black, and then ruff.

Decoupling makes adapting to evolution much easier. As long as both continue to offer LSP integrations, it allows engineers to pick and choose what's best.


The whole premise of uv is that you don't need to know that you can install a specific Python version using e.g. pyenv (`uv python install`, or `uv run` may do it implicitly); you don't need to know about `python -m venv`/virtualenv (`uv venv`), or how to create lock files with pip-tools/pipenv/poetry/etc. (`uv lock`), or about pipx (`uv tool install`), or `pip install`/`pipenv install`/`poetry add`/many others (`uv add`), or how to build artifacts the setuptools/hatchling/poetry way (`uv build`). Other commands such as `uv sync` didn't break new ground either.

`uv format` is similar (you don't need to know about `ruff format` / black / yapf ).


All actions listed in your first paragraph, except for installing specific Python versions, are actions related to the area of packaging. Doing it in one tool is completely sensible. I'm not a fan of uv managing Pythons, but I guess that ship has sailed.

But formatting code is a completely new area that does not fit uv.


This is the direction I expected things to go, and not something I'm especially fond of. I'll stick with UNIX-philosophy tools, thanks.


This is very much in line with the Unix philosophy - it delegates formatting to ruff and simply provides a unified front end that calls out to the right specialized tool. Think of it as a Makefile.


I don't think this is an apt (pun intended?) comparison at all.


One can find repos using `make format` / `make lint`/ `make typecheck` / or similar

I remember David Beazley mentioning that code with Makefiles was relatively easier to analyze, based on ~a terabyte of C++ code and no internet connection (PyCon 2014): https://youtube.com/watch?v=RZ4Sn-Y7AP8


That `make format` command was not defined by the Make developers, but by the team using Make in their project. They picked their favorite formatter and defined a shortcut. In this case, the uv developers are forcing the command on everyone, and they're using it to cross-promote their own formatting tool.


They are not forcing anything on anyone. You can decide to never run `uv format` and ruff won’t even be installed.

You can use uv without ruff. You can use ruff without uv. You can invoke ruff yourself if that’s what you want. Or use any other formatter.

I don’t think I understand what your complaint is.


Other people having an opinion and creating their own software project to implement it is not “forcing” anyone to do anything.

The inverse would be no one is allowed to create any projects that you don’t personally agree with.


Nobody is forcing you to use anything. Feel free to ignore it and use whatever flavor you like.


A better example might be: in good ol' days when we were formatting with troff(1), passing arguments to the command line invoked other programs like eqn(1) and tbl(1).


If I want to call ruff, I can do so myself. Why should I want to call it through uv?


If you want to call ruff directly, this doesn't change anything. It's a purely optional feature.

However, to answer the question generally: people want this for the same reason that most people call `cargo fmt` instead of running rustfmt[1] directly: it's a better developer experience, particularly if you don't already think of code formatting as an XY-type problem ("I want to format my code, and now I have to discover a formatter" versus "I want to format my code, and my tool already has that").

[1]: https://github.com/rust-lang/rustfmt


Some of us prefer a well-packaged tool that does everything instead of stitching together a bazillion dependencies.


Or maybe some prefer random versions of dependencies being downloaded and run over our code?


There is wisdom in knowing when -- and how -- to break standards. I don't know for sure that this is such a case, but I think it is. If introducing fmt powers to uv meant it had to accept trade-offs elsewhere that might hurt its quality, then maybe not; but in this case uv is more like an umbrella, unifying the interface for pip, venv, builds... and now fmt, all while keeping each separate domain isolated, without details leaking from one to another.


What do I gain from adding 'uv' to the start of each of these commands, as opposed to having them all just be separate commands?


Abstraction. Not having to know all the innards (or even names) of each until you want to. It's all there if you want to, but stuff like uv (or cargo, or go's toolset) greatly simplifies 3 scenarios in particular: starting a new project, joining an existing project, and learning Python for the first time.

All 3 scenarios benefit from removing the choice of build tool, package manager, venv manager, formatter, linter, etc., and saying, "here, use this and get on with your life".


How is "uv format" a better name, or more "abstract", etc. etc., than "ruff check"? Why is it easier to think of my formatter and package manager (or whatever other pieces) as being conceptually the same tool, given that they are doing clearly different, independent and unrelated things?

And why is any of this relevant to first-time Python learners? (It's already a lot to ask that they have to understand version control at the same time that they're learning specific language syntax along with the general concept of a programming language....)


It’s an abstraction because it literally hides knowledge in service of presenting a more cohesive API to the human.

It requires less knowledge at the front end, which is when people are being bombarded with a ton of new things to learn.

Learners don’t have to even know what ruff is immediately. They just know that when they add “format” to the command they already know, uv, their code is formatted. At some later date when they know Python better and have more opinions, they can look into how and why that’s accomplished, but until then they can focus on learning Python.

uv isn’t only a package manager; it’s best thought of as a project manager, just like go or cargo. Its “one thing” is managing your Python project.


Would Linux similarly be better if we wrote e.g. "cu list" instead of "ls", "cu change" instead of "cd", etc.? (The "cu" stands for "coreutils", of course.) Because it seems to me like the same arguments apply. I was already thinking of uv as a "project manager" and I understand that intended scope, and even respect the undertaking. My point is that I don't believe that labeling all the tasks under that scope like this actually improves the UX.

Maybe I'm wrong about that. But I don't know that it can actually be A/B tested fairly, given network effects (people teaching each other or proselytizing to each other about the new way).


I don't think Linux would be better with a `cu` prefix for coreutils, but I do think git would be worse without a `git` prefix. I think it's ultimately a question of user expectations, and I think user expectations around packaging tooling in particular have shifted towards the Go and Rust styles of providing a "namespace" tool that provides a single verb-style interface for developer actions.


The meaning of the word "ruff" has nothing to do with formatting, so it's harder to remember than "format". If they could just call the formatter "format", that would be best, but obviously that name is too overloaded. So they namespace it under a tool people already know: "uv".

Let's imagine you're learning a new language, which has tools with names that I just made up. Which has a clearer pattern to you?

    1. Check code with  `blargle check`
    2. Format code with `blargle format`
    3. Run code with    `blargle run`
Or

    1. Check code with  `zoop`
    2. Format code with `chungus`
    3. Run code with    `kachow`
Clearly, the first is easier to remember. The benefit of namespacing is that you only need to remember one weird/unique name, and then everything under that can have normal names. If you don't have namespacing, you need weird/unique names for every different tool.


Well, for one thing, separate commands that are as good as what uv does don't exist.


I also don't know what I'd gain, but it doesn't mean there isn't practical use for someone else.

But most importantly, apart from breaking away from "UNIX-philosophy tools", what do you lose in practical terms?


The spirit of the Unix philosophy is about not implementing MxN use cases separately. Running the same program as a separate binary versus as a subcommand has nothing to do with it.


I mean, Go was designed by one of the authors of UNIX, and that has very much batteries-included tooling.


So UNIXy that he didn't even like long options (--option) in the standard flag library.


Long options are more of a GNU thing and GNU's Not Unix.


And I would think the next logical step here is a `uv lint` option that runs `ty` under the hood?

I would love to see a world where there is a single, standard set of commands to prepare your Python project (format, lint, test, publish). Maybe that’s the vision here?


It's interesting you said that. My experience is the opposite. In my last 10 years in California, I've had power outages a couple times a year (mostly due to storm / trees falling on the electrical lines). But I don't recall a time I got water cut off.


For some reason, I see this style of "everything lowercase" more often recently. It distracts me from the content a lot. Is there a reason this style has become more popular?


It's the writing pattern of younger Gen Z and older Gen Alpha. Combined with minimal-to-no punctuation, it can be very distracting to read. However, this is typically used in informal settings like group chats. It is very odd to see it in formal, serious, or professional settings outside of, again, DMs and the like.

I personally only made it about thirty seconds before I had to stop reading the article. The excessive line breaks and paragraphs-that-are-just-run-on-sentences, on top of sporadic sentence casing and missing punctuation, present the writer as partly illiterate. I just can't shake the feeling that I'm reading the words of, for lack of a graceful term, an idiot.


Sometimes there's some sort of JS/CSS rendering trick that has broken on the page. However in this case words like "I'll" still contain their capital.

In this context (not an immediate interactive chat) I find these issues awkward and disrespectful to the reader, the same as if the piece was filled with typos.

When the author treats a post as worthless throwaway text which isn't worth fixing even when they have plenty of opportunity... Why would that be worth the time of a stranger to read?


I agree — yet here we are on the front page of HN.


To twist a quote: The attention economy can remain irrational longer than I can remain sentient. :p


honestly the worst is when it’s all lowercase AND there are lots of typos.


My theory: it's a response to the all-caps internet stuff, which reads as shouting; think of it as an attempt to do ASMR for text.


I've seen the argument made that "when people talk and text, they don't use punctuation." This seems like nonsense to me, since when people talk they inflect in rather varied ways, and there is punctuation in texts every time the "send" button is hit. 2300 years ago, the Greeks didn't have punctuation, and they had to invent it!


The responses of some folks in this thread remind me of this:

https://xkcd.com/1172/


That's more a joke about people coming to rely on any observable behavior of something, no matter how buggy or unintentional.

Here we're talking about killing off XSLT used in the intended, documented, standard way.


> Imagen 4 Ultra: When your creative vision demands the highest level of detail and strict adherence to your prompts, Imagen 4 Ultra delivers highly-aligned results.

It seems that you may need the "Ultra" version if you want strict prompt adherence.

It's an interesting strategy. Personally, I notice that most of the time I don't actually need strict prompt adherence for image generation. If it looks nice, I'll accept it. If it doesn't, I'll click generate again. For creative tasks, following the prompt too strictly might not be the outcome users want.


I've found this is an interesting balance with Copilot specifically. Like, on the one hand I'm glad it aims for the bare minimum and doesn't try to refactor my whole codebase on every shot... at the same time, there's certain obvious things where I wish it was able to think a bit bigger picture, or even engage me interactively, like "hey, I can do a self-contained implementation here, but it's a bit gross; it looks like adding dependency X to the project keeps this a one liner— which way should it go?"


Give me a 'precision' slider then. On one end it should do precisely what you asked, to a T, even if what you asked for is dumb, and on the other end it should try to capture the spirit of what you wanted plus any obvious oversights.


I’ve had good experience with iterative prompting when generating images with Gemini (idk which model — it’s whatever we get with our enterprise subscription at work, presumably the latest.) It’s noticeably better than ChatGPT at incorporating its previous image attempt into my instructions to generate the next iteration.

