To be fair, early Wine (when I first tried it) wasn't very usable, especially for gaming. So if you were an early enthusiast adopter, you might've just experienced its growing pains.
Also, I assume some Windows version jumps didn't make things easy for Wine either lol
The hype/performance mismatch was significant in the 2000s for Wine. I’m not sure if there was any actual use case aside from running obscure business software.
Yes, there was “the list” but there was no context and it was hard to replicate settings.
I think everyone tried running a contemporary version of Office or Photoshop, saw the installer spit out cryptic messages, and just gave up. Enough time has passed with enough work done that Wine now supports (or is getting close to supporting) the software we wanted all along.
Also, does anyone remember the rumours that OS X was going to run Windows applications?
I used WINE a lot in the 2000s, mostly for gaming. It was often pretty usable, but you often needed some hacky patches not suitable for inclusion in mainline. I played back then with Cedega and later CrossOver Games, but the games I played the most also had Mac ports so they had working OpenGL renderers.
My first memorable foray into Linux packaging was creating proper Ubuntu packages for builds of WINE that carried compatibility and performance patches for running Warcraft III and World of Warcraft.
Nowadays Proton is the distribution that includes such hacks where necessary, and there are lots of good options for managing per-game WINEPREFIXes including Wine itself. A lot of the UX around it has improved, and DirectX support has gotten really, really good.
But for me at least, WINE was genuinely useful as well as technically impressive even back then.
I remember it being surprisingly decent for games back then. Then a lot of games moved to Steam, which made it way harder to run them in Wine. Of course there was later Proton for that, but not on Mac.
Games are one of the easier things to emulate since gaming mechanics are usually entirely a compute problem (and thus not super reliant on kernel APIs / system libraries). Most games contain the logic for their entire world and their UI. The main interface is via graphics APIs, which are better standardized and described, since they are attempting to expose GPU features.
I worked on many improvements to wine's Direct3d layers over a decade ago... it's shockingly "simple" to understand what's happening -- it's mostly a direct translation.
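To give a feel for what "mostly a direct translation" means, here's a toy sketch of a translation layer in Python. All the names are invented for illustration (Wine's real d3d code is C and handles far more: state tracking, format quirks, shader translation), but the core shape is the same: map one API's constants and calls onto another's.

```python
# Toy sketch of an API translation layer: map "D3D-style" draw calls onto
# "GL-style" ones. All names are invented for illustration; this is not
# Wine's actual code.

# Direct mapping of one API's enum values onto the other's.
D3D_TO_GL_PRIMITIVE = {
    "D3DPT_TRIANGLELIST": "GL_TRIANGLES",
    "D3DPT_TRIANGLESTRIP": "GL_TRIANGLE_STRIP",
    "D3DPT_LINELIST": "GL_LINES",
}

class FakeGL:
    """Stand-in for the host graphics API; just records calls."""
    def __init__(self):
        self.calls = []

    def draw_arrays(self, mode, first, count):
        self.calls.append(("draw_arrays", mode, first, count))

class D3DDevice:
    """The 'guest' API surface the application programs against."""
    def __init__(self, gl):
        self.gl = gl

    def draw_primitive(self, prim_type, start_vertex, prim_count):
        mode = D3D_TO_GL_PRIMITIVE[prim_type]
        # The only real work here: D3D counts primitives, GL counts vertices.
        if prim_type == "D3DPT_TRIANGLELIST":
            vertex_count = prim_count * 3
        elif prim_type == "D3DPT_TRIANGLESTRIP":
            vertex_count = prim_count + 2
        else:  # line list
            vertex_count = prim_count * 2
        self.gl.draw_arrays(mode, start_vertex, vertex_count)

gl = FakeGL()
dev = D3DDevice(gl)
dev.draw_primitive("D3DPT_TRIANGLELIST", 0, 2)  # 2 triangles -> 6 vertices
print(gl.calls)  # [('draw_arrays', 'GL_TRIANGLES', 0, 6)]
```

The hard parts in practice are the cases where the two APIs genuinely disagree (coordinate conventions, pixel formats, shaders), but a surprising amount really is a one-to-one mapping like this.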
Also, these apps changed. A lot of Windows programs were simple executables, and I remember for a while it was very popular for developers to write portable apps that were just a .exe you ran; Excel and other programs worked fine too. But then Microsoft and others started using MSIs or whatever it's called, and more complex executable formats, and it was no longer possible. Microsoft and Adobe also switched to subscription-based systems.
Most of the transforms you describe are still unfortunately destructive (i.e. the only way to go back is to undo). I'm not an expert on this, but I think the only way this could be keyframed would be to take snapshots of the pixels and insert the modified raster data as keyframes? I'm not sure there's a good/correct/obvious way to interpolate between, say, a before and after liquify operation the way it currently works. Maybe some of them could store brush+inputs (pressure, cursor movement, etc.) but that seems difficult to work with as an artist. Again, I haven't done much animation (as a dev or artist), so maybe I'm just out of the loop completely.
But yeah I agree with you in principle though, it would be nice if these were non-destructive and could be keyframed.
They are all non-destructive in Krita. Just use a transform mask and go to tool options, select liquefy and after you liquefy however you want you can just hide the transform mask and it stops liquefying the layer.
Yes, Krita has had this feature for years. Non-destructive filters (adjustment layers), too.
GIMP still doesn't have it. Only in 3.0 did it get adjustment layers for filters.
Oh, this is news to me! I've used Krita to paint (recreational noob, not on a professional level) and I never realised this. I'll play with this tomorrow.
No horse in this race, but your phrasing seems a bit weird, honestly... If reduced, your comments read as:
"You don't know about X? Well, at least I know about X and Y..." Doesn't seem like a good faith comment to me either?
And then you say "You misunderstood my intentions so I'm going to disengage". For what it's worth, I didn't interpret your argument as insulting someone, but also it wasn't a useful or productive comment either.
What did you hope to achieve with your comments? Was it simply to state how you know something the other person doesn't? What purpose do you think that serves here?
If AI writes a for loop the same way you would... does it automatically mean the code is bad because you (or someone you approve of) didn't write it? What is the actual argument being made here? All code has trade-offs; does AI make bad cost/benefit analyses? Hell yeah it does. Do humans make the same mistakes? I can tell you for certain they do, because at least half of my career was spent fixing those mistakes... before there was ever an LLM in sight. So again... what's the argument here? AI can produce more code, so more possibility for fuck-ups? Well, don't vibe code with "approve everything" then. What are we even talking about? It's not the tool, it's the users, and as with any tool there's going to be misuse, especially with new and emerging ones lol
I don't know why you have to qualify your sentence with "think carefully before you respond"; it makes it seem like you're setting up some rhetorical trap... But I'll assume it's in good faith? Anyway...
I don't mind if a review is AI-assisted. I've always been a fan of the whole "human in the loop" concept in general. Maybe the AI helps them catch something they'd normally miss or gloss over. Everyone tends to have different priorities when reviewing PRs, and it's not like humans don't have lapses in judgement either (I'm not trying to anthropomorphise AI, but you know what I mean).
My stance is the same about writing code. I honestly don't mind if the code was written in `ed` on a Linux-powered toaster from 2005 with a 32x32 screen, or if they wrote it using Claude Code 9000.
At the end of the day, the person who's submitting the code (or signing off a review) is responsible for their actions.
So in a roundabout way, to answer your question: I think AI as part of the review is fine. As impressive as its output can sometimes be, it can be both impressively good and impressively bad. So no, relying only on AI for review is not enough.
The PR touched a lot of internals, including module code, and mirrors the fs APIs. So yes, it was big, but the commit history was largely clean and followed a development story, and it was tested. The code quality was decent too. I didn't review all of it because I don't have a personal stake in this, though.
I suggest EVERYONE in this thread go read the GitHub PR in question. There are some good arguments for and against AI, and what it means for FOSS... But good lord, you will have to sift through the virtue-signalling bullshit and have patience for the constant moving of goalposts.
I think it's just the GPL family of licenses that tend to cause most problems. I appreciate their intent, but the outcome often leaves a lot to be desired.
The GPL exists for the benefit of end users, not developers. It being a chore for developers who want to deny their users the software freedoms is a feature, not a bug.
They have the right to use the code, and they have the right to use improvements that someone else made, and they have the right to get someone to make improvements for them.
They also have the guarantee that the code licensed under the GPL, and all future enhancements to it, will remain free software. The same is not true of a permissive license like MIT.
As far as I know, all the (L)GPL does is make sure that if A releases some code under it, then B can't release a non-free enhancement without A's permission. A can still do whatever they want, including sell ownership to B.
Neither GPL nor MIT (or anything else) protects you against this.
(EDIT) scenario: I make a browser extension and release v1 under GPL, it becomes popular and I sell it to an adtech company. They can do whatever they want with v2.
By allowing them to benefit from the work of others who do. Directly or indirectly.
I’m not good at car maintenance but I would benefit from an environment where schematics are open and cars are easy to maintain by everyone: there would be more knowledge around it, more garages for me to choose from, etc.
Isn't the legal situation the opposite here? Car manufacturers don't release schematics because they believe in "free as in freedom". In fact, any interfaces you as an end user or an independent garage can use, and any schematics that are released (such as the protocol for the diagnostic port), are open primarily because governments made laws saying so.
I'm most familiar with the "right to repair" situation with John Deere, which occasionally pops up on HN. The spirit of someone who releases something under GPL seems the opposite of that?
If you have ill intentions or maybe you're a corporation that wants to use someone else's work for free without contributing anything back, then yes, I can see how GPL licenses "tend to cause problems".
Why? What's your problem with them? They do exactly what they're supposed to do, to ensure that future derivatives of the source code have to be distributed under the same license and distribution respects fundamental freedoms.
I like to think about GPL as a kind of an artistic performance and an elaborate critique of the whole concept of copyright.
Like, "we don't like copyright, but since you insist on enforcing it and we can't do anything against it, we will invent a clever way to use your own rules against you".
That is not really the motivation behind GPL licenses. These licenses have been designed to ensure by legal means that anyone can learn from the source code of software, fix bugs on their own, and modify the software to their needs.
Wtf are these comments? An LGPL-licensed project, guaranteed to be free and open source, being LLM-washed to a permissive license, and GPL is the problem here?
They are literally stealing from open source, but it's the original license that is the issue?
That whole clean-room argument makes no sense. The project changed governance and was significantly refactored or reimplemented... I think the maintainers deserve to call it their own. The original pre-MIT release can stay LGPL.
I don't think this is a precedent either, plenty of projects changed licenses lol.
I keep kind of mixing them up, but the GPL licenses keep popping up in occasional horror stories. Maybe the license is just poorly written by today's standards?
Ok, since this is not really answered... Hypothetically, say I'm a maintainer of this project. I decide I hate the implementation: it's naive, has horrible performance and weird edge cases. I'm wiser today than I was 3 years ago.
I rewrite it, my head full of my own, original, new ideas. The results turn out great. There's a few if and while loops that look the same, and some public interfaces stayed the same. But all the guts are brand new, shiny, my own.
You have all rights to the code that you wrote that is not "colored" by previous code. Aka "an original work"
But code that is any kind of derivative of the code before it contains a complex mix of other people's rights. It can be relicensed, but only if all authors, large and small, agree to the terms.
You have rights, but if it's a derivative, the original author might have rights too. If you made a substantial creative input, the original author can't copy your project without your permission, but neither can you copy theirs.
Hmm are we in a ship of Theseus/speciation area? Each individual step of refactoring would not cross the threshold but would a rewrite? Even if the end result was the same?
Let us also remember that certain architectural changes need to happen over a series of planned refactors. Nobody wants to read a 5000-line shotgun-blast of a diff.
So effectively, LGPL means you freely give all copyright for your work to the license holder? Even if the license holder has moved on from the project?
What if I decide to make a JS or Rust implementation of this project and use it as inspiration? Does that mean I'm no longer doing a "clean room" implementation and my project is contaminated by LGPL too?
If a copyright holder does not give you permission, you can't legally relicense. Even if they're dead.
If they're dead and their estate doesn't care, you might pirate it without getting sued, but any recipient of the new work would be just as liable as you are, and they'd know that, so I probably wouldn't risk it.
Governance change or refactoring don’t give you a right to relicense someone else’s work. It needs to be a whole new work, which you own the copyright to.
I don't think they made them hard by design; it's just the limitations of the current API and prioritisation. Dynamic queries are possible, just not trivial.
Oh right, I know what you mean now. I was thinking more along the lines of the QueryBuilder API (and you can write extension traits to make things more ergonomic). But yeah, some of their APIs work only/best with static strings.
There's also sea-query, and sea-orm was already mentioned!
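For what it's worth, the core idea behind those builder APIs is simple enough to sketch generically. This is a Python illustration of the concept, not sea-query's or sqlx's actual API: accumulate trusted SQL fragments and bind parameters separately, so runtime conditions can shape the query without ever splicing user input into the SQL string.

```python
# Minimal sketch of the query-builder concept (generic illustration,
# not the real sea-query/sqlx API): trusted SQL fragments and bound
# values are kept separate, and the query is assembled at runtime.

class QueryBuilder:
    def __init__(self, base):
        self.sql = [base]
        self.params = []

    def push(self, fragment):
        """Append a trusted SQL fragment (keywords, column names)."""
        self.sql.append(fragment)
        return self

    def push_bind(self, value):
        """Append a placeholder and remember the value for binding."""
        self.sql.append("?")
        self.params.append(value)
        return self

    def build(self):
        return " ".join(self.sql), tuple(self.params)

# Build a query whose WHERE clause depends on which filters are present.
qb = QueryBuilder("SELECT id, name FROM users WHERE 1=1")
filters = {"name": "alice", "min_age": 30}
if "name" in filters:
    qb.push("AND name =").push_bind(filters["name"])
if "min_age" in filters:
    qb.push("AND age >=").push_bind(filters["min_age"])

sql, params = qb.build()
print(sql)     # SELECT id, name FROM users WHERE 1=1 AND name = ? AND age >= ?
print(params)  # ('alice', 30)
```

The static-string-only APIs trade this flexibility for compile-time checking of the query text, which is why the dynamic case needs the builder-style escape hatch.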
There's lots of things you could do. Imagine you're making a group chat bot (way more difficult than a 1-1 chat) where people can play social games by giving the LLM game rules. You could have an agent that only manages game state using natural language (controlled by the main LLM). You could have another agent dedicated to remembering the important parts of the conversation while ignoring chit-chat.
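A minimal sketch of that kind of split might look like this. Everything here is hypothetical: the handlers are stubs standing in for real LLM calls, and the routing is a trivial prefix check where a real bot would let the main model decide.

```python
# Toy sketch of splitting a group chat bot into specialised agents.
# All names are hypothetical; each handler stands in for an LLM call.

def game_state_agent(message, state):
    """Specialist: tracks game state described in natural language."""
    state.append(message)
    return f"game state updated ({len(state)} events)"

def memory_agent(message, memory):
    """Specialist: remembers important messages, ignores chit-chat."""
    if "remember" in message:
        memory.append(message)
        return "noted"
    return "ignored"

def route(message, state, memory):
    """Main agent: dispatch to whichever specialist should handle this.
    (In a real bot, the main LLM would make this decision.)"""
    if message.startswith("/game"):
        return game_state_agent(message, state)
    return memory_agent(message, memory)

state, memory = [], []
print(route("/game alice scores 3 points", state, memory))
print(route("remember: bob is the referee", state, memory))
print(route("lol nice one", state, memory))
```

The point of the split is that each specialist sees only the context it needs, so the chit-chat never pollutes the game-state agent's prompt and vice versa.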
Just Google it. There's tons of research on this so I don't know why I need to provide a specific link when this is common knowledge.
But also here is something to think about: your body will produce more D3 than that by being in the sun for just several minutes. So if you consider such a low dose of D3 an overdose then you better steer clear of the sun!
> But also here is something to think about: your body will produce more D3 than that by being in the sun for just several minutes. So if you consider such a low dose of D3 an overdose then you better steer clear of the sun!
This is another superficial statement that displays a shallow-at-best understanding. Staying in the sun and producing D3 via the skin, and intake via food, are two separate pathways. You cannot just make wild assumptions about one of those pathways from what you know about the other.
And actually: Yes, you shouldn't stay in the sun for too long without proper protection. Having the sun shine on your skin is not some inherently healthy thing. It too comes with acceptable dosage and overdose. Symptoms of overdose are commonly known as getting a sunburn.
You can find scientific papers on a lot of search engines, not only Google.
The problem with that is that you still need to know how to interpret any results and statements within a supposedly scientific paper. If you are not a statistician, you might overlook methodological mistakes. If you are not an expert in the subject of the paper, you might not notice some side condition that makes a statement or result of the paper irrelevant for your individual situation.