It’s funny… my initial reaction to your comment was that it’s a bit persnickety to expect that. However, I’m coming around to agreeing. I recently spent a non-trivial amount of time responding to a PR on one of my projects. I did have a sense it was mostly AI, but the changes were reasonable with a bit of adjustment. Wrote some feedback and guidance for the first-time contributor and bam, they closed the PR; haven’t heard back since.
Here's my optimistic take: the fundamental things that spark joy about learning a novel algorithm, pattern, technique, etc. haven't gone anywhere, and there's no reason to think they won't continue to be interesting. Furthermore, it seems like reading code isn't going anywhere anytime soon, and that definitely benefits from clean code. It follows that someone who can actually distinguish clean code from spaghetti, and tell the LLM to refactor it into XYZ style, is going to be relatively more valuable.
Random side note: my teen son has grown up with iPhone-level tech, yet likes and finds my old Casio F91 watch very interesting. I still have faith :)
I doubt it. But I keep my own encrypted backup anyway (as I did with 1P, too), so realistically only the most recently added/updated passwords are at risk.
I think a core reason (besides not knowing jj exists), is the framing that there is a choice that has to be made, or a switch that has to occur. It is, instead, additive. I have Sublime Merge (GUI git client) and jj both looking at my git repo all day. Zed's git stuff is watching it too.
jj is sort of a bag of git tricks for me that I use when needed. It's no different than some things being easier with the git CLI vs others being easier in Sublime. I'll be at a stage where my committing/branching/rearranging wants are something that jj nails perfectly, and I do those there. As far as the other tools are concerned, I just did a bunch of sophisticated git operations.
The "colocated with git" capability of jj is probably its most amazing feature tbh, and is key to any sort of adoption.
Yes, I watched a video[0] a while back about using CHECKs and pg_jsonschema to do this. However, this only checks for conformance at insert/update time. As time goes on you'll inevitably need to evolve your structures, but you won't have any assurance as to whether past data conforms to the new structure.
The way this article suggests using JSONB would also be problematic because you're stuffing potentially varying structures into one column. You could technically create one massive jsonschema that uses oneOf to validate that the event conforms to one of your structures, but I think it would be horrible for performance.
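For the record, here's a sketch of the oneOf approach I'm describing. The table, column, and event shapes are hypothetical, and `jsonb_matches_schema` comes from the pg_jsonschema extension; note the CHECK only guards new writes, so rows inserted before a schema change are never re-validated:

```sql
-- Requires the pg_jsonschema extension.
CREATE EXTENSION IF NOT EXISTS pg_jsonschema;

-- Hypothetical events table: payload must match one of two event shapes.
CREATE TABLE events (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload jsonb NOT NULL,
    -- Enforced only at INSERT/UPDATE time; existing rows are not re-checked
    -- when you later change the schema.
    CONSTRAINT payload_matches_schema CHECK (
        jsonb_matches_schema(
            '{
               "oneOf": [
                 {
                   "type": "object",
                   "required": ["type", "user_id"],
                   "properties": {
                     "type":    {"const": "signup"},
                     "user_id": {"type": "integer"}
                   }
                 },
                 {
                   "type": "object",
                   "required": ["type", "amount"],
                   "properties": {
                     "type":   {"const": "purchase"},
                     "amount": {"type": "number"}
                   }
                 }
               ]
             }',
            payload
        )
    )
);
```

Every insert has to be tried against each branch of the oneOf until one matches, which is part of why I'd expect one giant combined schema to perform badly as the number of event types grows.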
I’m completely not in this space but your comment had me wondering: are there digital cube faces? That is, a real physical cube but with faces that can instantly be set to a given color?
They exist, but one of the problems is they're not particularly good cubes. While it might help you learn the basics, not being able to handle it like a speedcube means they're probably not going to help you get faster.
That being said, while looking up those links, I found out that, since I got out of the hobby, smart cubes have become a thing, and are made by real speedcube manufacturers.
This is an easier problem to solve. I'm not sure if you have to solve it first or if it can identify pieces on power up, but after that it's just tracking rotations, which can be done from the (fixed position) centres alone. But if an actual speedcube manufacturer can already fit those electronics in without compromising performance, I can't imagine it's that much harder to fit some addressable LEDs on some slip-ring-esque connections. Must just not be much of a market.
Same. I also had one when the school mail-order book fair offered a diary with a padlock on it. I wasn't even into writing, but I thought a locking book was so cool. 8 weeks later it shows up and turns out to be just a regular hardcover diary with the cover printed to look like a padlock.