Hacker News | twitchard's comments

Coreutils gets updates regularly! https://gitweb.git.savannah.gnu.org/gitweb/?p=coreutils.git

Even `ls` gets new flags from time to time.
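
For instance (if I remember right, coreutils 9.x added `--zero` to `ls` for NUL-terminated output), a quick way to poke at it, sketched in Python:

    import subprocess

    # `ls --zero` prints NUL-terminated names (coreutils 9.x, if I
    # remember right) -- handy for filenames that contain newlines.
    out = subprocess.run(["ls", "--zero"], capture_output=True, check=True).stdout
    names = [n.decode() for n in out.split(b"\0") if n]
    print(names)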

I think "stopping" is great for software that people want to be stable (like `ls`) but lots of software (web frameworks, SaaS) people start using specifically because they want a stream of updates and they want their software to get better over time.


The bar for adding new options, especially short options, is quite high for coreutils. We have a (likely outdated) page of rejected requests [1]. Some of the changes people have strong feelings about...

[1] https://www.gnu.org/software/coreutils/rejected_requests.htm...


Not sure DORA is that much of an indictment. "Change Failure Rate", for instance, is subject to tradeoffs. Organizations likely have a tolerance level for it: if changes are failing too often they slow down and invest; if changes aren't failing that much they speed up. So saying "change failure rate hasn't decreased, obviously AI must not be working" is a little silly.

"Change Lead Time" I would expect to have sped up although I can tell stories for why AI-assisted coding would have an indeterminate effect here too. Right now at a lot of orgs, the bottle neck is the review process because AI is so good at producing complete draft PRs quickly. Because reviews are scarce (not just reviews but also manual testing passes are scarce) this creates an incentive ironically to group changes into larger batches. So the definition of what a "change" is has grown too.


Re: model supremacy -- I think there's so much cross-pollination between the AI labs that this is unlikely. Everybody at these labs basically knows what everybody at the other labs is doing. *data* can be hard to get, but I don't think acquiring good data is positive feedback loop-y, where whoever wins just skyrockets away from everybody else.

Model stagnation: I thought stagnation was coming last year, then Opus 4.5 came out. Maybe the models are slowing at getting *smarter* per se, but they are still getting better at coding. And even if they stopped getting better at coding, if they got as good at other fields of inquiry (like, say, writing) as they have gotten at coding, that alone would be enormous. So I think we've got a ways to go yet before the progress in terms of economic usefulness slows down.


I don't think 10 minutes of AI experience is enough to be a master? AI-assisted software engineering is a skill that I want to get better at, just like traditional software engineering is.


Twice as fast, half as costly, too!


Why do you think that the human mind can contain semantics but a machine cannot? This argument needs some sort of dualism, or what Turing called "the objection from continuity" to account for this.

FWIW I don't think that the "triangularity" in my head is the true mathematical concept of "triangularity". When my son learned about triangles, for example, at first the concept was just a particular triangle in his set of toy shapes. Then eventually I pointed at more things and said "triangle", and now his concept of triangle is larger and includes multiple things he has seen and sentences people have said about triangles. I don't see any difficulty with semantics being "a matter of image", really.

Why do we believe that semantics can exist in the human mind but cannot exist in the internals of a machine?

Really "semantics"

I had come across this Catholic philosopher: https://edwardfeser.blogspot.com/2019/03/artificial-intellig... who seems to make a similar argument; i.e., that it's humans who give meaning to things: "logical symbols on a piece of paper are just a bunch of meaningless ink marks"


> Why do you think that the human mind can contain semantics but a machine cannot? [...] Why do we believe that semantics can exist in the human mind but cannot exist in the internals of a machine?

Because I know human minds have semantic content (it would be incoherent to deny it, as the denial itself involves concepts), and because I know the definition of what a computer is, which is that it is a purely syntactic formalism. Anyone who knows the history of computer science will know that computation was intentionally defined as a purely syntactic process. And because it is syntactic, we can mechanize it using physical processes. And no amount of syntax ever amounts to semantics, just as no matter how many natural numbers you add, you'll never get a pineapple or even the number pi. How could it?

Whether this entails dualism or not depends on what you mean by "dualism". It does not entail Cartesian dualism, though a Cartesian dualist can accept this view as presented.

> seems to make a similar argument to this; i.e. that it's the humans who give meaning to things, "logical symbols on a piece of paper are just a bunch of meaningless ink marks"

We don't give meanings to things per se. The meaningless ink marks on a piece of paper mean just that: ink marks on a piece of paper. Those are still meanings. However, writing involves the instrumentalization of physical things to make conventional signs, and signs are things that stand in for something else. So, yes, we can make ink marks with which we associate certain meanings and agree to a convention so that we can communicate.

> FWIW I don't think that the "triangularity" in my head is the true mathematical concept of "triangularity".

What is the "true mathematical concept"?

Concepts can be vague (though triangularity per se is so crisp and simple that I reject the idea that you don't have a clear idea of "triangularity" as such), and we usually do not explicitly grasp all that's entailed by them. For example, people knew what triangles were before they learned that the sum of their angles is always 180 degrees. The latter falls out of an analysis of the concept. And this law applies to all triangles because it necessarily falls out of the concept of triangularity, not because we've empirically shown that all triangles seem to have this property, approximately.
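
(For reference, the classical derivation, Euclid I.32, really is pure conceptual analysis; a sketch: through vertex B, draw the line parallel to the opposite side AC. The three angles at B lie on a straight line, and alternate interior angles equate two of them with the angles at A and C:

    \[
      \alpha' + \beta + \gamma' = 180^\circ,
      \qquad \alpha' = \alpha,\; \gamma' = \gamma
      \quad\Longrightarrow\quad
      \alpha + \beta + \gamma = 180^\circ .
    \]

No measurement of any particular triangle enters into it.)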

> I don't see any difficulty with semantics being "a matter of image", really.

Your son, as he was learning, was abstracting from these individual examples. He realized that you don't mean this triangle, or that triangle, but something both have in common, and ultimately that is triangularity, which is not just a property or feature of a given triangle, like "green" as in "green triangle", but the what of a triangle. But if you reduce concepts to images, you end up with problems and paradoxes. For example, why should a collection of these things, to the exclusion of those things, be triangles? Or the number three: you have never encountered the number three. Or the notion of similarity between images. There are well-known issues with an imagist notion of the mind.


> What Turing was trying to do, is to isolate this "hard problem of consciousness" and separate it from easier problems we can actually answer.

Yes exactly. As a computer scientist this is a great thing to do, science is all about taking mushy concepts like "intelligence" and extracting simplified versions of them that are more tractable in technical settings. The trouble is, Turing doesn't seem to want to stop at merely arguing that forgetting about interior consciousness is useful for technical discussions -- he seems to think that interior consciousness shouldn't be important for philosophical or popular notions of thinking and intelligence, either, and that they should update to use something like his test.

So even if you updated the Turing Test for 2025 the church would probably still be writing "Antiqua et Nova" to remind people that -- yes, interior consciousness exists and is important and robot intelligence really isn't the same as human intelligence without it.


I think you misunderstood what I said.

I don't believe a 2025 version would solve the hard problem of consciousness, or even contribute meaningfully to solving it.

The way I see it, the church is using _an even older_ version of the same line of thought experiments.


Crazy how many voice AI related updates there were this week. Grok voice mode, Alexa+, Hume OCTAVE, Elevenlabs Scribe STT... big week for Voice AI!


Crazy how many Voice AI announcements there were today:

- Octave
- Alexa+
- Elevenlabs "Scribe" speech-to-text

This is all coming on the heels of Grok 3's voice mode last week.


Let me get this straight: you can be better than 90% of people if you just read a book, but wait, not just any book, it has to be the right book -- and also it doesn't have to be a book, it can also just be "lived experience" or technical documentation, that counts too.

At this point, the thesis is more qualification than statement. Mostly what I drew from the article is that the author feels smugly superior to many of his peers, and wants an excuse ("they didn't even read a book") to morally blame them for their (perceived) shortcomings, while serving up a generous helping of false modesty on the side.


You can obviously substitute whatever you want for a book, but a great book is a huge accelerator. And yeah, it has to be good, which doesn't seem very controversial. Fluent Python is a great example of this. I could, with enough effort and time, piece together some of the philosophy underlying the language's design, but someone has already put a huge amount of pedagogical effort into pre-chewing a lot of that food for me. This probably isn't intrinsic to books (I really like Josh Comeau's CSS course, for example, which has book-tier thought put into it), but I do think that books attract authors who are thoughtful, and the form factor makes them pleasant to revisit. Even some of the more suspect ones, like The Mythical Man-Month, have some beautiful prose and thought that I think wouldn't typically appear in a more colloquial format like a YouTube series.

I do appreciate you taking the time to read my mind for false modesty and vaguely insult me though, thank you!


A paragraph like

> ...I am clearly worse than almost everyone that emails me along all of these dimensions. I only have a dim understanding of how my 3-4 years of experience coming from a strong background in psychology has rounded to "senior engineer", I've only ever written tests for personal projects because no employer I've ever seen actually had any working tests or interest in getting them, and I wrote the entirety of my Master's thesis code without version control because one of the best universities in the country doesn't teach it. In short, I've never solved a truly hard problem, I'm just out here clicking the "save half a million dollar" button that no one else noticed. I'm a fucking moron.

comes off to me as false modesty in the context of an essay that characterizes the majority of industry colleagues as "drowning sleepwalkers." Take it as a criticism of your writing persona, not a personal insult. You are right that I can't read minds, only the text in front of me.

I am glad you are mentioning specific books about software here in the comments. The essay had a very thoughtful and detailed discussion of books about drawing. If it had kept that energy when it turned to discussing software, instead of retreating into taking potshots at "the average developer", consultants, etc., it would come off to me like a persuasive essay rather than a self-congratulatory smugpiece.

