
Reassign the button - maybe possible with Karabiner

It’s gotten so bad that Claude will pretend, in 10 out of 10 cases, that the task is done or the bug on the screenshot is fixed. It will even output the screenshot in chat, and you can see pretty clearly there that the bug is not fixed.

I consulted Claude chat, and it admitted this is a major problem with Claude these days. It suggested I ask for the coordinates of the UI controls on the screenshot, thus forcing it to look. So I did that the next time, and it just gave me invented coordinates for objects on the screenshot.

I consulted Claude chat again: how else can I force it to actually look at the screenshot? It said to delegate to another “QA” agent that does only one thing: look at the screenshot and give a verdict.

I did that. The next time, the job was again reported done, but on the screenshot it wasn’t. It turns out the agent did everything as instructed: it spawned an agent, and the QA agent inspected the screenshot. But instead of taking that agent’s conclusion, the coder agent gave its own verdict that it was done.

It will do anything. If you don’t cover every possible situation, it will find a “technicality”, a loophole that lets it declare the job done no matter what.

And on top of that, if you develop for native macOS, there’s no official tooling for visual verification. It’s like 95% of development is web, and LLM providers care only about that.
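
The only durable fix seems to be structural: the harness, not the coder agent, has to own the “done” signal. A rough Python sketch of the loop I was trying to get; the callables are hypothetical stand-ins for whatever agent invocations you use, not a real Claude Code API:

    from typing import Callable

    def fix_until_verified(
        task: str,
        run_coder: Callable[[str], None],  # worker agent call (stand-in)
        run_qa: Callable[[bytes], str],    # judge: screenshot -> "PASS" or a failure reason
        capture: Callable[[], bytes],      # screenshot taken by the harness, not the worker
        max_rounds: int = 5,
    ) -> bool:
        for _ in range(max_rounds):
            run_coder(task)                # the worker fixes and makes whatever claims it likes
            verdict = run_qa(capture())    # the judge sees only fresh evidence
            if verdict == "PASS":          # the worker's own "done" is never consulted
                return True
            task = f"{task}\nQA verdict: {verdict}. Fix and retry."
        return False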


> I consulted Claude chat, and it admitted this is a major problem with Claude these days. It suggested I ask for the coordinates of the UI controls on the screenshot, thus forcing it to look

If, 3 years into LLMs, even HNers still don't understand that the responses these models give to this kind of question are completely meaningless, the average person really doesn't stand a chance.


The whole “chat with an AI” paradigm is the culprit here. It primes people to think they are actually having a conversation with something that has a mental model.

It’s just a text generator that generates plausible text for this role play. But the chat paradigm is pretty useful in helping the human. It’s like chat is a natural I/O interface for us.


I disagree that it’s “just a text generator” but you are so right about how primed people are to think they’re talking to a person. One of my clients has gone all-in on openclaw: my god, the misunderstanding is profound. When I pointed out a particularly serious risk he’d opened up, he said, “it won’t do that, because I programmed it not to”. No, you tried to persuade it not to with a single instruction buried in a swamp of markdown files that the agent is itself changing!

I insist on the text generator nature of the thing. It’s just that we built harnesses to activate on certain sequences of text.

Think of it as three people in a room. One (the director) says: you, with the red shirt, you are now a plane’s copilot. You, with the blue shirt, you are now the captain. You are about to take off from New York to Honolulu. Action.

Red: Fuel checked, captain. Want me to start the engines?

Blue: yes please, let’s follow the procedure. Engines at 80%.

Red: I’m executing: raise the levers to 80%

Director: levers raised.

Red: I’m executing: read engine stats meters.

Director: Stats read: engines ok, thrust ok, accelerating to V0.

Now pretend that the director, when she hears “I’m executing: raise the levers to 80%”, instead of role-playing, actually issues a command to raise the engine levers of a plane to 80%. When she hears “I’m executing: read engine stats”, she actually gets data from the plane and provides it to the actor.

See how text generation for a role play can actually be used to act on the world?

In this thought experiment, the human is the blue shirt, Opus 4-6 is the red, and Claude Code is the director.
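
To make the director’s side concrete, here is a toy Python sketch; the trigger phrase and action names are made up for illustration, not any real agent API:

    import re

    # Toy "director": scan the actor's generated line for an action request,
    # perform the real side effect, and hand the observation back as text.
    ACTIONS = {
        "raise the levers": lambda arg: f"levers raised {arg}",
        "read engine stats": lambda arg: "engines ok, thrust ok",
    }

    def direct(generated_line: str) -> str:
        match = re.search(r"I'm executing: (.+)", generated_line)
        if not match:
            return generated_line              # plain dialogue: keep role-playing
        command = match.group(1)
        for name, act in ACTIONS.items():
            if command.startswith(name):
                return act(command[len(name):].strip())
        return f"unknown action: {command}"

The model only ever emits text; the harness decides which pieces of text get real side effects.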


For context, I've been an AI skeptic and am trying as hard as I can to continue to be one.

I honestly think we've moved the goalposts. I'm saying this because, for the longest time, I thought that the chasm that AI couldn't cross was generality. By which I mean that you'd train a system, and it would work in that specific setting, and then you'd tweak just about anything at all, and it would fall over. Basically no AI technique truly generalized for the longest time. The new LLM techniques fall over in their own particular ways too, but it's increasingly difficult for even skeptics like me to deny that they provide meaningful value at least some of the time. And largely that's because they generalize so much better than previous systems (though not perfectly).

I've been playing with various models, as well as watching other team members do so. And I've seen Claude identify data races that have sat in our code base for nearly a decade, given a combination of a stack trace, access to the code, and a handful of human-written paragraphs about what the code is doing overall.

This isn't just a matter of adding harnesses. The fields of program analysis and program synthesis are old as dirt, and probably thousands of CS PhDs have cut their teeth trying to solve them. All of those systems had harnesses, but they weren't nearly as effective, as general, or as broad as what current frontier LLMs can do. And on top of it all, we're driving LLMs with inherently fuzzy natural language, which by definition requires high generality to avoid falling over simply due to the stochastic nature of how humans write prompts.

Now, I agree vehemently with the superficial point that LLMs are "just" text generators. But I think it's also increasingly missing the point given the empirical capabilities that the models clearly have. The real lesson of LLMs is not that they're somehow not text generators, it's that we as a species have somehow encoded intelligence into human language. And along with the new training regimes we've only just discovered how to unlock that.


> I thought that the chasm that AI couldn't cross was generality. By which I mean that you'd train a system, and it would work in that specific setting, and then you'd tweak just about anything at all, and it would fall over. Basically no AI technique truly generalized for the longest time.

That is still true, though. Transformers didn't cross into generality; instead, they let the problem you can train the AI on be bigger.

So, instead of making a general AI, you make an AI that has trained on basically everything. If you move far enough away from anything that is on the internet, or get close enough to something it's overtrained on, like memes, it fails spectacularly. But of course most things exist in some form on the internet, so it can do quite a lot.

The difference between this and a general intelligence like humans is that humans were trained primarily in jungles and woodlands thousands of years ago, yet we can still navigate modern society with those genes, using our general ability to adapt to and understand new systems. An AI trained on jungle and woodland survival wouldn't generalize to modern society the way the human model does.

And this still makes LLMs fundamentally different from how human intelligence works.


> And I've seen Claude identify data races that have sat in our code base for nearly a decade

How do you know that Claude isn't just a very fast monkey with a very fast typewriter that throws things at you until one of them is true?


Iteration is inherent to how computers work. There's nothing new or interesting about this.

The question is who prunes the space of possible answers. If the LLM spews things at you until it gets one right, then sure, you're in the scenario you outlined (and much less interesting). If it ultimately presents one option to the human, and that option is correct, then that's much more interesting. Even if the process is "monkeys on keyboards", does it matter?

There are plenty of optimization and verification algorithms that rely on "try things at random until you find one that works", but before modern LLMs no one accused these things of being monkeys on keyboards, despite it being literally what these things are.
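
To be concrete, this is the generic shape of generate-and-test, with a made-up toy verification problem:

    import random

    def generate_and_test(propose, verify, budget=100_000):
        """Propose candidates at random; return the first one the verifier
        accepts. The caller never sees the failed attempts."""
        for _ in range(budget):
            candidate = propose()
            if verify(candidate):
                return candidate
        return None

    # Toy use: find some multiple of 12345 below a million by pure guessing.
    answer = generate_and_test(lambda: random.randrange(1, 1_000_000),
                               lambda x: x % 12345 == 0)

Whether propose() is a random number generator or an LLM, the caller only ever sees the one answer that survived verification.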


For someone claiming to be an AI skeptic, your post here, and posts in your profile certainly seem to be at least partially AI written.

For someone claiming to be an AI skeptic, you certainly seem to post a lot of pro-AI comments.

Makes me wonder if this is an AI agent prompted to claim to be against AIs but then push AI agenda, much like the fake "walk away" movement.


I have an old account, you can read my history of comments and see if my style has changed. No need to take my word for it.

Tangentially off-topic, but it reminds me of seeing so many defenses of Brexit that started with “I voted Remain but…”

Nowadays, when I read “I am an AI skeptic but”, I already know the comment is coming from someone who has just downed the Kool-Aid.


> No, you tried to persuade it not to with a single instruction

Even “persuade” is too strong a word. These things don't have the motivation needed for persuasion to be a thing. What your client did was put one data point into the context that it will use to generate the next tokens. If that one data point doesn't shift the context enough to make it produce an output that corresponds to that data point, then it won't. That's it; no sentience involved.


> It’s just a text generator that generates plausible text for this role play.

Often enough, that text is extremely plausible.


I pin just as much responsibility on people not taking the time to understand these tools before using them. RTFM basically.

I think the mindset you have to have is "it understands words, but has no concept of physics".

It doesn’t help that a frequent recommendation on HN whenever someone complains about Claude not following a prompt correctly is to “ask Claude itself how to rewrite a prompt to get the result you want”.

Which, sure, can be helpful, but it’s kinda just a coincidence (plus some RLHF, probably) that that question happens to generate output text that can be used as a better prompt. There’s no actual introspection or awareness of its internal state or architecture beyond whatever high-level summary Anthropic gives it in its “soul” document et al.

But given how often I’ve read that advice on here and Reddit, it’s not hard to imagine how someone could form an impression that Claude has some kind of visibility into its own thinking or precise engineering. Instead of just being as much of a black box to itself as it is to us.


It’s not meaningless. It’s a signal that the agent has run out of context to work on the problem, which is not something it can resolve on its own. Decomposing problems and managing cognitive (or quasi-cognitive, in this case) burden is a programmer’s job, regardless of the particular tools.

I think you are saying what I was about to suggest:

For this single problem, open a new Claude session with this particular issue, refine until it's fixed, then incorporate it into the larger project.

I think the QA agent might have been the same step here, but it depends on how that QA agent was set up.


> completely meaningless

This is way too strong isn't it? If the user naively assumes Claude is introspecting and will surely be right, then yeah, they're making a mistake. But Claude could get this right, for the same reasons it gets lots of (non-introspective) things right.


It's not too strong. If it answered from its weights, it's pretty meaningless. If it did a web search and found reports of other people saying this, you'd want to know that this is how it answered - and then you'd probably just say that here on HN rather than appealing to claude as an authority on claude.

They also said it "admitted" this as a major problem, as if it has been compelled to tell an uncomfortable truth.


Maybe I'm just being too literal, but I don't know if you're really disagreeing with me. I was disputing "the response they give to this kind of question is completely meaningless". An answer from its weights is out of date, but only completely meaningless if this is a completely new issue with nothing relevant in the training data. And, as you say, the answer could be search-based and up to date.

GP here. This is indeed exactly what I was getting at; thanks for wording it for me. You put it better than I would've.

In this specific case I'd go one step further and say that even if it did a web search, it's still almost certainly useless because of the low quality of the results and their outdatedness, two things LLMs are bad at discerning. From weights it doesn't know how quickly this kind of thing becomes outdated, and out of the box it doesn't know how to account for reliability.


> And on top of that, if you develop for native macOS, there’s no official tooling for visual verification. It’s like 95% of development is web, and LLM providers care only about that.

Thinking out loud here, but you could make an application that's always running, always has screen sharing permissions, then exposes a lightweight HTTP endpoint on 127.0.0.1 that when read from, gives the latest frame to your agent as a PNG file.

Edit: Hmm, not sure that'd be sufficient, since you'd want to click-around as well.

Maybe a full-on macOS accessibility MCP server? Somebody should build that!
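
For the read-only half, a rough sketch, assuming macOS's built-in screencapture CLI; the port is an arbitrary choice, and the process still needs Screen Recording permission in System Settings:

    import pathlib
    import subprocess
    import tempfile
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class FrameHandler(BaseHTTPRequestHandler):
        """Serve a fresh full-screen frame as a PNG on every GET."""
        def do_GET(self):
            with tempfile.TemporaryDirectory() as tmp:
                path = pathlib.Path(tmp) / "frame.png"
                # -x suppresses the shutter sound
                subprocess.run(["screencapture", "-x", str(path)], check=True)
                png = path.read_bytes()
            self.send_response(200)
            self.send_header("Content-Type", "image/png")
            self.send_header("Content-Length", str(len(png)))
            self.end_headers()
            self.wfile.write(png)

    HTTPServer(("127.0.0.1", 8787), FrameHandler).serve_forever()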


Yeah, this is pretty much how Tidewave works, but passes the HTML/JavaScript reference instead of a picture: https://tidewave.ai/

Is this the same one I vaguely recall being implemented/launched by the Phoenix/Elixir team?


I didn't realize how prolific the OpenClaw author was. Thanks for sharing!

There is a tool called Tidewave that allows you to point and click at an issue and it will pass the DIV or ID or something to the LLM so it knows exactly what you are talking about. Works pretty well.

https://tidewave.ai/


> And on top of that, if you develop for native macOS, there’s no official tooling for visual verification. It’s like 95% of development is web, and LLM providers care only about that.

I think this is built into the latest Xcode, IIRC.


Oh no, I had grand plans to avoid this issue. I had been running into it with various low-effort lifts, but now I'm worried that it will remain a problem.

You can provide the screencapture cli as a tool to Claude and it will take screenshots (of specific windows) to verify things visually.
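
For example, a thin wrapper you could register as a tool. The -x and -l flags are real screencapture options; how you look up the window ID and wire this into your harness is left open:

    import subprocess

    def take_screenshot(out_path: str, window_id: int | None = None) -> str:
        """Capture the screen (or one window, by ID) to a PNG and return the path."""
        cmd = ["screencapture", "-x"]          # -x: no shutter sound
        if window_id is not None:
            cmd += ["-l", str(window_id)]      # -l: capture a single window by ID
        cmd.append(out_path)
        subprocess.run(cmd, check=True)
        return out_path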

> It’s like 95% of development is web, and LLM providers care only about that.

I've been trying to use it for C++ development and it's maybe not completely useless, but it's like a junior who very confidently spouts C++ keywords in every conversation without knowing what they actually mean. I see that people build their entire companies around it, and it must be just web stuff, right? Claude just doesn't work for C++ development outside of most trivial stuff in my experience.


Models are also quite good at Go, Rust, and Python in my experience — also a lot of companies are using TypeScript for many non web related things now. Apparently they're also really good at C, according to the guy who wrote Redis anyway.

It's working reasonably well for me. But this is inside a well-established codebase with lots of tests and examples of how we structure code. I also haven't used it much for building brand new features yet, but for making changes to existing areas.

GPT models are generally much better at C++, although they sometimes tend to produce correct but overengineered code, and the operator has to keep an eye on that.

This is why you need a red-green-refactor TDD skill

I mean, I don't use CC itself, just Claude through the Copilot IDE plugin, for 'reasons'...

But at least there it's more honest than GPT. Although, at work especially, it loves to decide not to use the built-in tools and instead YOLO on the terminal, without realizing it's in PowerShell, not a true *nix terminal. And when it gets that right, there's a 50/50 shot it can actually read the output (i.e. it spirals, repeatedly trying to run and read the output).

I have had some success with prompting along the lines of 'document unfinished items in the plan' at least...


Codex via codex-cli used to be pretty bad about knowing whether it was in PowerShell. I think they might have changed the system prompt or something, because now it’s usually generating PowerShell on the first attempt.

Sometimes it tries to use shell stuff (especially for redirection), but that’s way less common rn.


Are you sure you're talking about Claude? Because it sounds like you're describing how a lot of people function. They can't seem to follow instructions either.

I guess that's what we get for trying to get LLMs to behave human-like.


What if, stay with me here, AI is actually a communist plot to ensorcell corporations into believing they are accelerating value creation when really they are wasting billions more in unproductive chatting which will finally destroy the billionaire capital elite class and bring about the long-awaited workers’ paradise—delivered not by revolution in the streets, but by millions of chats asking an LLM to “implement it.” Wake up sheeple!

So many replies about doing the religious-group thing, and OP saying “a great psychiatrist put me on a bunch of antidepressants”; obviously this is U.S.-centric, and optimistic in that way. I don’t see anyone not from the U.S. replying, and maybe it’s irrelevant, but think about it: maybe this is the end, and you are just counting the days until the end; nothing is coming, and you will not meet anybody.

There’s Anytype and a few others. The problem is not replacement, it’s adoption.

As if they are not allowed to make mistakes. If you watch more recent interviews, they understand that it was not cool on their part. Musically, once per album, they still have some interesting and somewhat innovative tracks.

It seems like hallucinations and lying have increased over time; it’s very different now from what it was two years ago. Is this because of training bias? Is there any research data on the dynamics over the past years?

And how is it going with women?

In this case it should be 62% lower in Australia in general, because water is taken from deep sources for the entire country.

82%[0] of Australian utilities' water comes from surface sources, so you'd need to look at only the section of the population who drink exclusively groundwater.

[0] https://www.bom.gov.au/water/npr/docs/2022-23/Urban_National...


Apply on their website; they’ve been looking, and I got an interview just being an iOS/macOS developer, with no tools-development experience.

https://jobs.apple.com/en-us/details/200586465-0836/xcode-in...

I was mostly joking. I am not from the US, and not skilled enough to be worth the bother of creating a visa for when there are thousands of developers in the USA much more fit for this. But it is neat to see that the requirements are not as intense as I would've expected.


Why couldn’t they name it `agent-bash` then? What’s with all the “just-this”, “super-that” naming? It’s like the developer lost their last remaining brain cells developing it, and when it came time to name it, used the first meaningless word that came up. After all, you’re limiting discovery with a name like that.

