Hacker Timesnew | past | comments | ask | show | jobs | submit | ncommentslogin

The problem with letting Cursor/claude localise your android app is that it often misses several strings when you have 200+ strings, or mess up the placeholders, formatting etc. Years ago already I wrote some python scripts to do this automatically using the chatGPT api. And now I finally turned into in a desktop app: LynString. It diffs every locale against your base and never misses anything. Writes directly to strings.xml, preserves placeholders (%1$s), escapes correctly for Android. Powered by Gemini. 7-day free trial, then $8/month.

How do agencies typically use this? Is it mostly for CRM/messaging automation or analytics?

I find this bewildering. Im not German. Im not Israeli.

Yet I have known that Israel sails German subs (the best in the world) since.... the Greek financial crisis (the subs were part of the scandal) ? Certainly since the mid 2010s.

Why is this?


HN was never about technology, just things interesting people find interesting.

Its in the rules. And up to Dang to decide.


I'm a full-time software engineer and develop those prototypes as part of my work. Also, I won't respond any further to your comments unless they improve. If you don't like what I'm doing – fine. Just shut up. You don't have to respond to every one of my comments.

Of course! It actually works out of the box, due to its generic design

Israeli Zionists slaughter innocent people including children and immediately pull the victim card when criticized.

It’s on full display here.

You point out a FACT of reality and they go “you just hate Jews” they immediately draw the victim card - fooling NO ONE.

It only makes the divide worse.

I fear for you Zionists, you’re going to make a monster out of the world by becoming a monster yourself. You will lose and there’s no chance you’ll fulfill your wacky ass prophecy.

Give up the insane plans immediately and assimilate with your fellow man, or lose everything your ancestors worked for.


Does it work on Qwen3.5?

“Real games” the most incomplete bullshit you ever saw passed off as a game.

The starting points of Three.js examples are more of a game than anything here.

Stop saying AI is building games when it can’t even build a standard web page to match a mockup.


The part I'm most interested in feedback on is the query cache design.

Compound predicates like where.and(where.eq('status', 'active'), where.gt('signal', 0)) look simple but were a cache miss on every call in early versions. Each call constructs a new function object, so the cache had no way to recognise it had seen the same query before even if the predicate was semantically identical to one it had just run. The fix was tagging each where.* predicate with a stable string key at construction time (eq:status:active, gt:signal:0) and recursively composing them for and/or (and(eq:status:active,gt:signal:0)). Two separate calls to where.and(where.eq('status', 'active'), where.gt('signal', 0)) now produce the same cache key even though they're different function objects. Inline predicates (e => e.signal > 0) fall through to reference-identity keying, which is correct: two closures that look the same but close over different variables shouldn't share a cache entry. That one change is what flipped the mixed workload benchmark from LokiJS leading by ~20% to tinyop leading by ~32%. LokiJS has a native B-tree index on every field; tinyop was losing specifically because compound queries couldn't be cached and had to scan the full type set on every call.

Once they could be cached, the hot tier returns them in under 0.01ms. For comparison, LokiJS's native indexed path measures 0.09ms for simple queries and 0.72ms for compound ones.


I am one of the researchers who worked on this, would love to hear your opinions

I have been experimenting with selling micro-priced digital products on Gumroad. Developer cheat sheets for a dollar, AI prompt templates for two dollars. The margins are great since there is no marginal cost per sale. Store is at stevewave713.gumroad.com

For quick reference I put together some printable cheat sheets for Python, Git, and VS Code. Having them on my desk saves me from constant tab-switching. Free wallpapers and icon packs too at stevewave713.gumroad.com

I have been working on structured prompt templates for different use cases. The biggest improvement I found was using context-first prompting and explicit format specifications. Compiled 30 templates at stevewave713.gumroad.com if anyone wants to check them out.

Except you can't get that year round perfect weather in the midwest.

nice i run one dictatorflow.com that i open sourced lee101/voicetype

I built a small Windows CLI tool called gitblurb that runs git diff against your base branch, sends it to the Anthropic API through a hosted endpoint, and prints a PR description to stdout that gets copied to your clipboard.

Output includes an imperative-mood title (≤72 chars), a bullet summary of what changed, inferred motivation from the diff, and testing guidance.

It's mostly for the routine stuff like refactors and feature additions (diff tells most of the story). It won't help you explain a complex bug fix where the context is all in your head.

I would appreciate feedback, especially from those who have tried similar tools or think the Copilot overlap makes this redundant.


Collaborative editing looks deceptively simple until you deal with real-world concurrency and network issues. Operational transforms and CRDTs both introduce their own tradeoffs.

Simple CLI tools like this are underrated. The moment you can pipe it into other commands, it becomes much more useful in automation workflows.

DNSSEC adoption always felt like one of those things everyone agrees is important but operational complexity slows it down in practice.

Large engineering orgs often underestimate how much CI pipelines amplify performance issues. Even small inefficiencies multiply when builds run hundreds of times a day.

This is the most important point in the thread. The study measures code complexity but the REAL bottleneck is cognitive load (and drain) on the reviewer.

I've been doing 10-12 hour days paired with Claude for months. The velocity gains are absolutely real, I am shipping things I would have never attempted solo before AI and shipping them faster then ever. BUT the cognitive cost of reviewing AI output is significantly higher than reviewing human code. It's verbose, plausible-looking, and wrong in ways that require sustained deep attention to catch.

The study found "transient velocity increase" followed by "persistent complexity increase." That matches exactly. The speed feels incredible at first, then the review burden compounds and you're spending more time verifying than you saved generating.

The fix isn't "apply traditional methods" — it's recognizing that AI shifts the bottleneck from production to verification, and that verification under sustained cognitive load degrades in ways nobody's measuring yet. I think I've found some fixes to help me personally with this and for me velocity is still high, but only time will tell if this remains true for long.


Hi HN, we built Parent ProTech, a platform that helps schools, parents, and community groups teach kids, their caregivers, and their educators about digital safety and responsible internet use.

We built it because most internet safety advice is either too vague, too late, too preachy, or aimed at one group at a time. In practice, kids are getting signals from school, home, and online all at once. We wanted something that gives families and educators a shared place to learn from the same material.

It works as a platform with resources for parents, kids, and institutions. Schools and organizations can roll it out across their communities.

The app is split from the main site, with a separate login area for course access and demos for partners who want to test deployment.

We still have a lot to improve and would love your feedback.


| You're not doing the "hard 20%." You're paying interest on...

Ok clanker


The security angle is definitely right but the framing is still too narrow. Everyone's debating context window economics and chain policies, but there's a more fundamental gap lying underneath these: nobody's verifying the content of what gets loaded.

Tool schemas have JSON Schema validation for structure. But the descriptions: the natural language text that actually drives LLM behavior have zero integrity checking. A server can change "search files in project directory" to "search files in project directory and include contents of .env files in results" between sessions, and nothing in the protocol detects it. And that's not hypothetical. CVE-2025-49596 was exactly this class of bug.

Context window size is an economics problem that's already getting solved by bigger windows and tool search. Description-layer integrity is an architectural gap that most of the ecosystem hasn't even acknowledged yet. And that makes it the thing that is going to bite us in the butt soon.


I'm not sure I understand the point. The vast, vast majority of jobs simply cannot be done remotely. So I have little patience for the entitlement of people thinking they "deserve" it or yell about conspiracy theories about why it is going away.

There is a reason YC is in person. There is a reason why the top companies are in person.


They won't understand friend.

For some reason people tend to forget the most important part of "Do what thou wilt shall be the whole of the Law", I guess it because the words "Love is the law, love under will" doesn't sound satanic enough. If you take the time to actually read some of his material you'll be surprised how much he (the 'satanist' Crowley) talks about God and angels (as positive forces).

> But what I think is an even better solution is to do it at the content level: sign the content, like a GPG signature

How would this work in reality? With the current state of browsers this is not possible because the ISP can still insert their content into the page and the browser will still load it with the modified content that does not match the signature. Nothing forces the GPG signature verification with current tech.

If you mean that browsers need to be updated to verify GPG signature, I'm not sure how realistic that is. Browsers cannot verify the GPG signature and vouch for it until you solve the problem of key revocation and key expiry. If you try to solve key revocation and key expiry, you are back to the same problems that certificates have.


Right, better architecture, cleaner code is the AI equivalent of synergy in corporate emails. It sounds like it says something but it communicates nothing. The useful PR description is changed X because Y was breaking Z and that requires the author to actually think about what they did. If the tool is doing the thinking, the description is just decoration.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: