Hah, I did the same exact thing and came here to say that :) I was looking at wiring diagrams and telling myself I could wire up some arduino circuits for it, but gave up when I realized I could just press the button!
edit: although mine was an ancient system from the early 90s. It was just replaced with a modern system a couple of months ago. At my previous apartment I had wanted to set up a system that would allow either my then-partner or me to activate the callbox, and have it set to a VOIP number since we could only put one number on the box.
Employers and landlords do that sort of thing all the time. Rent goes up, job descriptions change, return to office is suddenly required. And yeah, you can get a different job or a different home if you don't like it.
> if you exclude the enslaved, the south had a higher GDP per capita than the north.
In other words, if you remove the people who earned the least (close to nothing), the overall income per capita goes up? If you exclude the non-nobles, I am sure the Middle Ages had a very high GDP per capita too.
Ah, but you can always just ask the LLM questions about how it works. It's much easier to understand complex code these days than before. And also much easier to not take the time to do it and just race to the next feature.
Indeed. But Jules is not really questions-based (it likes to achieve stuff!) and the free version of Codeium is terrible and does not understand a thing. I think I'll have to get into agentic coding, but I've been avoiding it for the time being (I rather like my computer and don't want it to execute completely random things).
Plus, I like the model of Jules running in a completely isolated way: I don't have to care about it messing up my computer, and I can spin up as many simultaneous Juleses as I like without a fear of interference.
I have looked but I can't find out if it actually means something. Does 89 seconds before midnight mean we have a 50% chance to survive the next N years somehow?
This is cool. Not sure it is the first Claude Code-style coding agent that runs against Ollama models though. Goose, opencode and others have been able to do that for a while now, no?
One thing that seems missing from a lot of these comparisons is the base rate of success for dieting itself.
Most people who “start a diet” never meaningfully lose weight in the first place, or lose a small amount and plateau quickly. The cohort of “dieters who regain weight” is already heavily filtered toward the minority who were unusually successful at dieting to begin with. That selection bias matters a lot when you then compare regain rates.
GLP-1s change that denominator. A much larger fraction of people who start the intervention actually lose substantial weight. So even if regain after stopping is faster conditional on having lost weight, the overall success rate (people who lose and keep off a clinically meaningful amount) may still be higher than dieting alone.
In other words: “people who regain weight after stopping GLP-1s” vs “people who regain weight after dieting” ignores the much larger group of dieters who never lost anything to regain. From a population perspective, that’s a pretty important omission.
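The denominator point above can be made concrete with a toy calculation. All the probabilities below are made up purely for illustration; the point is only the structure of the arithmetic: population-level success is P(lose substantial weight) × P(keep it off | lost it), so a much larger first factor can outweigh a smaller second one.

```python
def population_success(p_lose: float, p_keep: float) -> float:
    """P(lose substantial weight) * P(keep it off | having lost it)."""
    return p_lose * p_keep

# Dieting: few who start lose substantial weight, but that selected
# minority keeps it off at a higher conditional rate. (Assumed numbers.)
diet = population_success(p_lose=0.10, p_keep=0.40)

# GLP-1s: a much larger fraction loses substantial weight, even if
# regain after stopping is faster conditional on having lost it.
glp1 = population_success(p_lose=0.60, p_keep=0.25)

print(f"diet: {diet:.2f}, GLP-1: {glp1:.2f}")  # diet: 0.04, GLP-1: 0.15
```

With these toy numbers the GLP-1 arm "wins" at the population level despite the worse conditional keep-off rate, which is exactly the selection-bias trap in comparing regain rates alone.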
You are the third person to mention that the cohort is "dieters who regain weight".
Reading the article and its referenced study, I thought the cohort was "all who were included in the non-placebo group of the RCT" and that the average was taken over all such subjects.
I've tried, and can't find any evidence to the contrary. Am I wrong and missing some key claim in the study? I would appreciate it if you could support your claim.
> Weight regain data are expressed as weight change from baseline (pre-intervention) or difference in weight change from baseline between intervention and control for randomised controlled trials. When analysing and presenting data from all studies, we used weight change from single arm trials, observational studies, and the intervention groups from randomised controlled trials. When analysing data from randomised controlled trials only, we calculated the difference in weight change between the intervention and control groups at the end of the intervention and at each available time point after the end of the intervention. When studies had multiple intervention arms, we treated each arm as a separate arm and divided the number in the comparator by the number of intervention arms to avoid duplicative counting. [19]
> The results at year eight are heartening. Eight years later and 50.3 percent of the intensive lifestyle intervention group and 35.7 percent of the usual care group were maintaining losses of ≥5 percent, while 26.9 percent of the intensive group and 17.2 percent of the usual care group were maintaining losses of ≥10 percent.
What the obesity doctor found "heartening" was that half of the participants maintained a largely imperceptible amount of weight loss.
This was considered success at the time.
For comparison, to be on the edge of normal weight from the edge of obese is a 16% reduction.
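That 16% figure follows from the standard BMI cutoffs (assuming the usual WHO thresholds: obesity starts at BMI 30, normal weight tops out just under BMI 25). Since BMI is weight divided by height squared, for a fixed height the required weight reduction equals the relative BMI reduction:

```python
# Going from the edge of obese (BMI 30) to the edge of normal (BMI 25).
# Height is fixed, so the fractional weight change equals the
# fractional BMI change.
reduction = (30 - 25) / 30
print(f"{reduction:.1%}")  # 16.7%
```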
This is not true. You have to procure it and take it consistently over a long period of time, there are side effects, and some people really dislike needles.
I was concerned about this too. Gemini informed me that the researchers "found that even when comparing people who had lost the same amount of weight, the rate of regain was significantly faster in the drug group (GLP-1s) than in the diet group (approximately 0.3 kg/month faster)."
Also, both groups contained those who didn't lose weight. They did not omit dieters who failed to lose weight or those who weren't "super responders."
Contrast this with taking the headline as fact without further scrutinizing it, which happens often. Or, look at the other posts here that are assuming that the cohort was restricted to only those who lost weight.
In an informal conversational context such as a forum, we don't expect every commentator to spend 20 minutes reading through the research. Yet we now have tools that allow us to do just that in less than a minute. It was not long ago that we'd be justified in feeling skeptical of these tools, but they've gotten to the point where we'd be justified in believing them in many contexts. I believed it in this case, and this was the right time-spent/scrutiny tradeoff for me. You're free to prove the claim wrong. If it is wrong, then I'd agree that it would be good to see where.
Probably many people are using the tools and then "covering" before posting. That would be posting it as "fact". That's not what I did, as I made the reader aware of the source of the information and allowed them to judge it for what it was worth. I would argue that it's actually more transparent and authentic to admit exactly where you're getting the information. It's not like the stakes are that high: the information is public, and anyone can check it. Hacker News might understandably be comparatively slow to adopt this norm, as its users have a better understanding of the tech and of things like how often these tools hallucinate. But I believe this is the way the wind is blowing.
I'm not sure exactly what you're asking. What I meant was that, for example, before you might've needed to track down where to find the underlying research paper, then read through the paper to find the relevant section. That might've taken 20 minutes for a task like this one. Now you can set an LLM on it, and get a concise answer in less than a minute.
Google-as-the-new-Microsoft feels about right. Windows 1 was a curiosity, 2 was “ok”, and 3.x is where it started to really win. Same story with IE: early versions were a joke, then it became “good enough” + distribution did the rest.
Gemini 3 feels like Google’s “Windows 3 / IE4 moment”: not necessarily everyone’s favorite yet, but finally solid enough that the default placement starts to matter.
If you are the incumbent you don't need to be all that much better. Just good enough and you win by default. We'll all end up with Gemini 6 (IE 6, Windows XP) and then we'll have something to complain about.