What signs? This article is absolute slop (not AI, but slop).
I do not have any reason to doubt that people are doing insider trading. The US admin is obviously corrupt and the Iran attacks are the most abject symptom of its corruption so far.
But you can't just put out a bunch of completely isolated observations with zero analysis and say "that looks like insider trading". There is nothing at all in here that presents an argument for that claim.
I am a daily Guardian reader but I stopped paying for it coz there are so many articles like this that are just complete fucking trash. Because I am the target audience (leftist Euro who can easily get riled up by topics like this) it pisses me off when I feel I'm being manipulated.
I appreciate your perspective on this. We're living in an era of sensationalism and noise. It won't get better until a lot of people from every political disposition become tired of hollow words designed only to create BIG FEELINGS.
Imagine an era where the majority leans in on a balance of compassion for self and others.
> Eight accounts, all newly created around 21 March, bet a total of nearly $70,000 (£52,000) on there being a ceasefire.
Is that anomalous? Are these numbers large in context?
> They stand to make nearly $820,000 if such a deal is reached before 31 March.
Yes that is indeed how prediction markets work for unlikely events?
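To spell out the arithmetic: a binary market sells YES shares that pay $1 each if the event happens, so a cheap price means a big multiple. A sketch (the ~$0.085 implied price is my back-of-envelope number from the article's $70k stake / ~$820k payout figures, not something the article states):

```python
def prediction_payout(stake, yes_price):
    """Gross payout of a binary prediction-market bet: `stake` dollars buy
    stake / yes_price shares, each paying $1 if the event resolves YES."""
    return stake / yes_price
```

`prediction_payout(70_000, 0.085)` comes out around $823,000, i.e. the headline number is just what a low-probability market pays by construction, not evidence of anything.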
> An account that made the same bet was created shortly before the US struck Iran on 28 February. It also placed a winning bet on those strikes, which raised similar questions around insider trading, and so far has bet on nothing else.
Is that anomalous? If I was betting 5 figure sums I would also stick to my areas of expertise. That doesn't mean I'm an insider.
> The new accounts all appear to have been created late last week, around the time when the US president, Donald Trump, appeared to first double down on war with Iran, then suggest in an after-markets Truth Social post that he was considering “winding down” military operations.
So what??? Is anything about that anomalous? What is that supposed to tell us about the accounts?
> The wallets “definitely [look like] someone with some degree of inside info”, said Ben Yorke, formerly a researcher with CoinTelegraph, now building an AI trading platform called Starchild.
"Some random fucking guy said this thing", OK?
> But online crypto watchers and experts suggested that the bets bore the signs of insider trading – both because they bought their positions at market price,
What the fuck does that even mean?
> and because some of the accounts looked like they could belong to a single investor attempting to conceal their identity by splitting their bet between multiple wallets.
This is just repeating a former claim that was not backed up with any rationale. And note the very next sentence provides an alternative motive for traders to split wallets, aside from insider trading.
> “Typically, when you see wallet-splitting and deliberate attempts to obfuscate identity, it’s one of two scenarios: either a very large investor trying to shield their position from market impact, or insider trading,” said Yorke.
But we haven't been presented with any evidence that we're seeing either of those things?? And also I can't help repeating, why the fuck are we supposed to listen to this guy's opinion?
> Polymarket’s own rating of the probability of a ceasefire before 31 March increased significantly in the past few days, from 6% on 21 March to 24% by Monday. More than $21m is currently being wagered on this outcome.
Again, this is just describing the normal and intended mechanics of the market. It's not anomalous and it's not evidence of wrongdoing.
Also it makes the $70k figure from the beginning look pretty small.
Publications must love that they can now pump out an article every time someone creates a new account to place a large wager on a prediction market.
Having been a semi-pro sports bettor for a short stint as it went through legalization in the U.S., I’ve personally had tens of thousands wagered on sports teams I’ve never heard of before. Over the course of thousands of bets it becomes statistically inevitable that you have wagers placed right before major news (both for and against you).
It’s even entirely possible this individual has some or all of their position hedged on another platform to effectively capture a tiny arbitrage in the market.
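The hedge is simple to sketch: if YES is priced at, say, $0.06 on one venue and NO at $0.93 on another (hypothetical prices), buying both locks in $1 of payout for $0.99. A minimal sketch, ignoring fees and slippage:

```python
def arb_profit(yes_price, no_price, payout=1.0):
    """Riskless profit per $1 of payout when YES on one venue plus NO on
    another together cost less than the $1 they jointly guarantee
    (exactly one side pays out, whichever way the event resolves)."""
    cost = yes_price + no_price
    return payout - cost if cost < payout else 0.0
```

At those hypothetical prices that's a ~1% riskless edge, so a large "suspicious" YES position can be someone's fully hedged book, not a directional bet at all.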
There are tons of upstart market makers on these prediction markets doing hundreds of thousands in volume a week as they provide liquidity, grinding out small edges in a way that makes it impossible to know their true exposure to any one market across platforms.
It’s of course entirely possible this is an insider, but as a journalist you need something more than a large bet + good timing. Out of millions of wagers there will inevitably be plenty of random people who bet on a football game 5 minutes before the quarterback gets injured purely out of dumb luck.
Exactly. I suspect part of the issue here is that people without some exposure to this type of probabilistic thinking are SO BAD at reasoning about market actors and uncertainty.
"A ceasefire looks very unlikely, why the hell would anyone bet on that? That's very suspicious" is obviously a completely fucking idiotic statement to you and me ("why would you buy this ugly empty lot in Manhattan? There's no buildings on it!"). Maybe to a 25 year old Guardian journo with a history degree from Oxbridge it's just common sense.
That is very disappointing coz I've been wanting to try an alternative to Gemini CLI for exactly these reasons. The AI is great but the actual software is a buggy, slow, bloated blob of TypeScript (on a custom Node runtime IIUC!) that I really hate running. It takes multiple seconds to start, requires restarting to apply settings, constantly fucks up the terminal, often crashes due to JS heap overflows, doesn't respect my home dir (~/.gemini? Come on folks are we serious?), has an utterly unusable permission system, etc etc. Yet they had plenty of energy to inject silly terminal graphics and have dumb jokes and tips scroll across the screen.
Is Claude Code like this too? I wonder if Pi is any better.
A big downside would be paying actual cost price for tokens but on the other hand, I wouldn't be tied to Google's model backend which is also extremely flaky and unable to meet demand a lot of the time. If I could get real work done with open models (no idea if that's the case yet) and switch providers when a given provider falls over, that would be great.
Note as of yesterday, they retired the lite coding plan you're talking about. New buy-in is $50/month for the pro plan unless you were already on the lite plan.
Claude will also happily write a huge pile of junk into your home directory, I am sad to report. The permissions are idiotic as well, but I always use it in a container anyway. But I have not had it crash and it hasn't been slow starting for me.
One of my FAANG security projects incidentally helped with some compliance efforts (I made very sure it was incidental, constantly said things like "I am thrilled that I can help you guys achieve your goals but I wanna be clear that I don't give a shit about compliance and I won't be allowing it to influence the direction of my product" in meetings, it must have been extremely annoying to work with me).
At some point I was asked to look over the documents for the compliance definition and it was really hilarious. I had to give my engineering perspective on which aspects of the requirements we were and weren't meeting.
But they were stuff like "you must have logs". "You must authenticate users". "You must log failed authentication attempts".
Did we fulfill these requirements? It's a meaningless question. Unless you were literally running an open door telnet service or something you could interpret the questions so as to support any answer you wanted to give.
So I just had to be like "do you want me to say yes?" and they did, so I said yes. Nothing productive was ever achieved during that engagement.
Style and structure are not the goal here; the reason people are interested in it is to find bugs.
Having said that, if it can save maintainers time it could be useful. It's worth slowing contribution down if it lets maintainers get more reviews done, since the kernel is bottlenecked much more on maintainer time than on contributor energy.
My experience with using the prototype is that it very rarely comments with "opinions" it only identifies functional issues. So when you get false positives it's usually of the form "the model doesn't understand the code" or "the model doesn't understand the context" rather than "I'm getting spammed with pointless advice about C programming preferences". This may be a subsystem-specific thing, as different areas of the codebase have different prompts. (May also be that my coding style happens to align with its "preferences").
In my case it's been a strong no. Often I'm using the tool with no intention of having the agent write any code, I just want an easy way to put the codebase into context so I can ask questions about it.
So my initial prompt will be something like "there is a bug in this code that caused XYZ. I am trying to form hypotheses about the root cause. Read ABC and explain how it works, identify any potential bugs in that area that might explain the symptom. DO NOT WRITE ANY CODE. Your job is to READ CODE and FORM HYPOTHESES, your job is NOT TO FIX THE BUG."
Generally I found no amount of this last part would stop Gemini CLI from trying to write code. Presumably there is a very long system prompt saying "you are a coding agent and your job is to write code", plus a bunch of RL in the fine-tuning that causes it to attend very heavily to that system prompt. So my "do not write any code" is just a tiny drop in the ocean.
Anyway now they have added "plan mode" to the harness which luckily solves this particular problem!
To my understanding, an LLM, by design, is unable to encode negation semantics. Neither a negation "operation" nor any other "subtractive" operation is computable in LLM machinery. Thinking out loud, in your example "read code" and "form hypotheses" seem to be useful instructions for what you want, while "do not write any code" and "do not fix the bug" might actually be misleading for the model. Intuitively (in human terms) one would imagine that, when given such an "instruction", the LLM would be repelled from the latent-space region associated with "write any code" or "fix the bug". But in reality an LLM cannot be "repelled"; it is just attracted to the region associated with the full, negated "DO NOT <xxxx>". And this region probably either has a significant overlap with the former ("DO <xxx>") or even includes it wholesale. This may explain why it sometimes seems to "work" as intended, albeit accidentally. My 2c.
I see this at my $megagorp job. The top brass don't do that much written communication, but when they do they are absolutely shooting from the hip. It's not as bad as Epstein but there's a strong "I've already started reading the next email while I'm typing this one" vibe.
FWIW I don't have a problem with it at all. As the article mentioned there's an aspect of power politics (I'm important enough not to have to worry about formatting). But to me instead of <I wish elites weren't so callous with text> I feel <everyone should feel empowered to write like that> (again, maybe not quite to the level of Epstein, but e.g. capitalisation is just unimportant. Signing off emails with "best wishes" is not a good use of anyone's 500 milliseconds).
>capitalisation is just unimportant. Signing off emails with "best wishes" is not a good use of anyone's 500 milliseconds
Yet I'm on Twitter reading "Prison for attempted murderer enablers like this clown" by the world's richest man who is tweeting all day. My guess is that it has just become a way of status signalling more than anything else.
Natural languages have inherent ambiguity. That includes your grammar with capitalization, and any kind of standard English grammar, of which there are dozens.
Which person does Jack refer to? What if you have 2 friends named Jack? Does "horse" refer to a member of a class of animal or something else? Sorry but your examples are full of indecipherable nonsense. But I guess if you just pretend that everything you write is well understood then there is no problem.
Capitalization slightly narrows a search space that is already narrow; since that is its only functional use, it should only be used when appropriate. If every rule were applied at every instance your writing would both become indecipherable and you'd subtly change your intended meaning. Better to be misunderstood by some than to water down your message and add class/prestige/formality/distance, all of which are inappropriate in most writing.
I guess your teacher gave you that example, but you ABSOLUTELY FAILED to understand the meaning of their lesson.
This is perhaps the silliest possible response I could imagine to what is intended to be an amusing example, not an illustration of the more common real-world confusions.
Which are real.
> I guess your teacher gave you that example, but you ABSOLUTELY FAILED to understand the meaning of their lesson.
Wow, you sure are defensive about the notion that communications protocols are most useful when they are consistent and predictable. You may think you've nailed me as an illiterate, but I conclude that you've nailed yourself as a tilter at windmills.
Contrived examples are fun but have nothing to do with the actual reasons people demand "correct" writing. These confusions do not happen in real life.
The reason people actually care is only ever to do with in-group signalling or power politics.
You are always gonna have some downtime in a homelab setup I think. Unless you go all in with k8s I think the best you can do is "system reboots at 4AM, hopefully all the users are asleep".
(Probably a lot of the services I run don't even really support HA properly in a k8s system with replicas. E.g. taking global exclusive DB locks for the lifetime of their process)
> You are always gonna have some downtime in a homelab setup I think. Unless you go all in with k8s I think the best you can do is "system reboots at 4AM, hopefully all the users are asleep".
Huh, why? I have a homelab, I don't have any downtime except when I need to restart services after changing something, or upgrading stuff, but that happens what, once every month in total, maybe once every 6 months or so per service?
I use systemd units + NixOS for 99% of the stuff, not sure why you'd need Kubernetes at all here, only serves to complicate, not make things simple, especially in order to avoid downtime, two very orthogonal things.
> I don't have any downtime except when I need to restart services
So... you have downtime then.
(Also, you should be rebooting regularly to get kernel security fixes).
> not sure why you'd need Kubernetes at all here
To get HA, which is what we are talking about.
> only serves to complicate
Yes, high-availability systems are complex. This is why I am saying it's not really feasible for a homelabber; unless you're a k8s enthusiast, I think the right approach is to tolerate downtime.
I run my stuff in a local k8s cluster and you are correct, most stuff runs as replica 1. DBs actually don't, because CNPG and the mariadb operator make HA setups very easy.
That being said, the downtime is still lower than on a traditional server.
I don't think RPi is the gold standard nor is Chinese production that strongly correlated with poor SW support?
Raspberry Pi usually requires customisation from the distro. This is mitigated by the fact that many distros have done that customisation but the platform itself is not well-designed for SW support.
Meanwhile many Allwinner and Rockchip platforms have great mainline support. Qualcomm is apparently moving in the right direction, but historically there have been lots of Qualcomm SBCs where the software support is just a BSP tarball on a fixed Linux kernel.
So yeah I do agree with your conclusion but it's not as simple as "RPi has the best software support and don't buy Chinese". You have to look into it on a case by case basis.
If your benchmarks are fast enough to run in pre-commit you might not need a time series analysis. Maybe you can just run an intensive A/B test between HEAD and HEAD^.
You can't just set a threshold coz your environment will drift, but if you figure out the number of iterations needed to achieve statistical significance for the magnitude of changes you're trying to catch, then you might be able to just run a before/after and do a bootstrap [0] comparison to evaluate the probability of a change.
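A minimal sketch of that bootstrap comparison (a hypothetical helper, not any particular tool's API): resample the HEAD^ and HEAD timing samples with replacement many times and count how often the HEAD mean comes out worse.

```python
import random

def bootstrap_prob_slower(before, after, iters=10_000, seed=0):
    """Bootstrap estimate of P(mean of `after` > mean of `before`):
    resample both timing samples with replacement and count how often
    the resampled HEAD mean exceeds the resampled HEAD^ mean."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(iters):
        b = [rng.choice(before) for _ in before]
        a = [rng.choice(after) for _ in after]
        if sum(a) / len(a) > sum(b) / len(b):
            hits += 1
    return hits / iters
```

If this comes out near 1.0 (or 0.0) the change is probably real; near 0.5 you're looking at noise. How many benchmark iterations you need depends on how noisy your environment is and how small a regression you care about.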
If you've had the problem it solves you don't really need an explanation beyond "Change Detection for Continuous Performance Engineering" I think.
Basically, if I'm reading it correctly, the problem is that you want to automate detection of performance regressions. You can't afford to do continuous A/B tests. So instead you run your benchmarks continuously at HEAD, producing a time series of scores.
This does the statistical analysis to identify if your scores are degrading. When they degrade, it gives you a statistical analysis of the location and magnitude of the change (so something like "mean score dropped by 5% at p=0.05 between commits X and Y").
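The core idea can be caricatured in a few lines (this is not what such tools actually implement, real change-point methods like e-divisive are far more careful, it's just the flavor): scan every split point of the score series and report the one with the biggest mean shift.

```python
def crude_change_point(scores):
    """Return (index, mean shift) of the split that maximizes the
    difference in mean score between the two resulting segments.
    Real tools add significance testing and find multiple change points."""
    best_i, best_gap = None, 0.0
    for i in range(2, len(scores) - 1):
        left, right = scores[:i], scores[i:]
        gap = abs(sum(right) / len(right) - sum(left) / len(left))
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i, best_gap
```

On a series that drops from a mean of 10 to 12 (say, seconds per run) at commit 10, this points straight at the split; the hard part the real tools solve is deciding when a shift is statistically significant rather than noise.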
Basically if anyone has ever proposed "performance tests" ("we'll run the benchmark and fail CI if it scores less than X!") you usually need to be pretty skeptical (it's normally impossible to find an X high enough to detect issues but low enough to avoid constant flakes), but with fancy tools like this you can say "no to performance tests, but here's a way to do perf analysis in CI".
IME it's still tricky to get these things working nicely, it always requires a bit of tuning and you are gonna be a bit out of your depth with the maths (if you understood the inferential statistics properly you would already have written a tool like this yourself). But they're fundamentally a good idea if you really really care about perf IMO.