Hacker News | bcye's comments

This seems to only raise issues about taxes but none about the bureaucracy side?

Couldn’t you make a pseudo monorepo via git submodules?
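
Roughly what I had in mind, as a sketch (the umbrella directory name and repo URLs below are placeholders, not anyone's actual setup):

    # Minimal sketch: a thin "umbrella" repo with existing repos pulled in as
    # submodules. The URLs and the pseudo-mono directory name are made up.
    import subprocess

    def git(*args):
        subprocess.run(["git", *args], check=True)

    git("init", "pseudo-mono")
    for url in [
        "https://github.com/example/service-a.git",
        "https://github.com/example/shared-lib.git",
    ]:
        git("-C", "pseudo-mono", "submodule", "add", url)
    git("-C", "pseudo-mono", "commit", "-m", "add submodules")

Cloning the umbrella repo with --recurse-submodules then gets you everything in one tree, though keeping the submodule pointers up to date is its own chore.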

I don't think there's a way to have that work in Claude Code for web, since each checkout there uses a custom GitHub access token scoped to a single repository.

GitHub tokens can span more than one repo or org if the requesting account has access to them. Is this supported on the non-web version?

(I was going to try Claude again this weekend, but when I tried to log in I got an error, and am reminded how much downtime I experience from Anthropic, argh...)


The non-web version uses whatever credentials you have set up yourself, so it works just fine.

Is that host level or can I provide scoped tokens based on what I'm doing?

In other words, do Anthropic tools provide any affordances for this or is it something I have to manage externally?


I'm just talking about the version of Claude Code which runs in containers on their infrastructure here - they call it "Claude Code on the web" (terrible name) and you access it through their native apps or from https://claude.ai/code

That product only works against one GitHub repo at a time. The Claude Code you install and run locally doesn't have a GitHub attachment at all and can run against whatever you check out yourself.

Here's an open feature request about it: https://github.com/anthropics/claude-code/issues/23627


Submodules are a pain; use the dependency management systems for the languages in your monorepo.

This would unfortunately be a rather nuclear option, due to the continent's insane reliance on technology that breaks its unenforced laws.

How about not making these unenforced laws in the first place so that European companies could actually have a chance at competing? We're going to suffer the externalities of AI either way, but at least there would be a chance that a European company could be relevant.

The AI Act absolutely befuddled me. How could you release relatively strict regulation for a technology that isn't really being used yet and is in the early stages of development? How did they not foresee this kneecapping AI investment and development in Europe? If I were a tinfoil hat wearer I'd probably say that this was intentional sabotage, because this was such an obvious consequence.

Mistral is great, but they haven't kept up with Qwen (at least with Mistral Small 4). Leanstral seems interesting, so we'll have to see how it does.


Because the AI act was mostly written to address issues with ML products and services. It was mostly done before ChatGPT happened, so all the foundation model stuff got shoehorned in.

Speaking as someone who's been doing stats and ML for a while now, the AI act is pretty good. The compliance burden falls mostly on the companies big enough to handle it.

The foundation model parts are stupid though.


>Because the AI act was mostly written to address issues with ML products and services. It was mostly done before ChatGPT happened, so all the foundation model stuff got shoehorned in.

It's not an excuse. Anybody with half a working brain should've been able to tell that this was going to happen. You can't regulate a field in its infancy and expect it to ever function.

>The compliance burden falls mostly on the companies big enough to handle it.

You mean it falls on anyone that tries to compete with a model. There's a random 10^25 FLOP compute rule in there. The B300 does 2500-3750 TFLOPS at fp16. 200 of these can hit that compute number in 6 months, which means that in a few years' time pretty much every model is going to hit that.

And if somebody figures out fp8 training then it would only take around 100 of these GPUs to hit it in 6 months.
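
For what it's worth, a quick sanity check of the 200-GPU figure using the comment's own fp16 number (and assuming, optimistically, peak throughput for the full six months):

    # Back-of-the-envelope check, assuming ~3000 TFLOP/s per B300 at fp16
    # and peak utilisation for the whole period (real runs will be lower).
    flops_per_gpu = 3000e12        # 3e15 FLOP per second
    gpus = 200
    seconds = 182 * 24 * 3600      # roughly six months
    total = flops_per_gpu * gpus * seconds
    print(f"{total:.1e} FLOP")     # ~9.4e24, i.e. right at the 10^25 threshold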

The copyright rule and having to disclose what was trained on also mean that it will be impossible to have enough training data for an EU model. And this even applies to people that make the model free and open weights.

I don't see how it is possible for any European AI model to compete. Even if these restrictions were lifted, the increased risk of stupid regulation would still push away investors.


> It's not an excuse. Anybody with half a working brain should've been able to tell that this was going to happen. You can't regulate a field in its infancy and expect it to ever function.

As I said, the core of the AI act was written about supervised ML, not generative ML, as generative ML wasn't as big a deal pre-ChatGPT.

> You mean it falls on anyone that tries to compete with a model. There's a random 10^25 FLOP compute rule in there. The B300 does 2500-3750 TFLOPS at fp16. 200 of these can hit that compute number in 6 months, which means that in a few years' time pretty much every model is going to hit that.

As I also said, the foundation model stuff (including this flops thing) is incredibly stupid. I agree with you on this, but my point is that the core of the AI act was supposed to cover the ML systems built since approx 2010.

> The copyright rule and having to disclose what was trained on also mean that it will be impossible to have enough training data for an EU model. And this even applies to people that make the model free and open weights.

Again, you're talking about generative stuff (makes sense given the absurdly misleading name now) whereas I'm talking about the original AI act, which I read well before ChatGPT happened.

The training data thing is a tradeoff: copyright is far too invasive (IMO) and it's good to be able to use this information for other purposes. However, I personally would be super worried about an ML team that couldn't tell me what data went into their model. The data is core to all ML/AI approaches, so that lack of understanding would make me very sceptical of any performance claims.

Let's be real, the AI companies don't want to say what's in their models because of the rampant copyright infringement, not because of any technical incapability.


Weird product page - why would you put the number of member states and residents Europe has in your "feature" section?


I think they are simply referring to analytical workloads.


Meta is so driven by it, though, that it alone accounts for more than 5 of the 10 largest GDPR fines.


I'm not sure how attacking Greenland would accomplish the goal of more European spending on US weapons.


If indeed this turns out to be a ruse, Greenland conquest would not be Trump's end game. It would be just a performative confrontation to get rid of NATO 1.0. Who is really ready to start WW3 over Greenland?

After NATO 1.0 is declared dead and buried, Trump might as well backpedal and start negotiating NATO 2.0, which would be light on US military commitments and heavy on European arms purchase commitments. And he seems to believe (not unjustifiably - see the Nord Stream sabotage) that the European leaders are spineless enough to accept a NATO 2.0 deal.

This would not be unlike Trump's thinking: "I'll build a wall and the Mexicans will pay for it".

Wild theory, yes. But we live in wild times, unfortunately.


Some, by working for companies (big tech) that have given little resistance to Trump but rather funded his ballroom, etc. Sadly, everyone quitting those companies would not really be a reasonable solution either, though there are more possible actions than that.


This extension gives you more choice than denying or allowing everything, though: you get granular choices automatically applied to all websites where it works.


I think most people don’t want to give consent to any of this so a simple block list is enough.


Well, it does change things if you have more of a choice than reject all or allow all (without needing to go into complicated settings each time). Telemetry is not that unpopular - I'd like devs to fix bugs I encounter.


It seems the feature you are referencing was deprecated?

https://support.mozilla.org/en-US/kb/cookie-banner-reduction

