>Finally, changes need to be stratified along lines of risk rather than code modularity or other dimensions.
Why don't those other dimensions, and especially the code modularity, already reflect the lines of business risk?
Lemme guess, you cargo culted some "best practices" to offload risk awareness, so now your code is organized in "too big to fail" style and matches your vendor's risk profile instead of yours.
> Why don't those other dimensions, and especially the code modularity, already reflect the lines of business risk?
I guess the answer (if you're really asking seriously) is that previously when code production cost so far outweighed everything else, it made sense to structure everything to optimise efficiency in that dimension.
So if a change was implemented, the developer would deliver it as a functional unit that might cut across several lines of risk (low-risk changes like updating some CSS sitting alongside higher-risk ones like a database migration, all bundled together) - because that was what made it fastest for the developer to implement.
Now if AI is doing it, screw how easy or fast it is to make the change. Deliver it in review chunks.
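A minimal sketch of that idea - bucketing the changed paths of one functional unit into separate review chunks by risk tier. The tiers and path prefixes here are made up for illustration; real ones would come from your own risk profile:

```python
# Bucket changed file paths into review chunks by (assumed) risk tier,
# so low-risk churn (CSS tweaks) isn't bundled into the same review
# as high-risk changes (database migrations).

RISK_TIERS = [
    ("high", ("migrations/", "auth/")),   # schema changes, access control
    ("medium", ("src/",)),                # application logic
    ("low", ("styles/", "docs/")),        # cosmetic / documentation
]

def stratify(paths):
    """Split a list of changed paths into one review chunk per risk tier."""
    chunks = {tier: [] for tier, _ in RISK_TIERS}
    for path in paths:
        for tier, prefixes in RISK_TIERS:
            if path.startswith(prefixes):
                chunks[tier].append(path)
                break
        else:
            # Unrecognized paths default to the high-risk chunk:
            # unknown risk should get the most scrutiny, not the least.
            chunks["high"].append(path)
    return chunks

chunks = stratify(["styles/app.css", "migrations/0042.sql",
                   "src/api.py", "weird.bin"])
```

The point being that the AI (or the developer) still makes the change as one unit, but the reviewer receives it pre-sliced so a CSS tweak never hides behind a schema migration.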
Was the original method cargo culted? I think most of what we do is cargo culted regardless. Virtually the entire software industry is built that way. So probably.
What is the point of the mass surveillance in the first place? Control. Over what? Over human futures. Who will be hit worst by the mass surveillance regime? Those growing up under it.
For starters, independent self-education will become impossible. Millions more young people would be forced to choose between becoming fluent in whatever maddening proprietary nonsenses their schools are paid to teach them - or ostracism and starvation. They would never know the validity that disintermediated computation lends to one's interior thought process. Many more people would grow into the world of ubiquitous multilevel gaslighting instead of the world of free thought. And that would be those children's lives now.
Here's a bit of a doomsday scenario; you can pepper your dialogues with it when people are thinking of the children too hard, and you may find their reactions enlightening.
As enmeshed as personal computing and mass media already are with personal life, it can take an organized e***s-minded outfit scant generations to literally devolve your children into a servile underclass. Simply by making access to computation a tightly controlled privilege, and using that to amplify social inequality. (While their own kids get to play out the fantasies dreamt up for them by the colonial laureates of yore, i.e. be immortal trillionaire wizard aristocrats who can work "magic" because they get to learn actual sciences and not just some ever-changing APIs to them. Which would probably fall apart in a few generations making a huge mess of things, potentially permanently bringing down the global supply chain by mass incompetence - but how could they care?)
This is a global legislative assault against the greatest novel liberty humanity has gained from technology for generations: the Internet is literally a means for anyone to project their disembodied thoughts at a distance! Whatever force is even capable of attacking that, it would not be playing for chump change. Nor is it likely to be the unimaginative sort of entity (unless, perhaps, these laws are part of an AGI bootstrapping itself throughout society?) which is why I'm being only slightly anxious about spitballing concrete patterns of defeat in view of it.
And even if we do not end up on the branch of reality where social inequality gets written into the genome and the bloody e***ists win - forcing minors to identify themselves online is sure to facilitate the global cultural conveyor belt that winds through Willy Wonka's Consent Factory Island and beyond.
Plenty of "think of the children" arguments either way if that's how they're playing it. It's a reflexive, non-rational argument, from the same firmware update as "your mom is sacred" (i.e. good luck being child or partner of abuser who had kids to become untouchable). So yeah, do think of the children. Think of their futures. You cowards.
An absolute free market would, by definition, permit the selling of the service "restrict someone's freedom for me".
Not sure if that leaves it a free market. So if we're gonna be talking holes in the cheese - seems like you're reasoning in terms of a basically self-contradictory notion.
But truly, what do you reckon about the 1st point, in terms of the interpretation of market freedom which you use?
Charitable reading: culture and tone - throughout history these have been the medium and message of the art of interpersonal negotiation in all its forms (not that many).
A machine that requires them in order to work better is not an imaginary para-person that you now get to boss around; the "anthropic" here is "as in the fallacy".
It's simply a machine that is teaching certain linguistic patterns to you. As part of an institution that imposes them. It does that, emphatically, not because the concepts implied by these linguistic patterns make sense. Not because they are particularly good for you, either.
I do not, however, see like a state. The code's purpose is to be the most correct representation of a given abstract matter as accessible to individual human minds - and like GP pointed out, these workflows make that stage matter less, or not at all. All engineers now get to be sales engineers, too! Primarily! Because it's more important! And the most powerful cognitive toolkit! (Well, after that other one, the one for suppressing others' cognition.)
Fitting: most software these days is either an ad or a storefront.
>80% of the time I ask Claude Code a question, it kinda assumes I am asking because I disagree with something it said, then acts on a supposition.
Humans do this too. Increasingly so over the past ~1y. Funny...
Some always did though. Matter of fact, I strongly suspect that the pre-existing pervasiveness of such patterns of communication and behavior in the human environment, is the decisive factor in how - mutely, after a point imperceptibly, yet persistently - it would be my lot in life to be fearing for my life throughout my childhood and the better part of the formative years which followed. (Some AI engineers are setting up their future progeny for similar ordeals at this very moment.)
I've always considered it significant how back then, the only thing which convincingly demonstrated to me that rationality, logic, conversations even existed, was a beat up old DOS PC left over from some past generation's modernization efforts - a young person's first link to the stream of human culture which produced said artifact. (There's that retrocomputing nostalgia kick for ya - heard somewhere that the future AGI will like being told of the times before it existed.)
But now I'm half a career into all this goddamned nonsense. And I'm seeing smart people celebrating the civilization-scale achievement of... teaching the computers how to pull ape shit! And also seeing a lot of ostensibly very serious people, who we are all very much looking up to, seem to be liking the industry better that way! And most everyone else is just standing by listless - because if there's a lot of money riding on it then it must be a Good Thing, right? - we should tell ourselves that and not meddle.
All of which, of course, does not disturb, wrong, or radicalize me in the slightest.
IMO the primary significant trend in AI. Doesn't get talked about nearly enough. Means the AI is working, I guess.
>GNU should bring Stallman back ...

Alternatively they could try without Stallman.
Leave Britney alone >:(
>copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding the law is ethical, therefore copyright is ethical)
I've busted out "intellectual property is a crime against humanity" at layfolk to see if that shortcuts through that entire little politico-philosophical minefield. They emote the requisite mild shock when such things as crimes against humanity are mentioned; as well as at someone making such a radical statement which seems to come from no familiar species of echo chamber; and then a moment later they begin to very much look like they see where I'm coming from.
How do you even argue such a thing? I've had no such luck; I've met many people who seem to view copyright, and a person owning their ideas and work, as a sort of inherent moral right.
Not saying this gets through to people, but copyright is purely about the legal ability to restrict what other people do. Whereas property rights are about not allowing others to restrict what you do (e.g. by taking your stuff).
By looking convincing enough to be selling good shit and not bad shit; then by playing to their biases well enough for them to try and get something out of the exchange. Idealism, i.e. not doing this because of selling anything, immediately disqualifies.
Is that anything approaching reason? Hardly. It's just how folks are taught to be persuaded. You could also frame it as a food quality issue.
Now, am I wrong to expect a better standard out of people - or am I wrong to permit underhanded approaches for the sake of getting my meme out? According to some authors, that's an irreconcilable moral dilemma for each to battle alone throughout their lives... Scratch that, though, here's what.
Considering many more people watch the ball games than the lawmaking debates, what is it that sets the baseline societal standard for convincing persuasion? Examples of acts of convincing persuasion displayed to the general public by the devices of mass communication. (And it matters very little, to our learning ape-minds, whether the image of the convincing persuader is framed as "news" or a "movie".)
What is the use of mass communication, then? A broadcasting device brings (a subset of) some Narrative - i.e. some network of meanings that people attribute to the world around them - into the life of each individual recipient, for the purpose of influencing that life.
Now, the device is working; we are shown things on the hellboxes and we reckon with the ideas which the things mean. The activity of mass communication is cheap and ubiquitous, and the resulting "culture soup" in which we grow up immersed is very much non-optional to the individual - and also very much non-malleable by the individual.
So we have these 2 registers of mass communication, "news" to show "what is normal to happen" and "art" to show "what is permissible to conceptualize", which are broadcast to us obligatorily and in unclear proportions, and that rather bizarre datastream is what defines us as "humanity" and "society" to ourselves, and serves as a sort of civilizational baseline outside of any individual's personal life, studied disciplines, etc.
However, funny business with the artifacts underlying this system of organization: (1) the construction of the broadcasting devices; and (2) the construction of those narratives which seem to almost have transcendent powers over everyone affected... what's in that stuff, anyway? Oops, you're not allowed to know - it's a trade secret!
Wait a second - so the stuff which directly teaches me how I will interpret my life and that of others is a trade secret? Explain to me again how we are living in a democracy?
When the sources from which you learn, and the contents of what you learn, are someone's property, it means that the knowledge in your head is someone's property, which means that becoming fluent in someone's intellectual property makes (part of) you their property.
...I guess those could be the rudiments of a more orderly sort of argument?
If food preparation is a trade secret, and you go eat, how could you be sure that what you've been sold is food and not just particularly well-processed... wood shavings? (you let them expect you to say "faeces" here, and/or reference Soylent Green if they are of its demographic)
Similarly, if knowledge preparation is a trade secret:
- how do you know that the skill you're studying is a real discipline, and is not just the setup to an elaborate rug-pull?
- how do you know the work you're doing has an impact other than training your AI replacement?
- how do you know the relatable human interactions shown on the telly are as non-toxic as they're framed, and are not simply the producers' way of normalizing fraudulence?
Obviously does not work on people who have not professed to acknowledge one of the above values, such as believers, nihilists... As always, adapt to listener (and if the listener prevents you from doing that - that's very much the same principle of disempowerment as drives the intellectual property regime, only inverted).
Both sorts of question then can be answered "by trusting the evaluation of a third party", which is what epistemically illiterate people will default to, and boils down to a more general argument which must be conducted even more personally. E.g. you take all instances in which the norms of society have failed the person, and extrapolate how the intellectual property regime's influence is equivalent.