Agreed, but for some reason the majority of folks don't care about these externalities at all.
I see the externalities and the harm they cause and may yet cause, and at the same time I find it increasingly difficult to avoid using LLMs, as there is personal value to be extracted. Further, so many others are using LLMs to pump their productivity numbers (reality may differ, and time will tell) that it's hard to keep up without using LLMs.
That is how the teachers described it to us (the testing software being adaptive). As far as I know, we only use the diagnostic software. I wasn't aware of the existence of a learning version of this software.
My child's school is the same, they use the diagnostic software. My question was more toward wondering if the diagnostic is as anemic as the learning version in terms of being adaptive.
Chicago does have rules for timely snow removal on sidewalks. In practice, I have never heard of anyone receiving a fine, even when the walk in front of a property remains uncleared for weeks on end. There is essentially little to no enforcement.
While I think this is good advice, the fact that it's true feels backward to me. "We have a legal or contractual obligation to be less secure than we otherwise would be." Just seems silly.
Welcome to the reality of most of the "information security" business, which is largely compliance by checkbox. A significant proportion of encrypted Internet traffic transiting government agencies or major enterprises gets decrypted in flight for inspection — literally inserting a black box with privileged MITM capabilities into otherwise secure protocols — purely to check a compliance box. And that's not even the worst sin.
There's no insecurity like compliant cybersecurity :)
You could make the same argument for SQLite: the threshold is lower, but similarly, you can get pretty far with it. Then decide what's next once you're outgrowing it.
As in the ranking/mental model increasingly being used by management in upmarket organizations.
A Coding copilot subscription paired with a competent developer dramatically speeds up product and feature delivery, and also significantly upskills less competent developers.
That said, truly competent developers are few and far between, and the fact that developers in (e.g.) Durham or remote roles are demanding a circa-2023 SF base makes the math to offshore more cost effective — even if the delivered quality is subpar (which isn't necessarily true), it's good enough to release, and can be refactored at a later date.
What differentiates a "competent" developer from an "average" developer is the learning mindset. Plenty of people on HN kvetch about being forced to learn K8s, Golang, cloud primitives, prompt engineering, etc., or about not working in a hub, and then bemoan the job market.
If we are paying you IB Associate level salaries with a fraction of the pedigree and vetting needed to get those roles, upskilling is the least you can do.
We aren't paying mid 6 figure TC for a code monkey - at that point we may as well entirely use AI and an associate at Infosys - we are paying for critical and abstract thinking.
As such, AI in the hands of a truly competent engineer is legitimately transformative.
Thanks for sharing. Your process is something I have been trying to be more deliberate about, as I value and find happiness in some similar things, and I appreciate how you describe it.