They were definitely totalitarian, just with a slightly different mix of ideologies. "Fascist" is a fairly good description here: it describes close collaboration between government and corporations to advance national goals. The US has had somewhat fascist tendencies for a long time now.
I don’t get that; the use of these books was instrumental and necessary to the success of the training run. The expected value of these training runs is high, as the build-out of $100 billion+ in infrastructure demonstrates, so the book publishers should at a minimum be paid a licensing fee: a small fraction of every inference run's revenue, or whatever terms they negotiate. The fact that authors and publishers got no say over the conditions under which their intellectual property can be used is pretty outrageous.
The conclusion was that they suffered no legal harm, in that their interests, such as their continued publishing of books, were not affected by LLMs. No one is using AI to compete with publishers; if anything, "authors" might very well use those same publishers to get their generated books onto shelves.
So pretty much the same as the Authors Guild, Inc. v. Google, Inc. case, which ruled it fair use as a transformative work. If indexing the world's books is transformative, then a neural net trained on them is certainly a transformative work and fair use.
In fact, 5G and all previous standards have a provision for lawful intercept, so your domestic intelligence service and police can always turn it into a listening device.
Curious how this relates to what Lean 4 is doing. I guess in Lean's case some of the data structures are special-cased (Array), and there is no easy way to implement such data structures yourself.
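For context, a sketch of what I understand the special-casing to be (my reading of the Lean 4 runtime, not something stated above): `Array`'s primitives are bound to C implementations via `@[extern]`, and the runtime mutates the array in place when its reference count is 1, so the purely functional API still runs in O(1) per update. A user-defined inductive type doesn't get that treatment.

```lean
-- Array's core operations are compiler-known; e.g. in the
-- Lean 4 prelude, push is (roughly) declared as:
--   @[extern "lean_array_push"]
--   def Array.push (a : Array α) (v : α) : Array α := ...
-- When `a` is uniquely referenced at runtime, push mutates it
-- in place instead of copying the whole array.

-- Ordinary functional-looking code benefits automatically:
def firstN (n : Nat) : Array Nat := Id.run do
  let mut a : Array Nat := #[]
  for i in [0:n] do
    a := a.push i   -- `a` is unique here, so this updates in place
  return a

#eval firstN 5  -- #[0, 1, 2, 3, 4]
```

The same reference-counting trick applies to any type in principle, but only `Array`, `ByteArray`, `String`, and a few others have the extern runtime representation; you can't give your own structure a flat mutable backing store from within the language.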
At some point we will be so tired of distinguishing AI-generated content from human content that we will stop using the Internet, and it will be left to the bots.
It took Cerebras less than a billion dollars to get to where they are now; CPUs are not that hard. You could probably reverse engineer them for ~$100 million.
I mean, in a normal math curriculum you would define only the multiplicative inverse, and then there is a separate way to define fractions if you start out with certain rings. It is kind of surprising to me that they went with a lazy definition of division.
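To make the contrast concrete (assuming the "lazy" definition refers to Lean's convention of making division total): in Lean 4, division is defined everywhere, with division by zero returning 0, rather than being derived from a partial multiplicative inverse as in the textbook field axioms.

```lean
-- Lean 4 makes division total: x / 0 is defined to be 0,
-- so no proof of a nonzero divisor is ever required.
#eval (7 : Nat) / 0   -- 0
#eval (7 : Nat) / 2   -- 3 (truncated natural-number division)

-- The curriculum-style alternative would define only the inverse
-- on nonzero elements and derive  a / b = a * b⁻¹ , leaving b⁻¹
-- partial; Mathlib instead keeps totality by setting 0⁻¹ = 0.
```

The total definition trades faithfulness to the usual axioms for convenience: lemmas don't need side conditions just to be well-formed, though any theorem that actually uses `a / b = a * b⁻¹` with `b ≠ 0` still needs that hypothesis.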
One other thing I've observed is that Claude fares much better in a well-engineered pre-existing codebase. It adapts to most of the style and has plenty of "positive" examples to follow. It also benefits from the existing test infrastructure. It will still tend to go in infinite loops, or introduce bugs and then oscillate between them, but I've found it to be scarily efficient at implementing medium-sized features in complicated codebases.
Yes, that too, but this particular project was an ancient C++ codebase with extremely tight coupling, manual memory management and very little abstraction.