I think the solution is to launder all research papers through LLMs so the papers are no longer copyrightable, and let the rich journal owners fight with the LLM owners.
I've read a lot about content-defined chunking and recently heard about monoidal hashing. I haven't tried it yet, but monoidal hashing reads like it would be all-around better; does anyone know why or why not?
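From what I've read, the appeal is the monoid law itself: hash (x ++ y) == hash x <> hash y, with the hash of the empty string as identity. That would let you hash chunks independently (even in parallel) and splice in edits without rehashing everything downstream. Here's a minimal sketch of the idea as I understand it, using a polynomial rolling hash mod a Mersenne prime; all the names are mine and purely illustrative:

    import Data.Char (ord)

    p, b :: Integer
    p = 2 ^ 61 - 1  -- Mersenne prime modulus
    b = 257         -- base, larger than the byte alphabet

    -- A hash carries the length it covers, so combining knows how far
    -- to "shift" the left-hand digest before adding the right-hand one.
    data MHash = MHash { digest :: Integer, len :: Integer }
      deriving (Eq, Show)

    -- The defining law: hash (x ++ y) == hash x <> hash y.
    instance Semigroup MHash where
      MHash d1 l1 <> MHash d2 l2 =
        MHash ((d1 * powMod b l2 p + d2) `mod` p) (l1 + l2)

    instance Monoid MHash where
      mempty = MHash 0 0

    -- Modular exponentiation by squaring.
    powMod :: Integer -> Integer -> Integer -> Integer
    powMod _ 0 _ = 1
    powMod x n m
      | even n    = sq
      | otherwise = sq * x `mod` m
      where
        h  = powMod x (n `div` 2) m
        sq = h * h `mod` m

    hashByte :: Char -> MHash
    hashByte c = MHash (fromIntegral (ord c)) 1

    hashStr :: String -> MHash
    hashStr = foldMap hashByte

    main :: IO ()
    main =
      -- Associativity means the split point doesn't matter; prints True.
      print (hashStr "hello world" == hashStr "hello " <> hashStr "world")

The obvious catch, as I understand it, is that simple polynomial constructions like this make it easy to forge collisions, so they'd be fine for picking chunk boundaries or dedup but not for integrity checks.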
At the 2018(?) ICFP, I sat between John Wiegley and Conal Elliott.
They talked about expressing and solving a programming problem in category theory, and then mapping the solution into whatever programming language their employer was using.
From what they said, they were having great success producing efficient and effective solutions by following this process.
I decided to look for other cases where this process worked.
I found several. One off the top of my head is high-dimensional analysis: t-SNE was doing okay, but a group decided to start from category theory and try to build something better, and the result was UMAP, which is much better.
In short, this does work, and you can find much better solutions this way.
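To make the "solve it in CT, then transcribe" step concrete, here's a toy example of my own (not something they showed me): lists are the initial algebra of the base functor F a x = 1 + a * x, so every F-algebra induces exactly one fold out of it, the catamorphism. Once that's settled on paper, the Haskell transcription is mechanical:

    -- The base functor for lists: F a x = 1 + a * x.
    data ListF a x = NilF | ConsF a x

    instance Functor (ListF a) where
      fmap _ NilF        = NilF
      fmap g (ConsF a x) = ConsF a (g x)

    -- Its least fixed point; Fix (ListF a) is isomorphic to [a].
    newtype Fix f = Fix (f (Fix f))

    -- The unique homomorphism out of the initial algebra: the fold.
    cata :: Functor f => (f b -> b) -> Fix f -> b
    cata alg (Fix x) = alg (fmap (cata alg) x)

    -- "Sum" is just the catamorphism of one particular algebra.
    sumAlg :: ListF Int Int -> Int
    sumAlg NilF        = 0
    sumAlg (ConsF a s) = a + s

    toFix :: [a] -> Fix (ListF a)
    toFix = foldr (\a r -> Fix (ConsF a r)) (Fix NilF)

    main :: IO ()
    main = print (cata sumAlg (toFix [1, 2, 3, 4]))  -- prints 10

The payoff is that sum, length, map, and so on all fall out as catamorphisms of different algebras, and the uniqueness theorem hands you fusion laws for free.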
I suspect the H pattern they mean is the one where the gear shift is on the steering column, not on the floor.
I long ago owned a 1945 Dodge truck with that shifting setup.
Nope, but I appreciate the effort. I threw the H-pattern reference in there without thinking about it too much, mostly to distinguish it from situations where you're shifting but not with a lever.
(I've been stuck on planes for 20 hours with little sleep, so ignore it if it doesn't make too much sense lol)