> It is with great joy that I have accepted the invitation to write you a personal letter. Even in your scientific writings your presence as a person is very strong, which is unusual for a mathematician. The traces of your personal life and your dramas extend beyond the limits of your own personal circumstances to concern us all ...
>
> ...
>
> Perhaps the greatest catastrophe of anti-scientific computationalism can be seen in the recent theory of "The End of Theories". In a series of widely quoted articles, computer scientists and managers of very large databases explain that "correlation supersedes causation", and that science can advance even without coherent models or unified theories. In short, networked computers, by bringing to light very extended correlations in huge databases, make it possible to predict and act without the need to "understand": scientific intelligibility is an uncertain luxury, subjective and outdated, and theories are fallible proposals. Data, especially in large quantities – tera-terabytes, Big Data – are objective, a new form of the absolute, individually exact, expressed in digits. Thus, they argue, the larger databases become, the more the statistical regularities brought to light by computers can govern us, without any need to understand the meaning of the correlations, to interpret them, or to form theories about them.
>
> Fortunately, mathematics allows us to demonstrate the absurdity of these claims: Cristian Calude and I have written an article about this. Precisely the immensity of the data involved has allowed us to apply the theorems of Ramsey and van der Waerden. These make it possible to show that, given any "regularity" (any correlation between sets of numbers), one can find a number p large enough that every set with at least p elements contains a regularity (a correlation between numbers) with the same structure. Now, since this holds for every sufficiently large set (one with at least p elements), it also holds when the set is generated … by a random process. Indeed, as we observe, almost all sufficiently large sets of numbers are algorithmically random (one can give a mathematical definition of this, in terms of incompressibility); that is, the proportion of non-random sets tends to 0 as p goes to infinity. So, if you observe regularities in ever larger databases, it is increasingly likely that the data are due to chance; in other words, they are perfectly meaningless and allow neither prediction nor action.
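The van der Waerden half of this argument is easy to check empirically. Here is a minimal Python sketch (my own illustration, not from the Calude–Longo paper): it draws random 0/1 sequences and looks for a monochromatic arithmetic progression, a simple stand-in for a "regularity". Van der Waerden's theorem guarantees such a progression in every 2-colored sequence once the length reaches the threshold W(2, k), so past that point the "pattern" appears in pure noise every single time:

    import random

    def has_mono_ap(bits, k):
        """True if the 0/1 sequence has k positions a, a+d, ..., a+(k-1)*d
        all holding the same value (a monochromatic arithmetic progression)."""
        n = len(bits)
        for a in range(n):
            for d in range(1, (n - 1 - a) // (k - 1) + 1):
                if len({bits[a + i * d] for i in range(k)}) == 1:
                    return True
        return False

    random.seed(0)
    k, trials = 5, 200
    for n in (20, 50, 178, 200):
        hits = sum(has_mono_ap([random.getrandbits(1) for _ in range(n)], k)
                   for _ in range(trials))
        # past n = W(2,5) = 178, every 2-colored sequence is guaranteed a hit
        print(f"n={n:3d}: {hits}/{trials} sequences contain a {k}-term progression")

The hit rate climbs with n and reaches 100% by n = 178, which is the point of the argument: finding the regularity tells you nothing, because random data of that size could not avoid exhibiting it.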
I did a few days of Advent of Code 2020 in λProlog (as a non-expert in the language), using the Elpi implementation. It provides a decent source of relatively digestible toy examples: https://github.com/shonfeder/aoc-2020
(Caveat: I don't claim to be a λProlog or Elpi expert.)
All the examples showcase the typing discipline that is novel relative to Prolog, and, towards day 10, uses of lambda binders, hereditary Harrop formulas, and higher-order niceness show up.
This will soon be happening with our parents' social security checks, our friends' cancer treatment plans, our international flight logistics, our ISPs' routing configurations, ...
Search should be a public service, open and transparent, funded by tax revenue, and maintained for the public good. It is too important a service these days to leave to profiteers (who have repeatedly demonstrated that they are not responsible or responsive stewards of the public good).
This is great. If you think that the phenomenon of human-like text generation evinces human-like intelligence, then this should be taken to evince that the systems likely have dementia. https://en.wikipedia.org/wiki/Montreal_Cognitive_Assessment
Imagine if I asked you to draw a clock pixel by pixel and operate it via HTML, or to create a JPEG with pencil and paper, and have it be accurate. I suspect your hand-coded work would be off by an order of magnitude in comparison.
I think the big thing (for me, potentially) is the ability to postpone conflict resolution during a rebase. That can be quite painful in regular old git, but git-mediate makes it less painful in my particular workflow.
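For anyone unfamiliar, a rough sketch of how it works (as I understand it, so treat the details as approximate): git-mediate wants diff3-style conflict markers, which show the base version alongside the two sides. You resolve by applying one side's change to the other sections; once two of the three sections match, running the tool collapses the hunk for you:

    # one-time setup so conflict hunks include the base version
    git config --global merge.conflictstyle diff3

    # a conflict hunk then looks like:
    # <<<<<<< HEAD
    # print("hello, world!")
    # ||||||| merged common ancestors
    # print("hello world")
    # =======
    # print("hello world")  # greet the user
    # >>>>>>> add-comment
    #
    # apply HEAD's change (the added comma) to the other two sections,
    # then let git-mediate resolve and stage what's now unambiguous:
    git mediate

The nice part is that you can do this hunk by hunk, whenever you get around to it, rather than being forced to settle everything mid-rebase.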
We'll see once better non-CLI UX appears. I'm low-key excited about what could be possible in this space.
I am excited too! It is probably too much to hope for, but I am nonetheless hoping that magit gets a jj backend before I have enough motivation or need to learn a new tool to do the same old stuff :D