In "The Information," James Gleick discusses a concept related to our current discourse. In the days when computers were merely an array of switching circuits, luminaries such as Claude Shannon believed that "thinking" could be captured in a structured format of logical representation.
However, even with formally composable languages like JavaScript, a semblance of unpredictability — akin to the "faerie logic" metaphor — still persists. Languages evolve over time; Python, for instance, with its various imports that constantly disrupt my code, serves as a good example. This is perhaps the reason behind the emergence of containers to ensure code consistency.
While some elements may be more "composable" than others, it appears increasingly unrealistic in today's world to encapsulate thought processes or interactions with systems within a rigid logical framework. Large Language Models (LLMs) will keep evolving and improving, making continual interaction with them unavoidable. The notion that we can pass a set of code or words through them once and expect a flawless result is simply illogical.
I firmly believe that any effective system should incorporate a robust user interaction component, regardless of the specific task or problem at hand.
It's not so much about formal logic, but general predictability.
even with formally composable languages like JavaScript, a semblance of unpredictability — akin to the "faerie logic" metaphor — still persists
And they're ridiculed for it, and as you state, we design around them or replace such systems entirely.
making continual interaction with them unavoidable
Technology is never unavoidable or "inevitable". We can choose not to use it, or when to use it.
The notion that we can pass a set of code or words through them once and expect a flawless result is simply illogical.
Yet that is what we expect when we put these systems into production use, especially when many proposed use cases are user-facing and subject to injection attacks.
Whether it be the writing of adcopy, the processing of loan applications, or generating code, mistakes in these tasks have very real consequences.
I don't disagree we can choose to use it or not, but my point was more meant to indicate that, if we want a good experience with LLMs, we have to continue to interact with them to achieve good results.
However, even with formally composable languages like JavaScript, a semblance of unpredictability — akin to the "faerie logic" metaphor — still persists. Languages evolve over time; Python, for instance, with its various imports that constantly disrupt my code, serves as a good example. This is perhaps the reason behind the emergence of containers to ensure code consistency.
While some elements may be more "composable" than others, it appears increasingly unrealistic in today's world to encapsulate thought processes or interactions with systems within a rigid logical framework. Large Language Models (LLMs) will keep evolving and improving, making continual interaction with them unavoidable. The notion that we can pass a set of code or words through them once and expect a flawless result is simply illogical.
I firmly believe that any effective system should incorporate a robust user interaction component, regardless of the specific task or problem at hand.