> if the agent has difficulty generating Haskell code then that suggests agents aren't capable of reliably generalizing beyond their training data.
doesn't that apply to flesh-and-bone developers? Ask someone who's only worked in Python to implement their current project in Haskell and I'm not so sure you'll get very satisfying results.
> doesn't that apply to flesh-and-bone developers?
No, it does not. If you have a developer who knows C++, Java, Haskell, etc. and you ask that developer to re-implement something from one language in another, the result will be good. That is because a developer knows how to generalize from one language (e.g. C++) and then write something concrete in the other (e.g. Haskell).
One language in a category to another in the same category, yes. "Category" here being something roughly like "scripting, compiled imperative, functional". However, my experience is that if you want to translate to another category and the target developer has no experience in it, you can expect very bad results. C++ to Haskell is among the most pessimal such translations. You end up with the "writing X in Y" problem.
Your argument fails where it equates someone who only codes in one language to an LLM that is usually trained in many languages.
In my experience, a software engineer knows how to program and has experience in multiple languages. Someone with that level of experience tends to pick up new languages very quickly because they can apply the same abstract concepts and algorithms.
If an LLM that has a similar (or broader) data set of languages cannot generalise to an unknown language, then it stands to reason that it is indeed only capable of reproducing what’s already in its training data.
The hard bit of programming has never been knowing the symbols to tell the computer what to do. It is more difficult to use a completely unknown language, sure, but the paradigms and problem-solving approaches are identical, and that's the actual work, not writing the correct words.
Saying that the paradigms of Python and Haskell are the same makes it sound like you don't know either or both of those languages. They are not just syntactically different. The paradigms literally are different. Python is a high-level, duck-typed OO scripting language and Haskell is a non-OO, strongly typed functional programming language. They're extremely far apart.
Other people have replied, but to clarify my point: while two languages may operate with a focus on two separate paradigms, the actual paradigms do not vary. OOP is still OOP whatever language you use, same for functional et al. Sure, some languages are geared towards being used in certain ways, some very much so, but if you know the actual paradigms, the language is largely irrelevant.
They are different, but on some fundamental level, when you're writing code you're expressing an idea, and that idea is the same. The same way a photograph and a drawing of a cat are obviously different and made in vastly different ways, but they're still representations of a cat. It's all lambda calculus, Turing machines, RAM machines, combinator logic, Post's dominoes, etc., etc. in the end.
They’re both Turing complete, which makes them equivalent.
Code is a description of a solution, which can be executed by a computer. You have the inputs and the outputs (we usually split the former into arguments and environment, and we split the latter into side effects and return values). Python and Haskell are just different towers of abstraction built on the same computational ground. The code may not be the same, but the relation between inputs and outputs does not change if they solve the same problem.
> The paradigms literally are different. […] They’re extremely far apart.
And yet, you can write pure-functional thunked streams in Python (and have the static type-checker enforce strong type checking), and high-level duck-typed OO with runtime polymorphism in Haskell.
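For illustration, here is a minimal sketch of pure-functional thunked streams in Python; the names (`nats`, `smap`, `take`) and the cell representation are my own choices, not from any particular library:

```python
from typing import Callable, Optional, Tuple

# A stream cell is (head, thunk returning the rest), or None for the empty stream.
# Nothing past the head is computed until the thunk is forced.

def nats(n: int = 0):
    """Infinite stream of naturals, built lazily via thunks."""
    return (n, lambda: nats(n + 1))

def smap(f, stream):
    """Map f over a stream without forcing anything beyond the head."""
    if stream is None:
        return None
    head, rest_thunk = stream
    return (f(head), lambda: smap(f, rest_thunk()))

def take(k: int, stream) -> list:
    """Force and collect the first k elements."""
    out = []
    while k > 0 and stream is not None:
        head, rest_thunk = stream
        out.append(head)
        stream = rest_thunk()  # force the next cell only when needed
        k -= 1
    return out

print(take(5, smap(lambda x: x * x, nats())))  # [0, 1, 4, 9, 16]
```

The infinite stream of naturals never blows up because each `lambda` defers the recursive call until `take` actually demands the next element.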
The hardest part is getting a proper sum type going in Python, but ducktyping comes to the rescue. You can write `MyType = ConstructA | ConstructB | ConstructC` where each ConstructX type has a field like `discriminant: Literal[MyTypeDiscrim.A]`, but that's messy. (Technically, you can use the type itself as a discriminant, but that means you have to worry about subclasses; you can fix that by introducing an invariance constraint, or by forbidding subclasses….) It shouldn't be too hard to write a library to deal with this nicely, but I haven't found one. (https://pypi.org/project/pydantic-discriminated/ isn't quite the same thing, and its comparison table claims that everything else is worse.)
It allows you to download Patreon-exclusive videos for e.g. viewing them on a flight, similar to how YouTube does it. It's literally the only reason I have it installed as an app.
Maybe it's just me, but the very first example (the contact form) looks better (easier to read) on the left than the text in empty space on the right (which is supposed to be the good design)...
It's not just you, I didn't even open the link and know exactly which two examples you're talking about because I left this same comment on HN a while ago.
So much of modern design is fashion yet the designers pretend it isn't. Like it's some scientifically provable truth or axiom that faint lines between list items is "bad".
But not all of us are working in an open-air office... A brightly lit office after 5pm in winter (i.e. after sunset) doesn't mean dark mode is the best option.
No, you change your OS preference and then most websites that are coded correctly will follow. I do it by sunrise and sunset but have a key binding to override it.
A person who cares about coding will inevitably code more than 9-5 and consequently get familiar with new syntax.
A person who invests time into their language knowledge will not have issues handling new syntax, because they spend as much time as necessary to get familiar with it.
So the criterion is being a 9-5er who doesn't particularly care about coding and doesn't invest time into their language knowledge.
Not GP, but I assume the suggestion is that it's difficult to stay abreast of new developments within the constraints of a typical work day. Especially if your job utilises older technologies, as most do.
If a customer's balance is under $1 at the end of the month, we delay charging them for up to 60 days and send email reminders. If it's still under $1 after 60 days, we charge at least $0.50 and credit the difference (after fees) to their account for future use.
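The policy above can be sketched roughly as follows. This is a hypothetical illustration: the function and constant names are made up, amounts are in integer cents to avoid float rounding, and the "(after fees)" adjustment to the credit is left out:

```python
MIN_CHARGE_CENTS = 50    # charge at least $0.50
THRESHOLD_CENTS = 100    # small-balance threshold: $1
GRACE_DAYS = 60          # how long charging is deferred

def settle(balance_cents: int, days_deferred: int):
    """Return (charge_cents, credit_cents) for a month-end balance."""
    if balance_cents >= THRESHOLD_CENTS:
        return balance_cents, 0          # normal charge, nothing credited
    if days_deferred < GRACE_DAYS:
        return 0, 0                      # keep deferring; send reminder instead
    charge = max(balance_cents, MIN_CHARGE_CENTS)
    return charge, charge - balance_cents  # overcharge credited back (pre-fee)
```

For example, a $0.30 balance still unpaid after 60 days would be charged $0.50 with $0.20 credited (before fees).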
Funny nitpick: this definition applies to most drones, because most drones sold are x-copters and do not have wings; they always take off and land vertically.
Yeah, it's a strange term because it probably originated relative to fixed-wing planes, i.e. a VTOL plane. But now multicopters are the predominant species, so VTOL can sound redundant to most drone builders today.
We could extend the argument further. Why build a self-driving vehicle at all? Build a humanoid robot to drive the car for you! The argument that computer systems can outcompete human drivers without using lidar is at least reasonable, although not yet proven.
(I didn't want to just make sure - this is a stab)