This article is a great read; I read it a while back and was blown away. I had never given any thought to the advantages of one language over another; I always looked at it through a programmer's lens. Just as all modern programming languages are Turing complete and therefore equivalent (preferences aside), I assumed the same would be true for most spoken languages. Ithkuil and the like really show that there is a lot more room for variety than most people imagine.
To say that all languages are equivalent because they are Turing complete is just not true. It means they can decide the same class of problems; it does not mean they can or do decide them in the same way. In particular, some languages are inherently faster than others.
Not sure where I read it, but it was recent: languages that sound faster really just tend to use more syllables per concept, and all languages tend to convey concepts at roughly the same rate.
Whoever said that was not a competent programmer discussing programming languages.
Research into programming languages finds that the number of lines of code written per day tends to be surprisingly consistent. But the amount of functionality embedded in that code varies widely depending on the language.
Faster languages are ones that take over basic mechanics for you (e.g. memory management), infer things for you (e.g. compare C++ to Go), or offer more convenient abstractions (e.g. compare scripting languages to C or Java). The difference is usually an order of magnitude, but can be more in the right circumstances.
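As an illustration of the abstraction point (my own sketch, not from the thread): counting word frequencies in Python is a few lines, because the built-in `Counter` and garbage collection absorb the hash table, allocation, and cleanup code a C version would need by hand.

```python
from collections import Counter

def word_counts(text):
    # The dict/Counter abstraction plus automatic memory management
    # replaces what would be dozens of lines of manual hashing,
    # malloc/free, and string handling in C.
    return Counter(text.lower().split())

counts = word_counts("the quick brown fox jumps over the lazy dog the end")
print(counts["the"])  # 3
```

The functionality per line is much higher; the underlying work being done is the same.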
I read the same thing; the article I read also said that Chinese encodes more information per sound, so it sounds like the speakers are speaking more slowly. English was somewhere in the middle.
Measuring computation time means counting primitive operations. I'd say a language is a collection of capabilities that play nicely together. Given one language, we can create a second that is also RE by taking each instruction in the first and appending a nop, or a computation of ack(10, 10) whose result is thrown away, or whatever. Any algorithm that the second language computes in at least one step runs in fewer primitive steps in the first language, by design.
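A toy version of that construction (my illustration, with made-up step counters): two routines computing the same sum, where the second pads every instruction with a wasted step, so it performs strictly more primitive operations on any nonempty input.

```python
def sum_fast(xs):
    # one primitive operation per element
    total, steps = 0, 0
    for x in xs:
        total += x
        steps += 1
    return total, steps

def sum_padded(xs):
    # same function, but every instruction is followed by a nop
    # whose result is thrown away, doubling the step count
    total, steps = 0, 0
    for x in xs:
        total += x
        steps += 1
        _ = 0  # nop
        steps += 1
    return total, steps

print(sum_fast([1, 2, 3]))    # (6, 3)
print(sum_padded([1, 2, 3]))  # (6, 6)
```

Both languages decide the same problems; they just don't do it equally fast.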
Beyond the theoretical consideration, in practice there is much more than preference separating languages. If they were all the same, you shouldn't have any trouble implementing combinators in Brainfuck.
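For contrast, here is how easily the S and K combinators fall out in a language with first-class functions (a sketch in Python; doing the equivalent in Brainfuck is the hard exercise the comment alludes to):

```python
# K returns its first argument and discards the second.
K = lambda x: lambda y: x
# S applies f to x and to (g applied to x).
S = lambda f: lambda g: lambda x: f(x)(g(x))
# The identity combinator is derivable: I = S K K.
I = S(K)(K)

print(I(42))        # 42
print(K("a")("b"))  # a
```

Two lines in a language built around closures, versus a serious project in one that is merely Turing complete.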
I understand; I'm saying that this article suggests there may be languages that enable us to solve entirely different problems.
Maybe the comparison of a classical computer vs. a quantum computer is more apt. There are many problems both can solve, but for some of them a quantum computer is (as far as we know) exponentially more efficient. For some linguistic problems, a certain language may likewise be much more efficient.
In the same way Lisp or Haskell enable us to solve entirely different problems than writing directly in assembly, perhaps, since you avoid dealing with all the mundane details and can reason about your program at a much higher level?