Hacker News | new | past | comments | ask | show | jobs | submit | login

pklausler's comments

CUDA Fortran was first released in 2007 and now has multiple implementations.

I don't think that BASIC was ever meant to be a better Fortran. Can you substantiate that claim?

> Along with the time sharing system came the new language which they decided to call BASIC. At first it was going to be a subset of Fortran but they decided that no subset of any existing language would be complete enough.

https://www.i-programmer.info/history/people/739-kemeny-a-ku...

> Kemeny and Kurtz realized that if they wanted to reach everyone on campus with their time-sharing vision, they needed to simplify the user interface. The popular programming languages at the time, FORTRAN and ALGOL, were "just too complicated," Kurtz recalled. "They were full of punctuation rules, the need for which was not completely obvious and therefore people werenʼt going to remember."

https://fas.dartmouth.edu/news/2024/11/remembering-computing...


The truth is not as strong as I had claimed. BASIC's expressions kinda resemble Fortran's, probably because that was what was lying around. It seems that an easier version of an existing language is what Kurtz wanted, but Kemeny was more interested in starting from scratch, a view that Kurtz eventually came around to. From Wikipedia (https://en.wikipedia.org/wiki/Dartmouth_BASIC):

> When the topic of a simple language began to be considered seriously, Kemeny immediately suggested writing a new one. Kurtz was more interested in a cut-down version of FORTRAN or ALGOL.[14] But these languages had so many idiosyncrasies that Kurtz came to agree with Kemeny:

> If we had corrected FORTRAN's ugly features, we would not have FORTRAN anymore. I reluctantly had to agree with John that, yes, a new language was needed.[15]


> First let’s accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature, privacy is a bug. Everything is an “algorithm” optimised to exploit.

Suppose that I have discovered a novel algorithm that solves an important basic problem much more efficiently than current techniques do. How do I hide it from the web scrapers that will steal it if I put it on GitHub or elsewhere? Should I just write it up as a paper and be content with citations and minor glory? Or should I capture AI search results today for "write me code that does X", put my new code up under a restrictive license, capture search results a day later, demonstrate that an AI scraper has acquired the algorithm in violation of the license, and seek damages?


Isaac Newton tried to keep his Calculus secret and almost got scooped by Leibniz. IMO trying to hoard knowledge is not a great look; we all (mostly) have a sense that knowledge belongs to humanity as a whole.

I think what you're looking for is patents. I've said it before, but I think patents are the only protection left for innovative software and "the little guy." It always was, really, but it's blindingly apparent today.

Unfortunately, that would be considered heresy on forums like HN, and people will continue to rail against both AI (and whatever harm it's causing) and patents, instead of realizing that one is the only available leverage against the other.


I have a few patents, including one for a novel machine instruction, and I recall the attorney telling me that one cannot patent mathematics, only methods and systems.

That's true, but it generally applies to purely abstract mathematics. If the mathematics is truly abstract, no form of IP anywhere would protect it. That has always been (rightfully, IMO) the realm of scientific publications.

Otherwise it's straightforward to say that the mathematics is being applied to achieve a practical goal via execution on a computer. (You'll see the term "non-transitory computer-readable media" a lot in claims.) You now have a method and system. Case law frequently changes things -- the "Alice" decision in the US, for example, made it much harder to patent things done merely "on a computer" -- but the underlying principle holds.

I'd also guess if your approach makes something faster or cheaper, it should be possible to show it is non-abstract, because resources like time and costs are not abstract quantities.

Standard disclaimer: I'm not a lawyer! I've just worked with patents extensively.


Is there a production compiler out there that doesn't use recursive descent, preferably constructed from combinators? Table-driven parsers seem now to be a "tell" of an old compiler or a hobby project.

Some people appreciate that an LR/LALR parser generator can prove non-ambiguity and linear time parse-ability of a grammar. A couple of examples are the creator of the Oil shell, and one of the guys responsible for Rust.

It does make me wonder though about why grammars have to be so complicated that such high-powered tools are needed. Isn't the gist of LR/LALR that the states of an automaton that can parse CFGs can be serialised to strings, and the set of those strings forms a regular language? Once you have that, many desirable "infinitary" properties of a parsing automaton can be automatically checked in finite time. LR and LALR fall out of this, in some way.


Production compilers must have robust error recovery and great error messages, and those are pretty straightforward in recursive descent, even if ad hoc.

Oh, I was talking much more about how you can first learn how to write a compiler. I wasn't talking about how you write a production industry-strength compiler.

Btw, I mentioned parser combinators: those are basically just a front-end, similar to regular expressions. The implementation underneath can be all kinds of things, e.g. recursive descent, a table, backtracking, or whatever. (Even finite automata, if your combinators are suitably restricted.)
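A minimal sketch of what that front-end can look like (hypothetical names, not any particular library; the implementation underneath here is backtracking recursive descent): a parser is just a function from input text to either None on failure or a (value, remaining_text) pair, and the combinators are a thin algebra over such functions.

```python
def satisfy(pred):
    """Consume one character for which pred holds."""
    def parse(s):
        if s and pred(s[0]):
            return s[0], s[1:]
        return None
    return parse

def or_else(p, q):
    """Choice: try p, and on failure backtrack and try q."""
    def parse(s):
        return p(s) or q(s)
    return parse

def many1(p):
    """One-or-more repetition of p."""
    def parse(s):
        first = p(s)
        if first is None:
            return None
        values, rest = [first[0]], first[1]
        while (nxt := p(rest)) is not None:
            values.append(nxt[0])
            rest = nxt[1]
        return values, rest
    return parse

def fmap(f, p):
    """Apply f to the value of a successful parse."""
    def parse(s):
        r = p(s)
        return None if r is None else (f(r[0]), r[1])
    return parse

# a tiny grammar built from the combinators
number = fmap(lambda ds: int("".join(ds)), many1(satisfy(str.isdigit)))
```

Here `number("42abc")` yields `(42, "abc")`; swapping in a table-driven or automaton-based implementation would leave this combinator surface unchanged.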


I used a small custom parser combinator library to parse Fortran from raw characters (since tokenization is so context-dependent), and it's worked well.

The thing about LR parsers is that, since parsing proceeds bottom-up, you have no idea what larger syntactic structure is being built, so error recovery is ugly, and giving the user a sensible error message is a fool's errand.

In the end, all the hard work in a compiler is in the back-end optimization phases. Put your mental energy there.


I have worked on compilers (mostly) for high-performance computing for over 40 years, writing every part of a production compiler twice or more. Optimization and code generation and register allocation/scheduling are definitely the most fun -- but the hardest work is in parsing and semantics, where "hardest" means it takes the most work to get things right for the language and to deal with user errors in the most graceful and informative manner. This is especially true for badly specified legacy languages like Fortran.

The free market ensures that bridges stay up, because the bridge-makers don't want to get sued by people who have died in bridge collapses.

That is definitely not the free market at play; it's legislation at play.

Engineers (real ones, not software) face consequences when their work falls apart prematurely. Doubly so when it kills someone. They lose their job, their license, and they can never work in the field again.

That's why it's rare for buildings to collapse. But software collapsing is just another Monday. At best the software firm will get fined when they kill someone, but the ICs will never be held responsible.


This only works when the barrier to entry for a lawsuit is low enough, when the law is applied impartially and without corruption, and when sanctions are meaningful enough, potentially company-ending, to be a real deterrent.

The moment you remove one of these factors, the free market becomes dangerous for the people living in it.


I'm going to assume this is Poe's Law at work?

The logical combinators that I know all have definitions in the untyped lambda calculus. Is there a typed variant of logical combinators?

Most of them have simple types and are easy to define in ML or Haskell.

   -- valid Haskell (the traditional one-letter names are lowercased,
   -- since Haskell reserves capitalized identifiers for constructors)
   i :: a -> a                          -- I (identity)
   i x = x

   k :: a -> b -> a                     -- K (constant)
   k x y = x

   w :: (a -> a -> b) -> a -> b         -- W (duplication)
   w f x = f x x

   c :: (a -> b -> c) -> b -> a -> c    -- C (flip)
   c f x y = f y x

   b :: (a -> b) -> (c -> a) -> c -> b  -- B (composition)
   b f g x = f (g x)

   q :: (a -> b) -> (b -> c) -> a -> c  -- Q (reverse composition)
   q f g x = g (f x)
There are, however, combinators that do self-application (crucially used in the definition of the Y combinator) and these do not have simple types.

If you like Leuchtturm, you'll love Quo Vadis Habana notebooks, if you can find them in stock.

The Habanas don't seem to have page numbers, which is one of the things I particularly like about the 1917s.

Enjoy!

"The system worked yesterday, so it should have worked forever."


The second paragraph describes the motivation. I encourage you to read the paper.


This would also mean that we should design new programming languages out of sight of LLMs in case we need to hide code from them.

