Femtolisp: A lightweight, robust, scheme-like Lisp implementation (github.com/jeffbezanson)
146 points by lelf on Jan 20, 2020 | hide | past | favorite | 60 comments


The parser of the Julia compiler is written in this dialect. It just goes to show how much power you get in a small package with Lisps.


You mean, how much you can develop during a PhD on scientific computing with a known technical computing syntax and paradigm on top of a modular compiler infrastructure with an enthusiastic community?

(Not to detract from the Julia devs, but there were plenty of Lisp-based contenders ready for scientific computing before, so it's useful to ask: why Julia, why now, etc.)

Edit: this was kinda snarky, but I actually meant it as a compliment to the excellent strategic choices the Julia devs made compared to other scientific computing attempts.


I'll go and say that, outside of the great work from the creators in the development and marketing of the language, the main difference between those Lisp-based languages and Julia in terms of attracting people is the syntax. Julia focuses on being approachable to Python, R, Fortran and Matlab programmers (since those had already "won" by the time Julia was created), and it wouldn't have been able to convince any significant number of them to even check out the language without looking like Python, R, Fortran and Matlab (even though S-exprs are indeed more elegant and powerful). Julia's base syntax is closer to how people write math, so it's more familiar even to people who are not dedicated programmers, who were part of Julia's target audience.

And when people discover the lispiness in Julia that sits a thin layer below the surface syntax (such as the metaprogramming system and multiple dispatch), they have at least already given the language a fairer chance. Other languages that borrowed from Lisp, going with s-exprs to different degrees, like Scala and Clojure (plus R), also had success in the data science/ML area. Hopefully Nim also gets a foothold, and possibly Racket with optional infix syntax.


> Julia base syntax is closer to how people write math

As someone who is a Julia fan, but has a background in mathematics rather than computer science, I don't agree. Julia syntax resembles mathematics only on a very superficial typographic level that has little actual significance when reading or writing code.

Mathematical notation, unlike code, is fundamentally two-dimensional, with vertically written fractions, subscripts and superscripts, etc. Simple mathematical expressions are easy to read in both Julia and S-expression notation. More complicated expressions are hard to read in Julia (I sometimes have to write them out by hand on paper in order to understand them), while the same expressions written in properly indented S-expression syntax with a generous use of newlines are easy to read. They are somehow structurally more similar to standard 2-dimensional mathematical notation, despite the superficial infix vs prefix differences.

What attracted me in the first place to Common Lisp (my first programming language) as a person with no technical skills or experience was the friendly and intuitive syntax. It was the first programming language I saw that didn't look visually intimidating. These days I use Julia, but I use it for the awesome features, libraries and community, not for the syntax. I would love Julia even more if it used S-expressions.


The superficial typographic level is the difference; I was mostly talking about first impressions. Though you probably shouldn't let your expressions get to the point where they're hard to read in Julia (for example, abusing broadcast to create overly complex one-liners). At that point you should probably break them into multiple variables and functions.

And as Julia gets more mature and popular, I wouldn't be surprised if more people notice the strength of its ecosystem and start creating new languages that target it, just like Elixir did with Erlang, or the many JVM languages (and something more than just reader macros like LispSyntax.jl: full editor support and exclusive features that make it unique compared to Julia). A JuliaLisp could have amazing interop with the main language and serve as an alternative for those who prefer s-exprs.


With MATLAB/Octave in particular, I think the value isn't familiar syntax or "how people write math" (because even that is quite varied). It's more that the REPL and language are fast enough to make it a CLI calculator that allows for rapid prototyping and development/exploration of mathematical ideas.

It's one thing that Julia currently does "ok" with. The syntax is fine (albeit slightly more verbose and often more abstract), but the iteration speed gets fairly nasty when you're not doing big computations or need to iterate rapidly, since the JIT can take a while to warm up. There are things you can do, but it's still a lot slower for day-to-day work for a lot of MATLAB users.

At least in my opinion. I tried to switch to Julia from GNU Octave (to which I had moved from MATLAB proper) for a lot of hobby controls/DSP work, and it was just a pain to iterate and explore ideas with. Same with Python, imo.


The trick to avoiding this problem is to keep Julia "warm" with a long running process. There are a few ways to do this - Jupyter, the Juno IDE, and vim/neovim with a terminal pane are all popular - but the most straightforward is to switch from `julia file.jl` to

    julia

    julia> include("file.jl") # when you want to reload, ctrl-c then arrow-up and enter to include again

This isn't so dissimilar to MATLAB in some respects: you don't restart MATLAB every time you want to re-run your code.


Better yet, use Revise.jl and do

    julia> includet("file.jl") # Notice the final 't' in 'includet'

This will cause file.jl to get automatically reloaded whenever it is changed.


I've been using Julia for 4 years and had never heard of includet. Thanks!


You're welcome! Please note the "use Revise.jl" part: the includet function is not in the base language; you need the Revise.jl package.


Just about every innovative idea in Lisp has been adopted by other languages now. Except for s-expr syntax. Draw your own conclusions.


The conclusion to draw is that, if a language has s-expression syntax, it is considered a lisp.


And people will do all the hard work of building a new language from scratch but conspicuously leave that feature out, instead of using one of the many very good existing Lisp or Scheme implementations. Because the reality is that most programmers don't want to code in s-exprs, despite some significant benefits.

This is a very hard truth for the more Aspergery wing of Lisp users to accept.


Clojure was a new language from scratch and uses s-expression-like syntax.

Generally, s-expressions are not a particularly popular or needed tool. We've seen similar systems for representing code as data, though; XML was used to represent code, too.


I think Dylan would also have had a chance if Apple had kept it for the Newton, instead of turning its use into internal political wars.

Plenty of inside info on these comment threads.

https://hackertimes.com/item?id=21828622

https://hackertimes.com/item?id=8224469


>the main difference between those Lisp based languages and Julia in terms of attracting people is the syntax

Yes, and this is practically the only thing that I find wrong with Julia, besides the lack of the interactivity of CL.

It's like populism -- an inferior syntax choice taken just to attract the Pythonistas.


I agree with you that s-exprs are a more powerful syntax (more extensible, with fewer rules), but the social aspect of syntax is just as important, since programming languages target humans, who are just not rational beings. Over the decades many languages "won" against the Lisp family, including a much less mature Python, partially for aesthetic reasons. Even math could have used a fully prefix syntax with fewer rules (no need to deal with operator precedence), but people still kept the infix syntax. And for Julia, as a math- and science-focused language, having a syntax that better emulates the notation of its domain is objectively a positive point.

And secondly, Lisp programmers are also human, and more power means more responsibility. When you have a language for creating any language, people are tempted to do it, but a language is also a social construct, not just a technical one (a language only has value if many people speak it, which means documentation, community, support...), and making it so easy to roll your own that you end up ignoring that aspect is the Lisp Curse. Modern languages actually see value in limiting that power, by either removing it completely (like Go) or hiding it/making deviations explicit (like Julia's surface syntax and the macro identifier '@', which alerts users that they are deviating from the "first class" language that everyone is supposed to know).


I guess you mean you prefer prefix rather than infix notation. Do you have any specific technical reasons for this?


I dislike operator precedence, and I also think that putting all the plus signs in between the addends is a waste of time.

I like being able to do this:

  (+ 1 2 3 4 5)


Great news: you never need to use an infix operator in Julia. All infix operators are also regular functions, so

    1 + 2 + 3 + 4 + 5 
is actually the same as

    +(1, 2, 3, 4, 5)
though there are still commas between the arguments.


Yeah, those commas are going to drive me crazy. LISP has ruined me.


I mostly got used to them, but I do enjoy playing around with https://github.com/swadey/LispSyntax.jl for more directly lispy syntax in Julia.


>REPL mode Lisp Mode initialized.

>Lisp Mode

I wish every language had Lisp Mode, haha


Agreed. Not all languages are powerful enough to offer a lisp mode that'd be sufficiently lispy though. The whole point of LispSyntax.jl is that it's lisp syntax while still respecting julia semantics, which are basically just lisp semantics.

Of course, LispSyntax.jl still has some glaring omissions that someone should really get around to implementing.


If Julia succeeds in making the Python community take JIT compilers seriously, that is already a big victory.


Is there a Lisp I can download, for Windows, that provides a linear algebra problem solving environment* out of the box or from a package manager? Ideally linked to a reasonably optimized BLAS implementation.

* this is intentionally under-specified, but to me it means some nontrivial subset of matlab/numpy functionality


Python + Hy + Numpy

In all seriousness that's probably the quickest way, if by 'lisp' you mean 's-expressions'.

For a real lisp, maybe these Numpy bindings[0] for Chez Scheme? Although the py-* names look a bit awkward. You could easily rebind those, but maybe that's not "out of the box" enough.

[0] https://github.com/guenchi/NumPy
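To be concrete about what "some nontrivial subset of matlab/numpy functionality" means in practice, here's a minimal sketch of the kind of linear-algebra session those bindings wrap, in plain NumPy (the matrix and vector values are invented for illustration):

```python
# Minimal sketch of the linear-algebra layer being discussed, in plain NumPy.
# The specific matrix and right-hand side here are invented for illustration.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)     # solves A @ x == b, dispatching to BLAS/LAPACK
assert np.allclose(A @ x, b)  # verify the solution
```

Whatever Lisp front end you pick (Hy, the Chez bindings, etc.) ultimately just needs to expose calls like `np.linalg.solve` with s-expr syntax.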


I’m sure we can rig a way to call from Julia’s embedded lisp mode back into Julia for you. Then you’ll be all set :D


https://github.com/JuliaLang/julia/pull/17108 only implements the REPL part but it’s a start.

“Downloadable Windows LA REPL” was a criterion Julia met within ~6-8 months of public release, and it was one of the reasons I got involved, even though I only ever use(d) Windows at work (...mostly for scientific instrumentation which shipped with, and only supported, Windows).

This is slowly changing, but IMO Windows support is still table stakes for a scientific computing environment. Seeing Windows support meant both that I could use Julia where I most needed it, and signaled that it had potential for real traction (not abandoning >70% of users out of the gate). I think Windows support is also one of the reasons Python/NumPy got to where it is: it wasn't always perfect, but they at least cared about Windows, more so than most other scripting languages in the mid-2000s. Ditto CMake.


>Lisp based contenders ready for scientific computing

Any examples?


cl-ana, Common Lisp data analysis library with emphasis on modularity and conceptual clarity - https://github.com/ghollisjr/cl-ana

numcl, a numpy clone in Common Lisp - https://github.com/numcl/numcl

Clojure kixi-stats- https://cljdoc.org/d/kixi/stats/0.5.0/doc/readme

and the book Clojure for Data Science - http://clojuredatascience.com/

Clojure's Incanter has been mentioned already - http://incanter.org/

OWL, the OCaml Scientific Computing project - https://ocaml.xyz/

And probably not quite what you're looking for, but there is a Racket data frame structure which provides some data science like manipulation capabilities - https://alex-hhh.github.io/2018/08/racket-data-frame.html

Appreciate that some of the above are Schemes (and OWL is OCaml) rather than Lisps, but they may still be interesting.


>Any examples?

Common Lisp has been used in scientific computing for decades, so in some sense it's "ready". Also, see CLASP: Lisp at the state of the art of computational chemistry.


Lush is the one I had looked at long ago,

http://lush.sourceforge.net/


Incanter


A femtolisp REPL is available from Julia by running:

    julia --lisp


Which is probably one of the reasons why Julia takes ages to do a simple plot.


No, that's not why. You think there's considerable time spent in parsing? It's all code generation that takes time, which is the same thing that makes Julia powerful.


That’s LLVM as a JIT compiler. In pure bytecode-interpreter mode, Julia is considerably more responsive.


There's also compilation that happens in Julia itself, like type inference. Which is to say, the time isn't just spent in LLVM.


Oh, goody, another tiny LISP (I've been keeping track of them at http://taoofmac.com/space/LISP, will add this one ASAP).


You should quote the README from this one:

"""

Almost everybody has their own lisp implementation. Some programmers' dogs and cats probably have their own lisp implementations as well. This is great, but too often I see people omit some of the obscure but critical features that make lisp uniquely wonderful. These include read macros like #. and backreferences, gensyms, and properly escaped symbol names. If you're going to waste everybody's time with yet another lisp, at least do it right damnit.

"""


Great idea, done!


Not that it matters much, but femtolisp is not a new project. The bulk of the code has been around for at least a decade.


Sure, but I hadn't come across it before (or if I did, I forgot), and I like to keep track of such things.


PicoLisp[0] is an odd, opinionated candidate for you. It's got backreferences [1]. Being purely interpreted, code generation is done a little bit differently [2].

[0] https://picolisp.com

[1] https://picolisp.com/wiki/?AtMark

[2] https://picolisp.com/wiki/?macros


Neat page, def. bookmarking this!

FYI the link to microscheme is broken, this works: https://ryansuchocki.github.io/microscheme/

Judging by the prominent "microscheme.org" at the top of that page, the link is supposed to work, but anyone's guess when it might be fixed...


Thanks, fixing it. Caching might mean it takes a bit to show up.


you may be interested in janet as well


Just found out about it. Added, thanks!


I like the "tiny" version that has a minimal lisp written in C and then builds upon that with this bit of magic:

https://github.com/JeffBezanson/femtolisp/blob/master/tiny/s...



"Bytecode is first-class, can be printed and read, and is human readable (the representation is a string of normal low-ASCII characters)."

Anyone know where I can read more about the VM and its human-readable syntax? I'm not immediately spotting it in the repo.


It's not really documented, but human readable, may also not really mean what you think it means. For example, here's a snippet from the printed bytecode of the default system image:

      #fn("8000r1e0~|[316>0\x7fM~|[i2i343;];" [closure?])] find-in-f)
      #fn(";000r2c0~|}q3c1tg66I0c2e3c4c5e6g63132c73241;c8;" 
    [#fn("8000r0c0c1~\x7fq2i2322^;" [#fn(for-each)
      #fn("8000r1~M|\x7f_43;" [])]) 
    #fn("6000r1|F16B02|Mc0<16802|\x84c1<680e2|41;c3|41;" [thrown-value

It's human readable in the sense that all the operations have single-character ASCII opcodes, strings are embedded as-is, and there are no raw pointers, etc., so you won't get mojibake, but it's not human readable in the sense of, say, a pretty-printed IR.


(edit: clarity)

i couldn't really find any docs about it either, but compiler.scm is pretty readable and has an instruction listing – their arguments aren't documented explicitly, but you can infer most of it from the code for `compiler::disassemble`, which is quite readable too.

the instructions remind me of CPython, pretty standard stack-machine stuff: slots for locals (`loadv 0` where 0 is an index to func_values), loading globals by name (`loadg 0`, same as with values but the value is a symbol to be looked up), relative jumps, things like that.

for no reason in particular i spent an evening[0] reverse-engineering enough of the code to be able to parse/display bytecode strings :) here's the python code [1] if you want to try it, though it'd probably be easier to just install femtolisp and use its `disassemble`...

as others have mentioned, the serialized form is kind-of readable, and with some practice you could probably even read it directly! e.g. "c0" is `loadv 0`, ";" is `ret`, functions always have a header like `r3` indicating 3 arguments, etc. reading it like that would be very tedious but doable in a pinch, and at the very least you can see patterns like the ones i mentioned

i've only looked at the bytecode and vm, but if you were more serious about using the language, the `compiler` module seems to have a nice API for creating bytecode. (they just directly expose all the functions the compiler uses!)

---

[0] would've gone much faster if i'd known they shift each byte by 48 when serializing...

[1] https://gist.github.com/lubieowoce/148406b1dd61d91b1584339ff...
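To make the "pretty standard stack-machine stuff" comparison concrete, here's a toy dispatch loop in Python. The opcode names echo the ones mentioned above (`loadv`, `loadg`, `ret`), but the encoding and the `add` op are invented for illustration; this is not femtolisp's actual bytecode format:

```python
# Toy stack machine illustrating the pattern described above. The opcode
# names mirror the ones mentioned (loadv, loadg, ret), but this encoding is
# invented for illustration; it is NOT femtolisp's real bytecode format.
def run(code, values, globals_env):
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        if op == "loadv":       # push a constant by index (a func_values slot)
            stack.append(values[arg])
        elif op == "loadg":     # push a global, looked up by symbol name
            stack.append(globals_env[arg])
        elif op == "add":       # invented op: add the top two stack values
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "ret":       # return the top of the stack
            return stack.pop()

# (+ values[0] x)  ~  loadv 0; loadg x; add; ret
result = run([("loadv", 0), ("loadg", "x"), ("add", None), ("ret", None)],
             values=[40], globals_env={"x": 2})
print(result)  # → 42
```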


It looks like it's done here in the print routine: https://github.com/JeffBezanson/femtolisp/blob/master/print....

It adds 48 to each byte being printed, which is ASCII '0'.
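So the printable serialization boils down to a fixed offset per byte. A sketch of just that offset step (not a full femtolisp bytecode reader/writer):

```python
# Sketch of the offset-by-48 step described above (48 == ord('0')).
# This is only the byte<->character mapping, not a full bytecode parser.
def encode(raw):
    """Raw bytecode bytes -> printable string, each byte shifted up by 48."""
    return "".join(chr(b + 48) for b in raw)

def decode(s):
    """Printable string -> raw bytecode bytes, undoing the shift."""
    return bytes(ord(c) - 48 for c in s)

raw = bytes([0, 1, 2, 26])
s = encode(raw)
print(s)                # → 012J  (26 + 48 == 74 == ord('J'))
assert decode(s) == raw
```

This is why small opcode values come out as ordinary digits and letters rather than control characters.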


The Readme for this repo is very well written. It makes me think that the rest of the project is worth checking out.


the "rest of the project", at this juncture in programming history anyways, turned out to be "the Julia programming Language".


Google seems to indicate that femtolisp is used for some portion of Julia's implementation, and didn't evolve into Julia itself?


Yes, but the author of femtolisp is one of the co-creators of and single most prolific contributor to Julia. Spiritually, that is the successor project.


I came to read about the lisp implementation, but I stayed for the "cutef8" pun among his other repos.



