You mean, how much you can develop during a PhD on scientific computing with a known technical computing syntax and paradigm on top of a modular compiler infrastructure with an enthusiastic community?
(Not to detract from Julia devs, but there were plenty of Lisp-based contenders ready for scientific computing before, so it's useful to ask: why Julia, why now?)
*edit: this was kinda snarky but I actually meant a compliment on the excellent strategic choices Julia devs made as compared to other scientific computing attempts
I'll go ahead and say that, outside of the great work from the creators in developing and marketing the language, the main difference between those Lisp-based languages and Julia in terms of attracting people is the syntax. Julia focuses on being approachable to Python, R, Fortran and Matlab programmers (since those had already "won" by the time Julia was created), and it wouldn't be able to convince any significant number of them to even check out the language without looking like Python, R, Fortran and Matlab (even if S-exprs are indeed more elegant and powerful). Julia's base syntax is closer to how people write math, so it's more familiar even to people who are not dedicated programmers, who were part of Julia's target audience.
And when people discover the lispiness in Julia, which sits a thin layer below the surface syntax (such as the metaprogramming system and multiple dispatch), they have at least already given the language a fairer chance. Other languages that borrowed from Lisp but, to different degrees, didn't go with s-exprs, like Scala and Clojure (plus R), also had success in the data science/ML area. Hopefully Nim also gets a foothold, and possibly Racket with its optional infix syntax.
> Julia base syntax is closer to how people write math
As someone who is a Julia fan but has a background in mathematics rather than computer science, I don't agree. Julia syntax resembles mathematics only on a very superficial typographic level that has little actual significance when reading or writing code.
Mathematical notation, unlike code, is fundamentally two-dimensional, with vertically written fractions, subscripts and superscripts, etc. Simple mathematical expressions are easy to read in both Julia and S-expression notation. More complicated expressions are hard to read in Julia (I sometimes have to write them out by hand on paper in order to understand them), while the same expressions written in properly indented S-expression syntax with a generous use of newlines are easy to read. They are somehow structurally more similar to standard 2-dimensional mathematical notation, despite the superficial infix vs prefix differences.
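For example, compare the quadratic-root formula both ways (my own illustration, using Julia and a Scheme-style s-expression):

```
# Julia, one line, precedence rules in play:
x = (-b + sqrt(b^2 - 4a*c)) / 2a

;; Indented s-expression: the nesting itself shows the structure,
;; a bit like the vertical layout of written math:
(define x
  (/ (+ (- b)
        (sqrt (- (* b b)
                 (* 4 a c))))
     (* 2 a)))
```

The s-expression is longer, but each level of nesting sits on its own visual level, which is the structural similarity to two-dimensional notation I mean.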
What attracted me in the first place to Common Lisp (my first programming language) as a person with no technical skills or experience was the friendly and intuitive syntax. It was the first programming language I saw that didn't look visually intimidating. These days I use Julia, but I use it for the awesome features, libraries and community, not for the syntax. I would love Julia even more if it used S-expressions.
The superficial typographic level is the difference; I was mostly talking about first impressions. Though you probably shouldn't allow your expressions to get to the point where they're hard to read in Julia (for example, abusing broadcast to create overly complex one-liners). At that point you should probably break them into multiple variables and functions.
And as Julia gets more mature and popular, I wouldn't be surprised if more people notice the strength of its ecosystem and start creating new languages that target it, just like Elixir did with Erlang, or all the JVM languages (and something more than just reader macros like LispSyntax.jl: full editor support and exclusive features that make it unique compared to Julia). A JuliaLisp could have amazing interop with the main language and serve as an alternative for those who prefer s-exprs.
With MATLAB/Octave in particular, I think the value isn't familiar syntax or "how people write math" (because even that is quite varied). It's more that the REPL and language are fast enough to make it a CLI calculator that allows for rapid prototyping and development/exploration of mathematical ideas.
It's one thing that Julia currently does only "ok" with: the syntax is fine (albeit slightly more verbose and often more abstract), but the iteration speed gets fairly nasty when you're not doing big computations or need to iterate rapidly, since the JIT can take a while to warm up. There are things you can do, but it's still a lot slower for day-to-day work for a lot of MATLAB users.
At least in my opinion. I tried to switch to Julia from GNU Octave (to which I had moved from MATLAB proper) for a lot of hobby controls/DSP work, and it was just a pain to iterate and explore ideas with. Same with Python, imo.
The trick to avoiding this problem is to keep Julia "warm" with a long running process. There are a few ways to do this - Jupyter, the Juno IDE, and vim/neovim with a terminal pane are all popular - but the most straightforward is to switch from `julia file.jl` to
```
julia
julia> include("file.jl") # when you want to reload, ctrl-c then arrow-up and enter to include again
```
This isn't so dissimilar to MATLAB in some respects: you don't restart MATLAB every time you want to re-run your code.
And people will do all the hard work of building a new language from scratch but conspicuously leave that feature out instead of using one of the many very good existing lisp or scheme implementations. Because the reality is that most programmers don’t want to code in s-exprs despite some significant benefits.
This is a very hard truth for the more dogmatic wing of lisp users to accept.
Clojure was a new language from scratch and uses s-expression-like syntax.
Generally, s-expressions are not a particularly popular or needed tool. We've seen similar systems for representing code as data, though - for example, XML was also used to represent code.
I agree with you that s-exprs are a more powerful syntax (more extensible, with fewer rules), but the social aspect of syntax is just as important, since programming languages target humans, who are just not rational beings. Over the decades many languages "won" against the Lisp family, including a much less mature Python, partially for aesthetic reasons. Even math could have used a fully prefix syntax with fewer rules (no need to deal with operator precedence), but people still kept infix. And for Julia, as a math- and science-focused language, having a syntax that better emulates the notation of its domain is objectively a positive point.
And secondly, Lisp programmers are also human, and more power means more responsibility. When you have a language for creating any language, people are tempted to do it. But a language is also a social construct, not just a technical one (a language only has value if many people speak it, which means documentation, community, support...), and making it so easy to roll your own that you end up ignoring that aspect is the Lisp Curse. Modern languages actually see value in limiting that power, either by removing it completely (like Go) or by hiding it and making deviations explicit (like Julia's surface syntax and the macro identifier '@', which alerts users that they are deviating from the "first class" language everyone is supposed to know).
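As a toy illustration of the point that fully prefix syntax needs no operator-precedence rules (my own sketch, not something from the thread): a prefix evaluator is a few lines, and nesting alone determines evaluation order.

```python
# Toy evaluator for fully parenthesized prefix arithmetic.
# There is no precedence table anywhere: the nesting of the
# expression tree is the only thing that orders operations.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(expr):
    """expr is an atom or a nested list like ["+", 1, ["*", 2, 3]]."""
    if not isinstance(expr, list):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    result = vals[0]
    for v in vals[1:]:
        result = OPS[op](result, v)
    return result

# The infix expression 1 + 2 * 3 must encode its precedence
# explicitly in the tree shape:
print(evaluate(["+", 1, ["*", 2, 3]]))  # 7
```

With infix syntax, by contrast, the parser needs a precedence (and associativity) table just to recover that same tree.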
Agreed. Not all languages are powerful enough to offer a lisp mode that'd be sufficiently lispy though. The whole point of LispSyntax.jl is that it's lisp syntax while still respecting julia semantics, which are basically just lisp semantics.
Of course, LispSyntax.jl still has some glaring omissions that someone should really get around to implementing.
Is there a Lisp I can download for Windows that provides a linear algebra problem-solving environment* out of the box or from a package manager? Ideally linked against a reasonably optimized BLAS implementation.
* this is intentionally under-specified, but to me it means some nontrivial subset of matlab/numpy functionality
In all seriousness that's probably the quickest way, if by 'lisp' you mean 's-expressions'.
For a real lisp, maybe these Numpy bindings[0] for Chez Scheme? Although the py-* names look a bit awkward. You could easily rebind those, but maybe that's not "out of the box" enough.
“Downloadable Windows LA REPL” was a criterion Julia met within ~6-8 months of public release, and it was one of the reasons I got involved, even though I only ever use(d) windows at work (...mostly for scientific instrumentation which shipped with and only supported windows).
This is slowly changing, but IMO windows support is still table stakes for a scientific computing environment, and seeing windows support meant both that I could use it where I most needed it, and signaled that Julia had potential for real traction (not abandoning >70% of users out of the gate). I think windows support is also one of the reasons Python/NumPy got to be where it is — it wasn’t always perfect, but they at least cared about windows, more so than most other scripting languages in the mid-2000s. Ditto CMake.
Common Lisp has been used in scientific computing for decades, so in some sense it's "ready". Also, see CLASP: Lisp at the state of the art of computational chemistry.
No, that's not why. You think there's considerable time spent in parsing? It's all code generation that takes time, which is the same thing that makes Julia powerful.
Almost everybody has their own lisp implementation. Some programmers' dogs and cats probably have their own lisp implementations as well. This is great, but too often I see people omit some of the obscure but critical features that make lisp uniquely wonderful. These include read macros like #. and backreferences, gensyms, and properly escaped symbol names. If you're going to waste everybody's time with yet another lisp, at least do it right damnit.
PicoLisp[0] is an odd, opinionated candidate for you. It's got backreferences [1]. Being purely interpreted, code generation is done a little bit differently [2].
It's not really documented, and "human readable" may also not mean what you think it means. For example, here's a snippet from the printed bytecode of the default system image:
It's human readable in the sense that all the operations have single-character ascii opcodes, strings are embedded as-is, there are no raw pointers, etc., so you won't get mojibake - but it's not human readable in the sense of, say, a pretty-printed IR.
i couldn't really find any docs about it either, but compiler.scm is pretty readable and has an instruction listing – their arguments aren't documented explicitly, but you can infer most of it from the code for `compiler::disassemble`, which is quite readable too.
the instructions remind me of CPython, pretty standard stack-machine stuff: slots for locals (`loadv 0` where 0 is an index to func_values), loading globals by name (`loadg 0`, same as with values but the value is a symbol to be looked up), relative jumps, things like that.
for no reason in particular i spent an evening[0] reverse-engineering enough of the code to be able to parse/display bytecode strings :) here's the python code [1] if you want to try it, though it'd probably be easier to just install femtolisp and use its `disassemble`...
as others have mentioned, the serialized form is kind-of readable, and with some practice you could probably even read it directly! e.g. "c0" is `loadv 0`, ";" is `ret`, functions always have a header like `r3` indicating 3 arguments, etc. reading it like that would be very tedious but doable in a pinch, and at the very least you can see patterns like the ones i mentioned
i've only looked at the bytecode and vm, but if you were more serious about using the language, the `compiler` module seems to have a nice API for creating bytecode. (they just directly expose all the functions the compiler uses!)
---
[0] would've gone much faster if i'd known they shift each byte by 48 when serializing...
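putting those observations together, a toy decoder is only a few lines. (this is my own illustration: the opcode table below contains only the mappings mentioned above, "c" → `loadv` and ";" → `ret`, plus the shift-by-48 operand encoding - the real femtolisp table is much larger and lives in its compiler source.)

```python
# Toy decoder for a femtolisp-style serialized bytecode string.
# Assumptions (from the comment above, not the real spec):
#   - operand bytes are shifted by 48 when serialized
#   - "c" is loadv (takes one operand), ";" is ret (no operand)
OPS_WITH_ARG = {"c": "loadv"}
OPS_NO_ARG = {";": "ret"}

def decode(s):
    out, i = [], 0
    while i < len(s):
        ch = s[i]
        if ch in OPS_WITH_ARG:
            # next byte is the operand, un-shift it by 48
            out.append((OPS_WITH_ARG[ch], ord(s[i + 1]) - 48))
            i += 2
        elif ch in OPS_NO_ARG:
            out.append((OPS_NO_ARG[ch],))
            i += 1
        else:
            # anything not in our tiny table is left undecoded
            out.append(("unknown", ch))
            i += 1
    return out

print(decode("c0;"))  # [('loadv', 0), ('ret',)]
```

the real thing of course needs the full instruction listing from compiler.scm, but the overall shape of the loop is about this simple.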
Yes, but the author of femtolisp is one of the co-creators of and single most prolific contributor to Julia. Spiritually, that is the successor project.