You should try skimming to the end, where there are two pages of references. Not to mention the fact that Cardelli is a prominent figure in the study of programming languages.
And I think you'll find that Gilad Bracha is a more informed critic of reliance on static type checking.
That conclusion made me laugh. In the early to mid 90s, I was working on 3D interactive software. The video drivers and chips had advanced so much in one year that all of the lovingly handcrafted assembly I had written over that time was useless. I thought to myself, "what an unstable industry to build a career in". I also made comparisons to accounting, medicine, and law, thinking perhaps I had made a wrong choice.
Turns out that adapting to rapid change is a critical aspect of this career choice. It is also a problem for businesses, which you start to grapple with as you move up the chain of command. The mistake is wistfully hoping that things will slow down enough for you to rest on your hard-won knowledge, as in other fields. There is no competitive advantage in aging technology.
There can be a competitive advantage in aging technology. For instance, suppose you were writing an app that makes heavy use of scientific or ML libraries in Python, but your UI is mainly layout and forms with admin, login, accounts, and so forth. I could easily see huge advantages to keeping it in Python, using Django, with a bit of CSS and JavaScript sparsely applied where needed.
Some of this comes down to personality, too. A lot of people talk about how developers abandoned Ruby and Rails once they were no longer the "new hotness". Plenty of us (who like Rails) came to it somewhat reluctantly, only after the benefits were very clear and the initial chaos had started to subside. We, the reluctant ones, just tend to be a bit quieter about things. I think we're reluctant because we see greater danger and risk in a constantly evolving set of technologies, and we don't like spending a lot of time figuring out how to get a drop-down list to populate, not when time is limited and there's value to be added on the back end.
I've been at this for a while, and I'm getting more and more OK with sticking with an aging technology, even when I know it is becoming obsolete. Right now, it's pretty clear to me that web-based UIs in the future will be more asynchronous and elaborate, but there will be a lot of false starts and dead ends as we get there. Remember Spring MVC, Spring DI, iBatis, Hibernate, Tapestry, Pico, Tiles, Struts, Struts 2...? Yes, the old request-response servlet with JDBC was unlikely to remain a choice for new development, but honestly, I think it might have been a good idea to stick with it for a while and stay out of that mess. Yeah, you'll need to change and adapt, but that doesn't mean you need to jump into the swirling mess.
...unless you do! There are, of course, remarkable opportunities when a new technology hits, and if you wait for it to settle down, you may well find that others have too great a lead. You just really need to figure out if that's you. Because otherwise, you may find that instead of improving the accuracy and speed of some critical calculations your users need to improve their supply chains, you spent all your time figuring out how to asynchronously update a drop-down list depending on where they clicked on a map. And a round trip to the server and an extra second to load that drop-down list might not have been all that big a deal compared to the actual business use of your app.
Although the OP seems to be asking for "engineering" books rather than "programming" books, I'm going to second this. Benjamin Pierce's "Types and Programming Languages" will help you get down to what programming is really about. If you are not familiar with lambda calculus and its notation, it may be rough going at first. But lambda calculus is VERY simple, and Pierce takes you through it. The book progresses methodically to the concepts found in most common programming languages.
This is one of the few truly language-agnostic books on programming. SICP is close, but its relevance is at times constrained by the particulars of one language (Scheme).
As a long-time advocate of functional programming, I was once asked after a talk what the "best" FP language to learn is. Since I'm known as a Haskell/Scala/ML person, I surprised much of the audience by answering that I thought F# was the best entry point. What I did not say, however, was that I think that Windows is a challenging environment for anyone who is used to Linux/Mac -- and this includes the "open source" world in general. I think that Microsoft is a leader in language design, but their platform relevance had been slipping for many years. I believe it may be on the upswing (with Xamarin, etc.). I would encourage you to stick with F# as you will be further along in not only understanding FP, but other modern programming languages, including ones that haven't been invented yet.
>I would encourage you to stick with F# as you will be further along in not only understanding FP, but other modern programming languages, including ones that haven't been invented yet.
Interesting. Why do you say "including languages that haven't been invented yet"? Does F# have features that are modern and good, not common in other current languages, but likely to be there in future ones?
> I would encourage you to stick with F# as you will be further along in not only understanding FP, but other modern programming languages
Thank you for your kind words!
As I mentioned elsewhere, I have also tried Haskell, but I never managed to move past some basic examples. With F#, at least for now, it seems like I can more easily read/understand even more complex stuff (and one day maybe even write it by myself).
To me, Windows has eclipsed both macOS and Linux as the best dev environment. It has caught up on the shell front, and its whole ecosystem of tools completely blows away what exists on macOS and Linux.
Windows is also a lot better than the other two in keyboard shortcut support.
He keeps warning not to write thousands of lines before talking to customers (as you often hear from biased survivors). But it sounds like they didn't find PM fit by talking to customers. Rather, they identified -- perhaps out of desperation and exhaustion -- the 0.1% of the code they had written that they thought might be useful and threw it out there. The takeaway really seems to be to write as much code as you can, if one-tenth of one percent of it is going to be the golden nugget. Also, I suspect that they were able to use the rest of that code, and the processes around it, to capitalize on their good fortune.
Speaking from personal experience ... At the time of signing you can see the upside (the offer), but you can't know the downside, which can be quite significant. It's a poor trade-off. Avoid these unless you get some kind of severance for the period of the agreement. Mere employment as "consideration" is a bad deal.
Here is an essay I wrote following my son's first technical talk, weeks after his seventh birthday. It references what I believe works in teaching children to program.
Not at all high-brow, but I revisit the in-the-trenches case study of "Scaling Pinterest" on InfoQ from time to time because I find their fighting through the pain inspirational for my own scaling troubles.