Hacker News | alethic's comments

Recently, some 9front developers have picked up femtolisp, and are hacking it into something for their own use. https://sr.ht/~ft/StreetLISP/

I believe its adoption was motivated by needing to write/generate an OTF parser.


Yes, I have had the same experience with the specification. It really is quite difficult to follow :c

Their SpecTec system is fancy and neat, but I don't think auto-generated specifications produce something worth reading. Perhaps in the future, when there's less churn, there might be a hand-written specification? In the meantime I've needed to jump into their Discord to ask clarifying questions about the high-level stuff. Once you understand that, plus the grammar conventions and the like, the specification becomes much more readable, though still not great.

Certainly nothing like an RFC. But maybe I have too high standards...


(It doesn't help that the syntax is *weird*. You've got your choice of an S-expression Scheme syntax or a stack-oriented ML syntax, *and* you can use both together. And there's at least one undocumented de facto syntax floating around AFAIK, though I believe the standard merged support for the main features it was used for, so hopefully test suites and the like will switch away from it at some point.)


No one else has tried implementing the RCS standard.

There just aren't any open-source Android libraries for RCS out there, much less anything in AOSP.

https://github.com/search?q=rcs+android&type=repositories


They are similar, but effect handlers are more powerful and more amenable to typing.

https://lobste.rs/s/q8lz7a/what_s_condition_system_why_do_yo...


The checked exceptions analogy is a good one. Thinking of effect handlers as resumable checked exceptions with some syntactic sugar is very accurate. For someone with a Haskell background, thinking about them as "dependency injection" is also helpful (and these notions are equivalent!) but for the Java heads out there, yeah, resumable checked exceptions provides a really good mental model for what effect handlers are doing with the call stack in the general case.
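
To make the analogy concrete, here's a toy sketch in Python (hypothetical names, not any real library) of a "resumable checked exception": `perform` acts like `throw`, except the nearest handler's return value flows back to the perform site and execution continues.

```python
# Toy model of effect handlers as resumable checked exceptions.
# `perform` is like `throw`, but the handler's return value is the
# "resume" value, so the computation keeps going afterward.
from contextlib import contextmanager

_handlers = []  # dynamically scoped handler stack, innermost last

def perform(effect, payload=None):
    for frame in reversed(_handlers):
        if effect in frame:
            return frame[effect](payload)  # handler result = resume value
    raise RuntimeError(f"unhandled effect: {effect}")

@contextmanager
def handle(**frame):
    _handlers.append(frame)
    try:
        yield
    finally:
        _handlers.pop()

def greet():
    # Like raising a checked exception, but we get an answer back.
    name = perform("ask")
    return f"hello, {name}"

# Same code, two different handlers: the "dependency injection" reading.
with handle(ask=lambda _: "world"):
    assert greet() == "hello, world"
with handle(ask=lambda _: "tests"):
    assert greet() == "hello, tests"
```

This only models the single-resume case (each handler returns exactly once); the fancier behaviors come up further down the thread.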


What’s the difference between a resumable checked exception and a function call?


A function call always returns, and to a single caller, whereas effects can choose not to "return" at all, resume multiple times, etc.


Right, though the former is just an exception. So what general effect systems provide above and beyond what we already have in most languages is "multiply-resumable" checked exceptions (also known as multi-shot continuations and often provided by "delimited continuations").

At the time I developed my Haskell effect system Bluefin there was a conventional wisdom that "you can't implement coroutines without delimited continuations". That's not true: you can implement coroutines simply as function calls, and that's what Bluefin does.

(The story is not quite that simple, because in order for coroutines to communicate you need to be able to pass control between threads with their own stacks, but you still don't need multi-shot continuations.)
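
A sketch of the "coroutines as plain function calls" idea (my own analogy, not Bluefin's actual API): instead of suspending with a continuation, the producer takes the consumer as an ordinary argument, so every "yield" is just a function call that returns normally.

```python
# A coroutine-style producer implemented with nothing but function calls:
# "yielding" a value means calling the consumer, and control comes back
# to the producer when the consumer returns. No continuations involved.

def producer(yield_):
    for i in range(3):
        yield_(i)  # yield = ordinary call into the consumer

received = []
producer(received.append)
assert received == [0, 1, 2]
```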


Good point! You might be interested in reading this article on the topic: https://without.boats/blog/coroutines-and-effects/


Thanks, I did find that interesting. I would say Bluefin is another entry in the static/lexical row, whereas its cousin effectful is in the static/dynamic row (although this may be a slightly different interpretation of the terms than is used in the article).


It's similar on the surface. Another language, Effekt, does actually use interfaces for their effect declarations rather than having a separate `eff` declaration.

The difference comes in their use. There are two things of note. First, the implementation of an interface is static: it's known at compile time. For any given concrete type, there is at most one implementation of MovieApi. You're using the interface, then, to be generic over some number of concrete types, by way of specifying only what you need. Effect handlers aren't like this. Effect handlers can have many implementations, actually. This is useful for, e.g., adding logging, writing tests that simulate I/O without actually doing it, or just having different behavior at different places across the program / call stack...

    eff MovieApi {
      def getPopularMovies();
    }
    def main() {
      run {
        println("Alice's movies: ", getPopularMovies());
      } with handler MovieApi {
        def getPopularMovies() = [
          "Dr. Strangelove", 
          "Lawrence of Arabia",
          "The End of Evangelion",
          "I Saw the TV Glow"
        ];
      }
      run {
        println("Bob's movies: ", getPopularMovies());
      } with handler MovieApi {
        def getPopularMovies() = [
          "The Magic School Bus: Space Adventures",
          "Spy Kids 3-D: Game Over",
          "Twilight: Breaking Dawn: Part II"
        ];
      }
    }
Second, the effects of effect handlers are not functions. They're under no obligation to "return", and in fact, in many of the interesting cases they don't. The `resume` construct mentioned in the article is a very special construct: it is taking the "continuation" of the program at the place where an effect was performed and providing it to the handler for use. The invocation of resume(5) with a value looks much like a return(5), yes. But: a call to resume 1) doesn't have to happen and the program can instead continue after the handler i.e. in the case of an effectful exception, 2) doesn't have to be invoked and the call to resume can instead be packaged up and thunkified and saved to happen later, and 3) doesn't have to happen just once and can be invoked multiple times to implement fancy backtracking stuff. Though this last one is a gimmick and comes at the cost of performance (can't do the fast call stack memcpy you could do otherwise).
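
A generator-based sketch in Python (hypothetical, not tied to any library) of those three `resume` behaviors: the computation performs an effect by yielding, and the handler decides whether and when to send a value back.

```python
# The computation suspends at `yield` (performing an effect); the
# handler side holds the suspended continuation as a first-class value.

def program():
    x = yield "ask"     # perform an effect; suspended until resumed
    return x + 1

# 1) Never resume: the continuation is abandoned, like an exception --
#    the code after the yield simply never runs.
g = program()
g.send(None)            # run up to the effect
g.close()

# 2) Resume later: the continuation can be packaged up ("thunkified")
#    and invoked whenever we like.
g = program()
g.send(None)
resume = g.send         # the saved continuation
try:
    resume(41)
except StopIteration as done:
    result = done.value
assert result == 42

# 3) Multi-shot resumption is the one case generators can't express:
#    each suspension point can be resumed at most once.
```

That last limitation is exactly why multi-shot handlers need something stronger than generators/one-shot coroutines underneath.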

So to answer your question more briefly, effects differ from interfaces by providing 1) a decoupling of implementation from use and 2) the ability to encompass non-local control flow. This makes them not really compete with interfaces/classes even though the syntax may look similar. You'd want them both, and most effectful languages have them both.


ORC/ARC is a reference-counting garbage collector. There's a bit of a terminological clash out there as to whether "garbage collection" includes reference counting (it's common for it not to, despite reference counting... being a runtime system that collects garbage). Regardless: what makes ORC/ARC interesting is that it optimizes away some/most counts statically, by looking for linear usage and eliding counts accordingly. This is the same approach taken by the Perseus system in use in some Microsoft languages like Koka and Lean, though ORC/ARC came a little earlier and doesn't do the whole "memory reuse" thing the Perseus system does.

So, for ergonomics: reference counting is not a complete system. It's memory safe, but it can't handle reference cycles well -- if two objects retain references to each other, there will always be a reference to both of them and they'll never be freed, even if nothing else depends on them. The usual way to handle this is to ship a "cycle breaker" -- a mini tracing collector -- alongside your reference counting system, which, while a little nondeterministic, works reasonably well.
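
CPython happens to work exactly this way (reference counts plus a tracing cycle breaker in the `gc` module), so the cycle problem is easy to observe firsthand:

```python
# Refcounting alone can't free a two-object cycle; the gc module's
# tracing pass ("cycle breaker") has to find it.
import gc

class Node:
    def __init__(self):
        self.other = None

gc.disable()                 # switch off the cycle breaker
a, b = Node(), Node()
a.other, b.other = b, a      # a <-> b keep each other's refcount above zero
del a, b                     # unreachable, but refcounts never hit zero
found = gc.collect()         # run one tracing pass by hand
assert found >= 2            # the cycle is detected and reclaimed
gc.enable()
```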

But it's a little nondeterministic. Garbage collectors that trace references, and especially generational tracing systems with the fast-heap ("nursery" or "minor heap") / slow-heap ("major heap") distinction, are really good. There's a reason tracing collectors are used by most languages -- ORC/ARC and similar systems have put reference counting back in close competition with tracing, but it's still somewhat slower. Reference counting offers one alternative benefit, though: the performance is deterministic. You have particular points in the code where destructors are injected, sometimes without a reference check (if the ORC/ARC optimization kicks in) and sometimes with one, but you know your program will deallocate only at those points. That isn't the case for tracing GCs, where the collector is more like a totally separate program that barges in and performs collections whenever it so desires. Reference counting offers an advantage here. (Also in interop.)

So, while you do need a cycle breaker to avoid potentially leaking memory, Nim tries to have it do as little as possible. One of the tools they provide to the user is the .acyclic pragma. If you have a data structure that looks like it could be cyclic but that you know is not -- for example, a tree -- you can annotate it with the .acyclic pragma to tell the compiler not to worry about it. The compiler has its own (straightforward) heuristics too, and if you don't have any cyclic data in your program and let the compiler know that... it just won't include the cycle collector at all, leaving you with a program with predictable memory patterns and behavior.

Reading the design documentation, what the .cyclic annotations in Nim 3.0 will do is invert this: they replace the .acyclic annotations. The compiler will assume all data is acyclic, and will only include the cycle breaker if the user annotates some cyclic data structure as such. This means that if the user messes up they'll get memory leaks, but in the usual case they get access to this predictable performance. That seems like a good tradeoff for the target audience of Nim, and a reasonable worst case -- memory leaks sure aren't the same thing as memory unsafety, and I'm interested to see design decisions that strike a balance between burden on the programmer vs. burden on performance, w/o being terribly unsafe in the C or C++ fashion.


The short answer is you'd write your code the same, then add .cyclic annotations on cyclic data structures.

("The same" being a bit relative, here. Nim's sum types are quite a bit worse than those of an ML. Better than Go's, at least.)


This isn't correct. Mastodon merged fetch-all-replies in March. https://github.com/mastodon/mastodon/pull/32615

The only difference in visible replies is in the moderation choices of the server the post is viewed from.


Yes, but it's off by default for a reason: it's not realistic to expect every node to fetch all messages.

This is a catch-22, because the reason the fediverse is more decentralized is the low barrier to entry for running a node -- but that barrier is only low because a node doesn't have to fetch every single message and piece of media.


It's currently enabled on the flagship instance and will be on by default in the upcoming 4.5 release.


Creating more monolithic centralization.

Us small instances can't afford it.


How does it work?

In ATProto, there is no need to do this on demand because the data is already there in the AppView. When you want to serve a page of replies, you read them from the database and serve them. There is no distributed fetching involved, no need to hit someone else's servers, no need to coalesce them or worry about limiting fetches, etc. This is why it works fine even for threads with thousands of replies and hundreds of nesting levels. It can also be paginated on the server.

If you don't have this information on your server, how can you gracefully fetch thousands of replies from different servers and present a cohesive picture during a single request? I'm sure this PR makes an attempt at that, but I'm not sure this is a direct comparison, because Mastodon can't avoid doing this on demand. If we're comparing, it would be good to list the tradeoffs of Mastodon's implementation (and how it scales to deep threads) more explicitly.


There is a detailed explanation available at the link I posted. Second header, "Approach".


What do you expect the performance characteristics to be compared to querying a database?


I expect them to be unimportant. This has been merged upstream and running on the flagship Mastodon instance for a little while now.

There is also a section related to performance available at the link I posted. Third header, "Likely Concerns", second subheader, "DoS/Amplification".


What do you mean by unimportant?

I mean from the user's perspective: when I open a thread, I expect to instantly see the entire discussion happening across the entire network, with the paginated data coming back in a single roundtrip. Moreover, I expect every actor participating in the said discussion (wherever their data is stored) to see the same discussion as I do, with the same level of being "filled in", and in real time (each reply should immediately appear for each participant). It should be indistinguishable from the UX of a centralized service where things happen instantly and are presented deterministically and universally (setting aside that centralized services abandoned these ideals in favor of personalization).

With ATProto, this is clearly achieved (by reading already indexed information from the database). How can you achieve this expectation in an architecture where there's no single source of truth and you have to query different sources for different pieces on demand in a worker? (To clarify, I did read the linked PR. I'm asking you because it seems obviously unachievable to me, so I'm hoping you'll acknowledge this isn't a 1:1 comparison in terms of user experience.)

To give a concrete example: is this really saying that replies will only be refreshed once in fifteen minutes[1]? The user expectation from centralized services is at most a few seconds.

[1]: https://github.com/mastodon/mastodon/pull/32615/files#diff-6...


I'm not very interested in arguing over the ins and outs of "user expectations" and Mastodon vs. Bluesky, sorry. I would suggest you try it yourself and come to your own conclusion about whether this is a usable system :^)


I'm arguing that "not really consistent" from the grandparent post still applies, and therefore your "this isn't correct" isn't correct.

For realtime discussions (like this one), I don't think we can call it consistent if it takes multiple minutes for each back-and-forth reply to propagate across instances in the best case (and potentially longer through multiple hops?) because you'll see different things depending on where you're looking and at which point in time.


In practice, this is rarely an issue due to the nature of human attention. Beyond a couple dozen speakers in a conversation, it's noise.

At least by my observation (I haven't pulled apart the protocol to know why): if you're in a conversation on Mastodon, it's real good about keeping you in it. The threading of posts seems to route them properly to the host servers the conversing accounts live on.


And yet I could have a realtime public threaded conversation on Twitter, and am having one on Bluesky (regardless of which PDSs or AppViews other people are using), but cannot in principle have on Mastodon (unless everyone I talk to shares my instance). Does this say anything about relative ability of ATProto vs ActivityPub to meaningfully compete with centralized services?

I hear your point that slower conversation can be better. That’s a product decision though. Would you intentionally slow down HN so that our comments don’t appear immediately? You could certainly justify it as a product decision but there’s a fine line between saying you should be able to make such decisions in your product, and your technology forcing you to make such decisions due to its inability to provide a distributed-but-global-and-realtime view of the network.


I'm not sure where you reached the conclusion that you can't have a realtime public threaded conversation on Mastodon; I do it frequently. The way it generally works is that clients will auto-at-tag people in the conversation, which makes sure the message is routed to all in the conversation within more-or-less milliseconds.

Auto-at-tagging doesn't scale to dozens and dozens of actively-engaged speakers, but neither does human attention, so that's not a problem that needs to be solved.


I realize that we might be arguing over definitions here, but to me part of the experience of Twitter-like conversation is seeing other replies appear in real time even when they’re not directed at me — same as how you’ve noticed this thread on HN.

Seeing the existing convo in real time lets me decide which points to engage with and which have been explored, and to navigate between branches as they evolve in real time (some of which my friends participate in). I do earnestly navigate hundreds of times within an active thread — maybe it’s not your usage pattern but some of us do enjoy a realtime conversation with dozens of people (or at least observing one). There’s also something to the fact that I know others will observe the same consistent conversation state at the time I’m observing it.

You might not consider such an experience important to a product you're designing, but you're clearly taking a technological limitation and inventing a product justification for it. If Mastodon didn't already have this peculiarity, you wouldn't be discussing it, since replies appearing in realtime would just seem normal.

In either case, whether you see it as a problem to be solved or not, it is a meaningful difference in the experiences of Twitter, Bluesky, and Mastodon — with both Twitter and Bluesky delivering it.


(update: it turns out fetching all the discussion posts is now supported in Mastodon v4.4. https://mastodon.exitmusic.world/@james/115147206129637513)


Oh, that's supported (though the UX is not really ideal): if they're not on the same server as you, you can navigate to the post on its host server and you'll see all replies there. To join the conversation, you can hit reply on one of the posts and you'll get a UI popup to route you back to your own server to respond from there.

It's definitely not as clean as a centralized solution though.


This kind of UX is the main reason I personally don't use Mastodon. It's just not intuitive.


Good to know. Thanks for the update!

