Hacker Times

Wasn't aware there are ~2k relays now. Has the inter-relay sharing situation improved?

When I tried it a long time ago, the idea was just a transposed Mastodon model: the client would automatically multi-post to a dozen different servers (relays), hoping the post would land on at least one relay shared between the user and their followers. That didn't seem to scale well.



Getting clients to do the right thing is like herding cats, but there has been some progress. In early 2023 Mike Dilger came up with the "gossip model" (renamed the "outbox model" for obvious reasons). Here's my write-up: https://habla.news/hodlbod/8YjqXm4SKY-TauwjOfLXS

The basic idea is that for microblogging use cases, users advertise which relays their content is stored on, and clients follow those pointers. This implies there are less-decentralized indexes holding the pointers, but it does help distribute content to aligned relays instead of blasting content everywhere.
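A rough sketch of how a client follows those pointers, assuming the NIP-65 relay-list format (kind 10002 events with `r` tags); the event data here is hypothetical, not fetched from a real relay:

```python
# Sketch of outbox-model relay resolution, assuming the NIP-65
# relay-list format (kind 10002, "r" tags). Hypothetical data only.

def read_relays(relay_list_event: dict) -> list[str]:
    """Return the relays a follower should READ a user's notes from.

    An "r" tag is ["r", <url>] (read+write) or ["r", <url>, "read"/"write"].
    Notes are fetched from the relays the author WRITES to.
    """
    relays = []
    for tag in relay_list_event.get("tags", []):
        if len(tag) >= 2 and tag[0] == "r":
            marker = tag[2] if len(tag) > 2 else None
            if marker in (None, "write"):  # author publishes here
                relays.append(tag[1])
    return relays

# Hypothetical relay-list event advertised by someone we follow
event = {
    "kind": 10002,
    "tags": [
        ["r", "wss://relay.example.com"],          # read + write
        ["r", "wss://inbox.example.net", "read"],  # their inbox only
        ["r", "wss://posts.example.org", "write"], # where they publish
    ],
}

print(read_relays(event))
# ['wss://relay.example.com', 'wss://posts.example.org']
```

The point is that a follower only needs connections to the handful of relays each followed user writes to, rather than every relay in the network.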

Also, relays aside, one key difference from ActivityPub is that no third party owns your identity. That means you can move from one relay to another freely, which is not true on Mastodon.
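Concretely, identity is portable because an event is addressed by the author's public key and a hash of its contents, not by a home server. A minimal sketch, assuming NIP-01's serialization (compact JSON of `[0, pubkey, created_at, kind, tags, content]`); the key value is hypothetical:

```python
# Sketch: a Nostr event id depends only on the author's pubkey and the
# event contents (per NIP-01), so nothing ties it to a particular relay.
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int,
             tags: list, content: str) -> str:
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

pk = "ab" * 32  # hypothetical 32-byte public key, hex-encoded
eid = event_id(pk, 1700000000, 1, [], "hello nostr")
print(len(eid))  # 64 (a sha256 hex digest)
```

Any relay that stores the (signed) event serves the same identity; switching relays never changes who you are.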


Thanks! Not to be critical, more thinking out loud, and I don't have solutions to the following myself, but that sounds like it could 1) encourage power concentrating in the most popular relays, potentially leading to the same kind of speech issues as semi-centralized ActivityPub, and 2) it won't solve the need to maintain multiple firehose connections.

I've been wondering whether the multi-firehose architecture is really the way forward for decentralized, censorship-resistant microblogging. I remember the Windows Mobile clients for 2ch.net (today 5ch.io) that would scrape thread deltas from a bunch of subdomains under it; that was plenty fast on a 128k (advertised) connection, pulling thousands of posts, in the late 2000s. So I think an RSS-style system pulling delta updates from multiple domains could work, without the insanity of early Nostr or the massive liabilities Mastodon puts on instance operators, especially if those domains could be set up with relative ease.
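The delta-scraping idea could look something like this sketch; the mirror domains, post shape, and timestamp cursor are all hypothetical illustration, not any real API:

```python
# Sketch of RSS-style delta polling across several domains: fetch only
# posts newer than a cursor, merge, dedupe. Hypothetical data shapes.

def merge_deltas(sources: dict[str, list[dict]], last_seen: int) -> list[dict]:
    """Merge per-domain delta feeds, dedupe by post id, oldest first."""
    seen, merged = set(), []
    for domain, posts in sources.items():
        for post in posts:
            if post["ts"] > last_seen and post["id"] not in seen:
                seen.add(post["id"])
                merged.append(post)
    return sorted(merged, key=lambda p: p["ts"])

# Two hypothetical mirrors carrying overlapping posts
sources = {
    "a.example.net": [{"id": "p1", "ts": 10}, {"id": "p2", "ts": 20}],
    "b.example.net": [{"id": "p2", "ts": 20}, {"id": "p3", "ts": 30}],
}
print([p["id"] for p in merge_deltas(sources, last_seen=10)])
# ['p2', 'p3']
```

Because any mirror can serve the same deltas, a reader only needs one live domain out of many, which is the property that made the 2ch-style scraping cheap.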

Yeah, I don't exactly understand why you have to sign up anew on each Mastodon server, or why server operators have to be responsible for their users. It worked when it was urgently needed, which was brilliant, but the ID system had underbaked spots.


Yeah, any time you need either an index or a caching layer you have to re-centralize one way or another. But decoupling those "services" from the data storage itself helps, and credible exit makes the gatekeepers far less powerful. An example: a few weeks ago nostr.band, one of Nostr's main indexer/search services, went away. Search is still somewhat impacted (evidence that we were centralized around it), but indexing (i.e., finding users' relay lists) is still covered by several other services.


A difference from Mastodon is that your account is independent of any relay.

> scale well

It is up and it is growing.



