Hacker News | new | past | comments | ask | show | jobs | submit | johnecheck's comments | login

Seems like the solution is to either not incriminate yourself online or get plausible deniability with a pseudonym.

AGAIN, we are way past SIMPLE SOLUTIONS like this. We have enough data to see the potential for harm, and we can mitigate it through smart policy rather than falling back on this simple argument.

Personally, I really like that my feeds aren't getting that level of granular detail. I prefer the explicit control I have with 'Show more like this' and 'Show less like this'.

I generally think that. But letting dwell time, clicks, and open rates expand the recommender, and then using a swipe-bound 'disinterested'/'show less like this' to cull, has been pretty efficient. I used to feel dumped into simclusters; now I see a more specific subset of posts I prefer (while still casting what feels like a wide net).

I really liked when bsky introduced 'show more/less' and then expanded it to custom feeds. But I'm afraid recommender systems simply work better with more data. And I think the feed operator alone gets sent only a limited set of interactions?

I'm not exactly sure how it would work in atproto, but I could imagine an enriched 'graph interactivity' where you can toggle which signals you share and how much privacy you keep.
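The toggle idea above could be sketched as a client-side signal filter. Everything here is hypothetical, not part of atproto: the signal names (`dwell_time`, `click`) and the policy shape are illustrative, and the only assumption is that the client gets to strip fields before an interaction event reaches a feed operator.

```python
# Hypothetical sketch: the client decides which interaction signals a feed
# operator may receive. Signal names and the policy dict are illustrative,
# not real atproto lexicon fields.

# Default policy: share explicit signals, keep implicit ones private.
DEFAULT_POLICY = {"like": True, "repost": True, "dwell_time": False, "click": False}

def filter_signals(event: dict, policy: dict) -> dict:
    """Strip any signal fields the user's policy doesn't opt into.

    The post identifier always passes through; every other key must be
    explicitly enabled in the policy.
    """
    return {k: v for k, v in event.items() if k == "post_uri" or policy.get(k, False)}

event = {
    "post_uri": "at://did:plc:example/app.bsky.feed.post/abc",
    "like": True,
    "dwell_time": 12.4,
    "click": True,
}
print(filter_signals(event, DEFAULT_POLICY))
# under the default policy only post_uri and like survive
```

A user wanting richer recommendations could flip `dwell_time` on for a trusted feed and leave it off elsewhere; the trade-off between signal richness and privacy becomes per-feed rather than global.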


You're describing Liquid Democracy[1]. Seems challenging to implement but definitely an interesting idea.

[1] https://en.wikipedia.org/wiki/Liquid_democracy
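The core mechanic of liquid democracy is simple enough to sketch: each voter either votes directly or delegates to someone else, and delegation chains are followed until they reach a direct vote. This is a minimal illustration with made-up names, not any particular implementation; real systems also need revocation, per-topic delegation, and so on.

```python
# Minimal sketch of liquid-democracy vote resolution (hypothetical data model).
# direct_votes maps voter -> choice; delegations maps voter -> delegate.
# Delegation chains are followed until a direct vote is found; cycles and
# dead ends simply contribute no vote.
from collections import Counter

def tally(direct_votes: dict, delegations: dict) -> Counter:
    results = Counter()
    for voter in set(direct_votes) | set(delegations):
        seen = set()
        current = voter
        # Walk the delegation chain until we hit someone who voted directly.
        while current in delegations and current not in direct_votes:
            if current in seen:  # delegation cycle: this vote is lost
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            results[direct_votes[current]] += 1
    return results

votes = {"alice": "yes", "bob": "no"}
proxies = {"carol": "alice", "dave": "carol", "erin": "bob"}
print(tally(votes, proxies))  # alice's direct vote also carries carol and dave
```

The appeal is that delegation is transitive and per-person: carol trusts alice, dave trusts carol, and both votes flow to alice's choice without anyone needing a one-size-fits-all representative.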


I didn't know this had a name, thank you so much!


Trust is subjective! Let's establish trust in each other rather than rely on one-size-fits-all solutions.

Personally, I trust my friends, family, and some public figures and institutions to varying degrees. I want to see social experiences that reflect that.


It's X, not Twitter. That system is directly controlled by a single man for his own benefit; we should use the name that reminds us of that.

He openly promotes himself and those who pay him. If you think Musk doesn't have an admin dashboard where he can demote accounts he dislikes and promote his friends, I have some... unkind words for you.

It's about control. Control over your information intake is partial control over you.

We can do so much better than ceding that power to the highest bidder.


We're not sinister, we just have these metrics that we prioritize at all costs, up to and including child well-being!


> There are trust issues all the way down.

Nonetheless, we must somehow build trust in others and denounce the undeserving. Some humans deserve trust. Will these AI models?


That's unbelievably stupid. But then again, so is refusing to rent out the space at the market rate just so you can pretend it's worth more than it really is. So... good luck, I guess?


that's the joke.gif


> Even if the conclusion is broadly correct, that doesn't mean the reasoning used to get there is consistent.

This is the conclusion of a reply that focused entirely on critiquing OP's style/AI use instead of their reasoning? Ironic.


Poe's Law strikes again!


If someone generates a ten-thousand-word slop essay with AI, do I have a moral obligation to critique its reasoning step by step instead of merely pointing out its origin?

If I do, it just so happens I have a ten-thousand-word rebuttal for you…


If someone generates a massive essay with AI, you don't have a moral obligation to do anything.

If you want to argue that the reasoning in the essay is incorrect, I do expect you to refute the reasoning instead of merely attacking the source.


I strongly suspect it's the latter, though someone please chime in if I'm wrong.

Even so, this is a real advancement. It's impressive to see existing techniques combined to meaningfully improve on SOTA image generation.

