I think a lot of people are choosing to ignore that a lot of companies have done things in the past that were not illegal at the time. However, those actions were later made illegal because the behavior was deemed antithetical to our values.
For example, Standard Oil did not break any laws in its ruthless consolidation of the nascent oil industry. In fact, it exploited the law to allow it to grow into the monstrosity that it eventually became. In response, Congress passed the Sherman Antitrust Act in 1890 which subsequently prevented the actions that Standard Oil had used to consolidate the market.
There should be no question that what FB is doing here, while not illegal, is highly dubious ethically.
I really appreciate this point. I often see it as written rules (laws) and unwritten rules (ethics). If something breaks the unwritten rules we have about how people are supposed to interact with each other, then we often codify that rule into law. Many people will say "I didn't break the law" in situations where many others would say that person did break an unwritten rule.
> There should be no question that what FB is doing here, while not illegal, is highly dubious ethically.
At the same time, I believe some of the stuff FB has done is currently illegal, such as this example in one of the whistleblower's disclosures to the SEC [0]:
> Our anonymous client is disclosing original evidence showing that Facebook, Inc. (NASDAQ: FB) has, for years past and ongoing, violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors, including, inter alia, through filings with the SEC, testimony to Congress, online statements, and media stories.
So it could be a combination of them both violating ethics and violating the law.
Plato put it like this in his "Laws": “Laws are made to instruct the good, and in the hope that there may be no need of them; also to control the bad, whose hardness of heart will not be hindered from crime.”
I wonder how he'd speak of a supposedly even better-educated society stealing the future of the next one through the circular validation of our waste-filled industrialism.
Given that most Greeks of that era believed they lived after the decline of a golden age, I suspect he might be more understanding than most people today.
> I often see it as written rules (laws) and unwritten rules (ethics).
I think this is a very dangerous line to walk. A common phrase in law is "the law often allows what honor forbids" and that is because there is a difference between the law and ethics and IMO that is a good thing.
Is it ethical to eat all the cookies in the cookie jar and leave none for anyone else? No. Should it be illegal? No.
> that is because there is a difference between the law and ethics and IMO that is a good thing.
I feel confused because I didn't think I was saying that we should get rid of ethics by codifying everything into law. I thought my main point was that sometimes when people continue to break those unwritten rules, we will decide to write down those rules and enforce more strict consequences—not saying we should always do that.
I actually would prefer if we had fewer written laws and relied more on unwritten rules of interacting with each other. And when someone breaks those unwritten rules, we would apologize and forgive and work together to resolve the conflict and rebuild trust in each other, instead of pushing to punish people in a retributive system of justice.
Let's say that as long as there are two or more cookies in the jar, I add another one every morning. Under that scenario (especially if we go so far as to say cookies reproduce at some fixed proportion), then yeah - it's totally illegal to eat all the cookies. The most common example of this tragedy of the commons is fishing, but it happens all over the place.
Specifically on the topic of cookies - it honestly is "forbidden" in a lot of households to eat all the cookies in the jar. At work you'll probably face some consequences if there's a communal cookie jar (or, the more common scenario, drinking all the half-n-half and not getting more). We don't really have "public" cookie jars, so this scenario is pretty contrived, but if there were one (e.g. if NYC installed a big cookie jar in Times Square for Halloween) then it probably would actually be illegal (or at least against a city ordinance) to eat more than X cookies. But, like I said, it feels pretty contrived.
Illegal isn’t just another word for “forbidden” and the Law isn’t just another word for “the rules”, particularly in the context of this larger discussion on Facebook’s actions.
Your parents can punish you for eating all the cookies but the government will not sanction you nor them for doing so (even if a hypothetical and tyrannical government in theory could; maybe let’s not theorycraft absurd ordinances).
The issue for me with this scenario is that it's already so contrived that yea - of course you're not going to be arrested for eating all the cookies from the cookie jar. But only for that action - if you scale that action up to a significant level then you absolutely might be breaking a law. There are lots of things that are legal (or at least looked the other way at) on certain small scales and a lot of penalties escalate with the severity of the offense. If the US had a $1B jar of cookies and you ate all of them (aside from now being diabetic) the government would definitely pursue you for theft.
I suppose that's the truth of the matter - it actually might be illegal to eat all the cookies in the cookie jar (assuming you're not given express permission to do so); it's just that nobody cares because of the scale of the action. It's also usually a domestic affair - but just because you live in the same house as someone else doesn't mean you have a right to all their things (it's just that usually all the people involved are in a family, and then property laws get a bit weird).
If your roommate in college had some super rare cookies valued at $10k and you ate all of them, then they'd definitely be able to take you to court. It's just that nobody cares if an Oreo goes missing.
Alright, so the thing about the cookies and the cookie jar up in the original post: for the point the parent was trying to make, it was not actually contrived, because you can replace that specific example with about a billion other minor, slightly unethical examples that also shouldn't be illegal, and his point still stands.
Where it gets contrived is when you start talking about hypothetical socialist cookie jars policed by tyrannical municipal governments or collector’s cookies. You steal $10K worth of cookie products off the shelf, that’s robbery. You take a single pack of cookies off the shelf: that’s petty theft, and if you eat all the cookies your mom or roommate made and leave none for anyone else, then you’re just an asshole. Strangely enough the law is quite capable of making distinctions.
What the law is not capable of doing, though, is giving people without a moral compass a moral compass, or aligning differing moral compasses from different cultures. For all the nuances the law can make, it's still just a sledgehammer in the face of human behavior, and social custom is how we govern ourselves the vast majority of the time without involving sheriffs and courts and legislators. Every time the law takes something governed by social convention and puts it into the hands of the courts, private society loses a little part of itself to the people with the bigger guns, for good or for ill.
We've slowly seen an alternate interpretation promulgated by many: anything that is not illegal is ethical. The endpoint is practically the same (anything legal is ethical and vice versa) but it arguably makes for a worse society.
If we are trapped somewhere with no other food than the cookie jar, then we will see how long it takes before eating all the cookies is illegal.
Justice is a messy concept because it is rooted in specific circumstances, and it's absurd to think there's a clear line between what's unethical and what's illegal.
The point is that being forbidden and being illegal are different ideas. It’s bad for society to codify too much behaviour in law. Knowing the law is no substitute for knowing the difference between right and wrong.
Regulating Facebook is a great example. Congress could easily react to Facebook's indiscretions by passing new laws here which stifle innovation.
I think I got that part. I was referring to the book The Tragedy of the Commons (a very interesting small book that I recommend), which basically says that when you have N users of some commons, even if game theory says it's in their individual best interest to protect the commons, as that is the strategy that maximizes satisfaction, once N becomes large enough someone will start damaging it and soon everyone will do the same. So the tragedy is that you actually have to enforce the behaviour that's in everyone's best interest as a law.
Oh, I totally agree. When it comes to a fishing community surrounding a lake, having rules in place to limit the number of fish each fisherman can catch benefits everyone.
But the same principle applies to me and my friends eating cookies. Each of us is incentivized to eat as many as we can - because it's a limited resource and they're delicious. But in this case we use social, not legal, consequences to punish defectors. If my friends see me eat all the cookies, they will be angry with me. This is a strong enough system to keep us honest.
But imagine if we added a law here, making it a crime to eat all the cookies. Would that improve our behaviour? No. We might not want to share cookies at all if it's possible one of us could go to jail for it.
I think laws are often appropriate. But laws are a blunt instrument. They can't be our only tool to protect the commons.
> It’s bad for society to codify too much behaviour in law.
The issue with over-codification is one of the complexity of the laws that result - not that a large number of prohibitions is actually damaging to society. If too many laws exist then enforcement becomes intractable, arbitrary and unjust - but if enforcement could be sanely and fairly dealt out then there are lots of things that we'd appreciate being laws - e.g. sniping someone's parking spot while they're pulling in: it's a dangerous action that encourages people to park faster than they're comfortable with and generally makes people act like assholes... but is it worth paying someone $50k/year to prevent sniping parking spots? Nope.
> if enforcement could be sanely and fairly dealt out then there are lots of things that we'd appreciate being laws
I'm hearing a hypothesis - that if enforcement were cheap and easy, we could make a better legal system (and a better society) by regulating far more human interaction. Presumably we'd have AI systems monitoring everyone at all times, and issuing automatic fines if you snipe someone's parking spot.
"Citizen #421 you have been found guilty by AI of the crimes of eating the last cookie, and failing to call your mother while overseas. Please proceed immediately to the nearest reeducation facility for behavior correction"
I agree that there is often marginal (and proximate) benefit in having more laws. I'm in favor of FB being regulated somehow. But my point is that more laws - even when restricting antisocial behaviour - can still create a worse society. The reason is not just because enforcement is arbitrary.
Another example of this is the controversial Canadian Bill C-16, which sought to make it essentially illegal to misgender someone. Critics of the bill argued that despite everyone agreeing that it's extremely disrespectful to misgender someone, it still shouldn't be illegal. This is a subtle argument, but it's an important one if we want our societies to stay free and healthy.
Facebook is governed by many laws and it flouts a lot of them. For example, they screw up the DMCA process continuously in a way that by statute should make them lose their safe harbor (no "Section 230 reform" needed). Just by sheer volume of mistakes related to processing DMCA notices and counternotices they should lose their safe harbor protection. Even though surely many of the mistakes are unintentional human error, it doesn't actually matter that much according to the wording of the law.
They allow the hosting of a lot of egregious criminal activity, including maintaining uncountable numbers of what the law would define as notorious marketplaces for criminal activity. The trouble is lack of enforcement. You cannot fix a lack of enforcement with new laws that will just go unenforced.
(arguably eating cookies that aren't yours is a crime, and I don't doubt that someone has in the past been arrested for it in ridiculous circumstances)
Then there is the case of things that are illegal but are not enforced, leading to the question: what is the law? What is written, or what is enforced?
How many of you went above the speed limit today?
I suspect that much of what goes on in the stock market is similar.
>Then there is the case of things that are illegal but are not enforced, leading to the question: what is the law? What is written, or what is enforced?
Ah, I really appreciate this point. I imagine when it comes to law, I would want the goal to be the alignment of what is written and what is enforced. If law is what is written but not enforced, then it seems to be law-in-name-only, whereas if it's enforced but not written, then it seems to be, I don't know, chaotic? Haha, I can't think of a better term right now. I suppose the latter might depend on how much unwritten agreement there is amongst people on the rules. If a lot of agreement, then maybe it's more like ethics. If not much agreement, maybe it's more unpredictable, almost lawless.
Having driven in quite a few different jurisdictions over the years, my impression became: The safest driving speed is the one that blends with local driving culture. In some places, that’s well above the posted limits, and in others it’s quite a bit below.
I suspect that degrees of being generally law abiding also vary across cultures.
AFAIK SEC laws and regulations about misrepresentation are only sporadically enforced to encourage compliance by example. Look at what Musk and his companies have gotten away with. Of course, I am all for these disclosures, which of course FB will pay their way out of without admitting wrongdoing. Because corporations manage our government, not the other way around.
If a poem (or book) makes 10% of its readers more likely to become geniuses and contribute to solving world problems such as cancer, but 0.1% of its readers are more likely to commit suicide, should that book be banned by law?
Today's online society is built on posts created by content creators around the world. Algorithms can barely scratch the surface of interpreting their content, and humans don't scale to reviewing every post, but statistics such as the above could arguably be inferred fairly easily from a combination of engagement data (clicks/scrolls) and attrition/session-revisit numbers.
Which is really problematic, because codifying into law rules and punishments based on aggregated outcomes and impact on us as a society (or on sub-segments such as teens) makes it very hard to navigate between censorship vs. positive overall outcomes vs. specific negative outcomes for some outliers.
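As a purely illustrative sketch of the kind of inference described above - every field name, threshold, and number here is hypothetical, not anything Facebook actually does - aggregate engagement and retention logs could be turned into a crude cohort-level outcome estimate roughly like this:

    # Hypothetical sketch; field names and thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class UserWeek:
        user_id: int
        exposed: bool             # saw the content category in question this week
        sessions: int             # rolled-up engagement (clicks/scrolls)
        returned_next_week: bool  # session-revisit / attrition proxy

    def attrition_rate(rows, exposed):
        """Share of a cohort that did NOT come back the following week."""
        cohort = [r for r in rows if r.exposed == exposed]
        if not cohort:
            return 0.0
        return sum(not r.returned_next_week for r in cohort) / len(cohort)

    def estimated_effect(rows):
        # Crude difference in attrition between exposed and unexposed cohorts.
        # A real analysis would need controls, confounders and confidence intervals.
        return attrition_rate(rows, exposed=True) - attrition_rate(rows, exposed=False)

The uncomfortable part is that this works entirely on aggregates: nobody has to read a single post to see that one category of content correlates with users drifting away (or worse).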
Looks like you are willfully ignoring Facebook’s own findings. They know that polarizing content is more engaging yet harmful… and they choose to amplify it anyway.
The same old argument that it’s hard therefore let’s not do anything is not applicable.
Facebook is not a neutral platform that just shows all posts from your friends in a chronological order. They are actively manipulating the stream and are fully responsible for what you consume.
> Facebook [clipped] are fully responsible for what you consume.
I'm not sure how deeply you hold this belief, but I am concerned to see so many people push all blame away from their own actions. While it may be true that Facebook is largely responsible for what is consumed *on Facebook*, individuals are largely responsible for consuming Facebook.
This is true, and if you're going to put Facebook in the spotlight you're going to have to put a light on everyone else. The entire computer gaming industry is one big dopamine cartel. If the Facebook addiction is such a big deal then it's a little ironic that gaming hasn't been completely dismantled.
//edit: honestly I think politics are a little at play here. Facebook (these days) is used heavily by an older, more conservative crowd and I think it's irritating to the other side
That's true, but does my mother understand what's really going on? Do you? Do I? Choosing to pick up the phone and call your daughter and choosing to go on Facebook are very different things, and people who grew up with the former might not realize how different the latter really is.
I think they fall into more responsibility here because they’ve also designed it to be addictive. If Facebook was easier to quit, I’d hold individuals more accountable.
> While it may be true that Facebook is largely responsible for what is consumed *on Facebook*, individuals are largely responsible for consuming Facebook.
I don't see any shift of blame. Those two aspects are in no way mutually exclusive. Facebook can be 100% at fault for their manipulation and deliberate outrage generation and you can still blame an individual for being irresponsible with their social media usage.
Because they are - they actively filter out rational, positively contributing individuals from the public plaza. They remove all the good people from the world and give the bad ones a stick for leverage and a hose to spray the neighbourhood down in all caps.
I think they do. When you only see posts about how vaccines cause autism, anecdotes about this and that person and the diseases they got from the vaccine, and, on top of that, claims that the vaccine doesn't even prevent the disease it was designed against, then it becomes reasonable to become antivax.
And if Facebook effectively and knowingly chooses, through its selection of algorithm parameters, to promote this material because it increases engagement more than reasonable content does, then yes, I think they should be held at least partly responsible for the harm caused by the anti-vaccine movement.
Walmart is "manipulating" the placement of products on the shelf so that it's more likely for you to engage in bulk buying when you visit their stores.
Both Facebook and Walmart have a fiduciary duty to their shareholders to create value for them.
The difference is that, with user generated content, the idea of black and white "bounds" of the law is no longer applicable and you have to devise a system of checks and balances based on probabilities.
You can take 10,000 posts for offline analysis: give them to some human raters and decide retrospectively what engagement and thoughts (positive/negative) they are generating in teens, which should enable you to draw some statistics about the expected average outcome. This doesn't mean it's either scalable or economically feasible to do so in real time for every post (so you cannot make decisions based on something that doesn't exist at the individual post level).
You can have multiple algorithms, send all of their output to human raters, and get for each algorithm some aggregated behaviour, but then we're back to the book question above -- what ratio of positive vs. negative outcomes in outliers is acceptable, and how do you define a "legal"/"allowed" algorithm?
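A minimal sketch of what that offline comparison might look like - the label names, the 10,000-post sample, and the 0.1% threshold echo the examples above; everything else is hypothetical:

    # Hypothetical offline comparison of feed algorithms via human rater labels.
    from collections import Counter

    def summarize(labels):
        """labels: rater judgments ('positive' / 'neutral' / 'negative') for the
        posts a given algorithm would have surfaced."""
        counts = Counter(labels)
        total = len(labels)
        return {k: counts[k] / total for k in ("positive", "neutral", "negative")}

    def acceptable(summary, max_negative=0.001):
        # The 0.1% cutoff mirrors the book example above; choosing that number is
        # the hard policy question, not the arithmetic.
        return summary["negative"] <= max_negative

    # Usage: run the same 10,000 sampled posts through algorithm A and algorithm B,
    # have raters label what each would have shown, then compare:
    #   summarize(labels_a), summarize(labels_b)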
I am baffled by this display of lack of ethics. Do we need a Walmart comparison to put Facebook’s action in perspective? Facebook - by its own acknowledgement - negatively affects teenage mental health and the democratic processes in many countries. Do you see how different this is from selling more mayonnaise jars in Walmart?
Facebook doesn’t have a duty to manipulate content. This is a very weak excuse that works mostly for people directly benefiting from the situation. Didn’t cigarette companies have a duty to maximize profits? Pharma companies pushing accessible opioids? Is that a more apt analogy?
> Facebook - by its own acknowledgement - negatively affects teenage mental health and the democratic processes in many countries. Do you see how different this is from selling more mayonnaise jars in Walmart?
Replace mental health with physical health and you have a great argument against how food is produced, marketed, and sold. We tackled these issues first with tobacco, and food wouldn't be a bad place to turn our attention after the social media companies.
Corporations are ruthless, inhuman optimization engines. When we don't sufficiently constrain the problems we ask them to solve, we get grotesque, inhuman solutions, like turning healthy desires into harmful addictions.
I would also have OP consider that yes, maybe having corporations like Nestle, Coca-Cola, etc. that prioritize profit above all else is, in fact, also bad. Like, let's be real here: if the CEO of Coke had a button that could double the consumption of Coke products in the USA he would definitely push it, despite the fact that hundreds of thousands of people would become more obese and live worse, shorter lives. Advertising is an attempt at such a button.
The following have for sure been used to commit crimes and fiddle with democracy: Verizon phone conversations, Gmail discussions, Twitter, Snapchat or TikTok messages, etc.
Nobody wakes up and says "let's be unethical today"; rather, it's the reality of life with user-generated content platforms that either you get both outcomes or you get neither.
The discussion is about making people realize that the "technology" to keep only the good parts (without the downsides) hasn't been invented yet.
Hence we're in a position to argue whether it would be more ethical to shut down / censor everything, or to have fruitful discussions on how to emphasize the good outcomes over the bad ones with the current tech (by first understanding it, something politicians seem to be very bad at, or show little interest in compared to the negative FB-sentiment engagement they're generating in their voters -- ironic :) ).
You're presenting a false dichotomy. We don't have to choose between unethical corporate actions or no social media at all. Facebook could exist quite happily without applying any content selection algorithms to your feed. If your feed was literally just a chronological list of posts by your friends, with some interspersed advertising, then they (and you) could claim with some legitimacy that they aren't responsible for any fundamental negative effects of social media.
That's not the situation we're in. In addition to social media presenting some issues around public discourse and misinformation, Facebook is actively encouraging more and more extreme engagement with their platform by explicitly selecting for polarising content. It's this second part that people are taking issue with.
By the way, the solution does not require any censorship (as you mention in your comment) but simply that Facebook stops actively selecting content for your feed (which is itself a form of censorship!)
Nobody? Give it a rest. We're not dumb enough to think everyone in technology, specifically ad tech, is ethical by default. Facebook made their own bed and made the mistake of letting the internal research out of the closed corporate box. They can mitigate the impact of their most engaged content, but it would be to their own fiscal detriment, which is why they fundamentally decide not to mitigate it.
My regular reminder that there is no fiduciary duty to behave unethically. Fiduciary duty is a class of highly specific legal obligations on directors to act attentively and not put their own financial interests above those of shareholders. It is not an obligation to maximise return on investment.
Walmart doesn’t stock land mines, rocket launchers, anthrax, or many other items harmful to democracy and society on its shelves, even though I’m sure it could make a lot of money selling such items.
> Both Facebook and Walmart have a fiduciary duty to their shareholders to create value for them.
I feel like the more this claim is repeated, the more pushback you're going to see against it - and rightly so.
We need to remember that corporations are themselves fictitious legal entities. They only exist because society wills them into existence, and it can do so with arbitrary strings attached - there's no natural right to form a corporation. So, if it turns out that "fiduciary duty to their shareholders to create value" inevitably leads to the abusive megacorp clusterfuck that we are seeing today, why should we be clinging to it?
It’s puzzling how many people are so ready to mask their own responsibility by shifting it to a legal entity that apparently now has a duty to do whatever it takes to generate more profit. As if individually these people wouldn’t act in unethical ways but once they put on the “I am a corporation” mask anything goes.
Whataboutism advances no discussion. Either Facebook's problems are discussed based on Facebook's circumstances and decisions and consequences, or we're better off not posting any message at all.
Comparisons, analogies, and metaphors are useful tools to increase understanding and draw parallels to ideas that are challenging to navigate, and they naturally lead to a variety of thoughtful outcomes or interpretations.
Crying "whataboutism" is as fruitless as you've described above. It is often used to steer a conversation towards a single direction of bias when those comparisons lead to inconvenient conclusions/possibilities that fall outside of what the person claiming it has accepted. Just sayin'. ;)
> Comparisons, analogies, and metaphors are useful tools (...)
Whataboutism is neither. It's a logical fallacy employed to avoid discussing the problem or address issues by trying to distract and deflect the attention to irrelevant and completely unrelated subjects.
I found it an apt comparison, highlighting how we might accept something in physical space (Walmart) yet be critical of the equivalent action in the online space. It's a thoughtful and coherent argument, even if one disagrees with it, not whataboutism.
Let's try to phrase it in an actionable way for the law-makers to act upon it.
Are you suggesting that any profitable company hosting user-submitted content should invest all the profits in moderation teams to the point where they are either a) becoming profit-neutral or b) all the relevant content has been reviewed by a human moderator?
And how do you define relevant content -- having had 50 views? 10 views? 1 view? Who should decide where to set these limits? Do we believe politicians are going to do a better job at it than the existing situation? Or should we ban any non-human-reviewed post just to move the certainty of illegal post removal from 99.9% to 99.99%? (Humans make mistakes too.)
(Facebook is really big, so having even 99.99% of posts in compliance still means an awful number of them escaping the system undetected.)
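To put a rough number on that (the daily post volume here is an assumed figure for illustration, not a Facebook statistic): with on the order of a billion posts per day, 99.99% compliance still leaves

    10^9 \times (1 - 0.9999) = 10^9 \times 10^{-4} = 10^5

that is, roughly 100,000 non-compliant posts slipping through every single day.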
> Are you suggesting that any profitable company hosting user-submitted content should invest all the profits in moderation teams to the point where they are either a) becoming profit-neutral or b) all the relevant content has been reviewed by a human moderator?
Yes, obviously. Why should a company get to profit from sex trafficking or any other such content on their platform, just because it would cost money to take it down?
I know that somebody is raping someone in NYC right now and somebody will be killed in Chicago by the end of the day today. Should we ban the cities, or at least force them to spend their entire budget on security? Or set up a curfew for citizens? Maybe public hangings a la the Taliban - those definitely reduce crime.
Humans use FB, and where you have humans, they commit crimes. Trying to eradicate all crime when you have humans in the loop is generally not a great idea. Besides, fighting trafficking/sex slavery, with very few exceptions, generally means harassing women with zero benefit to society or reduction in actual sex crimes.
Would you agree that it would be wrong for telephone companies to amplify sex slavery conversations? Like they would call you directly and just let you participate in the conversation because that would generate more engagement?
That is a very good counterpoint. I haven't read this Facebook story yet, but I am willing to assume for the sake of argument that that's what happened. I guess it would depend, for me, on whether people saw sex-slavery content and decided to amplify it, vs. an algorithm that finds and promotes "engaging" things without being very smart about what they are.
How are you defining "amplification"? Phones already operate by complex signal amplification over long distances. Why do you think burner phones are still prevalent for all manner of illicit activity?
I don't think the phone company should be shut down because others can use it in a way that's considered devious. I don't think the phone company should play "morality police" either. I simply expect the phone company to provide the service I paid for.
This type of thinking strikes as the kind that would damn Gutenberg for inventing the movable-type printing press because print has been used to disseminate propaganda and debauchery to billions of people several centuries later.
Amplification not in the electrical-signal sense but rather in the sense of amplifying the message. Facebook is giving more visibility to content that it considers more engaging, even if that content leads to harmful outcomes (its own research proves that).
You were making a point regarding phone-operated sex trafficking. Your characterization of what the phone company should do was what I contested. While I'm aware that this was made as a broader point regarding Facebook, amplifying a signal and amplifying a message aren't functionally different. Television is an example where both are happening. Even Twitter and TikTok engage in amplification every time there's some Tide Pod Challenge. I don't see why Facebook would have to be responsible for how people feel about themselves, or for what stunts bad actors pull.
Right. In the case of phone-operated sex trafficking I don't think amplification is even an option. It's not like phone companies are deciding what phone calls you should be receiving today and are lining them up for you to take part in. So they don't involve algorithmic manipulation (or optimization for engagement), unlike Facebook or other social media.
In my parent post I was giving an example of an absurd imaginary situation with phone companies attempting to amplify sex trafficking by directly deciding who will participate in the conversation for the purpose of increasing engagement.
When phone companies came into existence, that's exactly what they did -- they amplified such conversations by making it easier for people to have phone calls and talk at a distance from each other.
They also got amplified whenever long distance calls got cheaper (as the overall volume of conversations increased).
> If a poem (or book) makes 10% of its readers more likely to become geniuses and contribute to solving world problems such as cancer, but 0.1% of its readers are more likely to commit suicide, should that book be banned by law?
I really don't know the answer. I've struggled with this tradeoff myself, as I've built some tools that have powerfully impacted people on an emotional level and I've been hesitant to put them out there because of the severe damage they might do to a small percentage of the population.
That being said, I read a few essays a few years back about Frankenstein and this from a Q&A[0] in the same Slate series[1] that I try to remember when I think about creating such tools:
> Does that make it into a warning against playing God?
> It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.
> Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.
I try to remind myself of this lesson: perhaps it's not about not creating powerful things, but about continuing to maintain those things instead of abandoning them and simply accepting the havoc they wreak.
I think where I often feel the most frustrated is in believing that FB doesn't really seem to be trying that hard at 1) making the platform less addictive, 2) getting rid of bots, 3) suggesting which legislation they want (instead of just punting and saying "we said we want regulation, it's your job, Congress, to create it"), etc.
I don't think they'll ever get rid of all the things that cause harm, as it's even hard to choose dinner for a party of 4 where someone won't get hurt or angry, and this is a scale almost 1 billion times larger. I just want to have the impression that they are trying, or at the bare minimum, that they have the courage to say that sometimes bad things come with the good and they openly say that they are choosing that tradeoff. Maybe they've said it that way, I just don't seem to trust them much in terms of trying to take responsibility for their creation.
That is the problem with following the absolute minimum standards in life - we (society as a whole) have accepted that as long as businesses follow the law, it's all good. We've accepted that the sole purpose of businesses is to make money within the bounds of the law. While this makes sense logically, it isn't good for anyone in the long run, practically. Also remember that all kinds of unfair laws can be passed if you have enough money to buy politicians.
We should strive for higher standards, but who am I kidding - we live in a world of "greed is good" mantra.
> the sole purpose of businesses is to make money within the bounds of the law.
It's worse than that. Businesses constantly make risk/reward/punishment tradeoffs and will flout the law according to their estimation of the risk of being caught and paying fines. There are precious few illegal behaviors that bubble up to criminal charges for executives, so the risk is quantifiable in dollar amounts.
The problem is that without laws, what would those "standards" be?
You have to do more than the law? But how does any business in the future know what that is?
The great things about law are predictability and flexibility. If the law is not enough, we can change it. Then everyone is held to that new standard. But having a standard that is not laid out is the same thing as having no standard at all.
Going into an area where we say companies have to meet an unwritten ethical and/or moral standard is ripe for abuse. Under those conditions, if I show certain messages or ads on my website that are wholly unethical and immoral, but not necessarily illegal in the written law, I'm opening myself up to liability based on violating unstated ethics and morals.
You seem to be misunderstanding what I was saying (or I wasn't clear). I am not at all saying we shouldn't have laws - we absolutely should have laws and rules. I am not saying that we should sue companies based on unstated ethics or morals (how would that even work anyway?). All I am saying is we should, as a society, have a better attitude and higher standards than stuff like greed is good, the sole purpose of a business is make profit, even at the expense of everything else etc etc.
I fully understand this sounds idealistic and maybe it is dumb to expect people to do better, when much of humanity is trying to do the minimum and get the maximum in return.
Humanity isn't that selfish. It is idealistic, but even having the conversation about why is the bare minimum of ethics.
The converse argument - zero ethics except THE LAW - is infuriatingly common. Is it okay to murder so long as the act of murder is technically legal? Assume perfect proof: intent, direct action, an indefensible confession to the murder. But also 100% legal, breaking no law. Would society find that acceptable? Even for those that used the loophole?
It seems like folks want to live in a 0 common ethical baseline reality. We're discussing the middle ground, and whether Facebook is wrong - not whether they will get away with it. They will. And just because they get away with it does not make it right. Can we stop with the definitional arguments about legality or the accountability of large corporations for a moment?
Is knowingly proceeding with a damaging action acceptable? One could even argue social media isn't damaging - that the study is wrong and Facebook paid for it. Not me. But at least that's not this manifest-destiny morality bullshit.
>It seems like folks want to live in a 0 common ethical baseline reality.
We do live in a world with zero common ethics, except where those ethics pertain to the laws of physics. Anything else is determined by societal dictate, by way of law or cultural fiat. If Facebook were in Saudi Arabia, there wouldn't be a rainbow flag filter, and accounts would be shadow-banned for any mention of Khashoggi's murder.
You cannot follow the maximum standards, because you only have so many resources. We can sort our 'recycling' and go through 'security theater' at the airport, but those involve trade-offs against real care for the environment or real security.
I don't mind when wealth and education allows us to voluntarily do better or more. But there's danger in punishing people using hindsight.
There is another aspect about how new the industry is. It will be some time before we have well-considered laws about things like digital privacy. Tech companies can either advocate for good laws based on their advanced knowledge or be greedy and exploit/promote lackluster laws.
Treat addictive social media companies like addictive cigarette companies. Let's see some huge warning labels about how mentally harmful it is to continue scrolling on Facebook, right on the first result, where it's unavoidable to see. Let's tax the hell out of social media companies to generate local revenue, just like sin taxes. It won't be a huge change, but it will be a great starting point, and it will come with revenue that can potentially fund mental healthcare programs for people damaged by these companies.
> Let's tax the hell out of social media companies to generate local revenue, just like sin taxes.
Very interesting idea, actually. There is evidence Social Media causes harm to some individuals' mental health (in a widespread manner causing some measurable societal harm), so a proposed tax on all social media companies with revenue going towards mental health programs seems worth exploring.
Generally I'm not much in favor of implementing new taxes (would rather close existing loopholes) but if implemented reasonably and backed by scientific evidence this seems valid.
That's because, so far, they've managed to deflect, deny, and discredit research and critics pointing out exactly how social media uses things like variable rewards in the same way as slot machines use them to keep gamblers pulling the lever. They do this using tactics developed by the tobacco companies to fight findings that smoking causes cancer and other harms and refined by the fossil fuel industry to prevent action on global warming.
I agree with you but a lot of the analogies and metaphors here are insufficiently subtle.
FB in some sense, but not entirely, is a form of speech, no better or worse than Grand Theft Auto or the National Enquirer. That's how I thought of it ten years ago.
Now that it is in our pockets nearly cradle to grave; a monopoly; and dependent on minutes of engagement rather than subscriptions -- it is a different animal altogether.
Yet with all those things we have laws and regulations and even restrictions for young people explicitly. FB is the wild west on the other hand and constantly lobbies to keep it that way in terms of how regulators see it.
Yes, that's the buried lede. Those are all things which you need to be old or mature enough to use responsibly - they make demands of experience and impulse control you develop as adults.
Meaning that blocking social media for kids and teens is likely on the anvil at some point.
> Uhm a cigarette you cannot change the ingredients of, it's tobacco.
You can soak the tobacco in a solution which contains additives, such as more nicotine, which is exactly what cigarette companies have done in the past (and not just to the tobacco - the filters and the paper as well).
The parallel here is filling people's feeds with divisive political news and posts, even when they have tried to opt out.
The point is that tobacco itself is a carcinogen; you cannot make a cigarette not cause cancer, because at the very least it needs to burn tobacco.
A social media website does not need doom scrolling or private feed algorithms; you can change how it works instead of adding a useless banner.
1) What would you expect be implemented to reduce/eradicate doom scrolling?
2) What would making the algorithm public do for us? I'm not an ML engineer, but presumably their algorithm isn't just an algebraic equation where x is how toxic the post is, y is how inflammatory it is, and z is the number of kids who will think harder about suicide because of the post.
Maybe I'm just super naive and that _is_ how Facebook made their algorithm, but my understanding is that the algorithm is a little more of a black-box and is a little abstract. How is a lay-person supposed to evaluate something like that?
The inputs to these algorithms are usually human-understandable and quantifiable signals like likes, text sentiment, maybe engagement history -- and the output is probably a score that can be ranked. Ultimately, though, even if the algorithm is a black box (it's entirely possible it's not ML-based!) we can still evaluate it in a lab environment.
Some of the signals might be generated by ML also, like photo labels, but ultimately these things are very understandable if you have the model and data.
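As a purely illustrative sketch (the signal names and weights below are invented, not Facebook's), the core of a feed ranker can be as simple as scoring each post on its signals and sorting, which is what makes lab evaluation feasible even when some signals come from opaque models:

    # Hypothetical feed-ranking sketch; signals and weights are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: int
        likes: int
        comments: int
        predicted_engagement: float  # e.g. an opaque ML model's output in [0, 1]

    WEIGHTS = {"likes": 0.2, "comments": 0.5, "predicted_engagement": 3.0}

    def score(post: Post) -> float:
        return (WEIGHTS["likes"] * post.likes
                + WEIGHTS["comments"] * post.comments
                + WEIGHTS["predicted_engagement"] * post.predicted_engagement)

    def rank_feed(posts):
        # Even if predicted_engagement is a black box, rescoring the same posts with
        # different weights shows what kind of content each variant pushes to the top.
        return sorted(posts, key=score, reverse=True)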
I don't want them to do anything regarding doom scrolling, it was just an example and came from another user.
I do want them to publish their algorithms and moderation logs so we have insight into how they are serving and moderating content.
I don't care about organic user content, I do care if FB is pulling the strings to make it either more salacious or being biased in one way or another.
I also care if they are banning certain users or content but not others.
Sorry for being pedantic, but you can absolutely change the ingredients of a cigarette. There's a ton besides the tobacco. And you can breed different strains of tobacco to have more or less of some chemical.
Smoking was happening in the 1800s. Lung cancer rates didn't shoot up until the 1900s; it was rather rare before then. This is around the same time that tobacco companies figured out they could soak tobacco in ammonia. This allowed for inhalation into the lungs (e.g. it sucks to inhale a cigar deep into your lungs). It also made cigarettes much more addictive, so people smoked way more and inhaled into the lungs. That's about when lung cancer stopped being so rare.
Yes, cigarettes cause cancer, but to say that it's because they burn tobacco is missing a big part of the story.
That was probably because smoking was not common outside of wealthy men during the 1800s. It was not widespread among the general public until after the world wars, thanks to mass-produced cigarettes (which weren't around until the late 1800s) being added to rations. The smoking rate increased 350% after WWI and has been high ever since. The US government didn't stop issuing cigarette rations to soldiers until 1975. Lung cancer rates have followed smoking rates in lockstep; it's not really that smoking suddenly became harmful. It always was - it just wasn't common to smoke, and even among those who did back then, it wasn't common to smoke very much at all, and certainly not around the clock (kinda like hookah users today).
My Nintendo DS from 15 years ago gave me an eye-strain warning every time I started it up, and it doesn't always cause eye strain - only misuse does, and I know that thanks to the informative banner.
I love that Nintendo is very aware of the potential negative effects of their products and games and tries to inform users / mitigate.
Even when it comes to encouraging positive play between users - in the new Pokemon MOBA (a genre known for its toxicity) there's no text chat, only communication via a few emotes you can show. Some of their decisions make for arguably worse games for "hardcore" gamers (like the way they rank users in Smash, or how they focus on more casual-style in-game tournaments or make matchmaking harder), but they sacrifice that in favor of a more positive general experience, which is especially important since children play their games.
I thought PictoChat was great too, and a lot of fun. They could have opened it up and made it into a global network, but the beauty is that it operated on local networks, so it was more of an in-person social network - plus there was no way for advertisers and commercial companies to break in.
I remember PictoChat - so many dicks and graphic drawings sent to each other in junior high. The sensitive world today would have had a field day with that.
This part of the thread went pretty off topic but I like it! Pictochat was certainly ahead of its time, wish we stuck to things like that.
Moreover, warnings are useless if people can't vote with their feet. So if you want to actually effect change in the dynamics of the market, you need to make services compete on quality and value to the customer rather than engaging in a scramble to accrue insurmountable network effects and lock-in.
That means mandates for data interoperability. Sadly, I have no idea how to implement that in a way that doesn't utterly stifle innovation by ossifying what sorts of data models social media is allowed to have. But at the very least we could create a sort of interoperability minimum that prevents you from locking up things like photo albums or people's "social graphs."
Over the longer term I'd like to see some kind of disentanglement of the protocols, standards, and data models from the front-end clients. It's obviously a lot more complicated now, but in the same way that you could access AIM, ICQ, GChat, and a bunch of other stuff from a variety of chat clients it would be good to be able to do this with everything social. Hell, ActivityPub basically tries to do this now so it's not impossible.
What is the appropriate middle though? Think about alcohol culture. Should we ban beer commercials on TV? Only allow beer commercials with talking frogs rather than attractive young people having fun?
I'm honestly baffled over why beer commercials are considered socially acceptable - but then again I think that advertising (in our modern interconnected world) only ever serves to drive overconsumption. If you want a beer - you go to the bevy and pick out a beer you'd enjoy... if I'm watching TV and the TV tries to make me want a beer - that's not a good thing.
Good advertising[1] is limited to making sure your product is visible in comparison to competitors - having shiny cereal boxes is something I find pretty meh, but in the cereal aisle you're dealing with someone who wants to buy some kind of cereal and you're trying to convince them to buy yours. TV Advertising drives up demand for products which, by definition, means we're consuming more of that product than we otherwise would... that's great for business... and it's also great for the obesity epidemic.
1. What I'd consider to be ethical advertising, but that's like my opinion man.
I agree, but I'd imagine the beer companies would argue, against your substitution standard for advertising, that any drink or even any consumable product you can put in your mouth is a competing product. So then no drink commercials at all, and you've reduced the capital available to fund TV, and you have a domino elimination of economic activity.
I mean, ideally we could allocate all the capital we put into manufacturing, selling, and consuming beer into fitness or math education or something more harmonious with wellness and human achievement; but hey, plenty of great scientists and inventors love beer.
What is the specific harm involved here that is deserving to be taxed?
How would we measure this harm in order to know how much to tax a given company?
Should other causes of this harm be taxed/penalized as well? If not, why?
For instance, if the harm in question is some people feel varying degrees of worse after using a given product, is there any limit we as a society should set on penalizing the cause of the harm?
Should people or entities who say things that make people feel worse be fined/prosecuted by the law? If I feel worse (let's call this 'trauma' or 'anxiety' or 'depression' or 'literally shaking' or 'panic attack') after reading a book or reading a news site, should I have standing to sue the creators and medium which presents said content?
You tax Facebook but allow it to operate however it wants. Facebook is then incentivized to double down on its algorithms - like tobacco companies using chemical and biological techniques to make cigarettes more addictive - in order to regain the lost profits.
Then you can double down on the taxes you levy against them if they begin harming more people, no? The idea is the cost of doing bad business will eventually be too much to make it worth doing that sort of bad business. Same idea with carbon taxes where the costs scale to damage and incentivize shifting to good behavior rather than doubling down on bad behavior. And even with cigarette companies doubling down, far fewer people smoke today and die of lung cancer than 50 years ago, so this stuff works on the whole.
That definitely isn't what happened with alcohol or tobacco! Instead you end up with a significant enough amount of money going to the government that the government now ends up protecting those industries to an extent - ensuring lower-priced competition (e-cigs, moonshine) gets stomped on and the market gets protected rather than eliminated or reduced too much.
Warning labels won't be of much use; as individuals, most will ignore them, believing in their own prowess to discern truth.
Taxing all social media, or all media, may have interesting implications, as this again will reduce profit for all and give some revenue to governments without making any actual change. Also, people making cooking/educational videos on YouTube may resent having to pay a sin tax.
It doesn't notify you when someone responds to one of your posts. It doesn't send you any nagging emails (indeed, doesn't even require an email to sign up). It has the noprocrast setting to let you set limits on your own usage. It doesn't (afaik) try to optimize for engagement - dang tries to maintain civil discourse as much as possible.
You are able to consume both tobacco and alcohol (let's not tangent into a drug legalization discussion). Tobacco and alcohol cause measurable societal harm and measurable costs to the state - are you implying it's unreasonable for states to tax these goods for those reasons?
Generally speaking I'd rather reduce taxes but I fail to see what's wrong with e.g. an alcohol excise tax going towards rehabilitation and/or highway safety programs. "Sin tax" is just a colloquial name for an excise tax, which a state has every right to enact.
> Tobacco and alcohol cause measurable societal harm
And if I choose to smoke in the privacy of my own home (or yard)? What societal harm am I causing?
As for alcohol, the societal harm caused is a laundry list of already illegal behaviors that are illegal regardless of alcohol's involvement with the exception of sin tax avoidance.
Why not outlaw the societal harm instead?
> e.g. an alcohol excise tax going towards rehabilitation and/or highway safety programs
Both of those seem like good things regardless, don't they? Why do we need a special tax on alcohol for things that are generally good? It's not like people who consume alcohol are the only ones who need rehab, or the only problem with highway safety.
Does the tobacco tax go toward lung cancer patients? It actually goes towards funding campaigns that overstate (i.e., lie about) the dangers of smoking, to the point that people vastly overestimate the dangers of smoking [1].
> "Sin tax" is just a colloquial name for an excise tax, which a state has every right to enact.
Of course it's legal, it's just garbage policy. Sin taxes come from the pairing of politicians wanting more money with pearl-clutching interest groups pleading to think of the children.
Unfortunately it's not so simple. An individual's smoking and alcohol use can and does harm others, and the state levies excise taxes for that reason.
Another example is driving a car, which results in thousands of fatalities and many more injuries daily. Not to mention environmental impacts which affect others. The state chooses to require drivers to have insurance and their cars to pass smog tests, rather than outlawing driving.
> An individual's smoking and alcohol use can and does harm others, and the state levies excise taxes for that reason.
Smoking and alcohol use can also not harm others. Should those who smoke and drink responsibly be held responsible for those who don't? How does the tax ameliorate those harms?
For everyone responding that smokers cost the government money: it is actually the opposite, in that they save the government money because on average they die sooner. From the Manning study: "In this analysis, the federal government saves about $29 billion per year in net health and retirement costs (accounting for effects on tax payments). These include a saving in retirement (largely social security benefits) of about $40 billion and in nursing home costs (largely medicaid) of about $8 billion. Costs include about $7 billion for medical care under 65 and about $2 billion for medical care over 65; the remaining $10 billion cost is the loss in contributions to social security and general revenues that fund medicaid. "
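Just to make the quoted figures add up explicitly (these are the study's own numbers, rounded, in billions of dollars per year):

    \underbrace{(40 + 8)}_{\text{savings}} - \underbrace{(7 + 2 + 10)}_{\text{costs}} = 48 - 19 = 29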
Presumably COVID also saves the government money, then? It mostly kills the old who have already paid into the tax system their whole working lives and are now drawing from it. And it mostly kills the chronically ill who need more tax support than they contribute. It seems terribly cold and callous to look at it this way though, e.g. when a son is holding his mom's hand in the hospital who is dying of lung cancer, to go up to the son and tap his shoulder and whisper, "Hey kid, cheer up, uncle Sam saved $8 bil on medicaid nursing home costs 'cus mommy here couldn't stop sucking nicotine sticks."
> Presumably COVID also saves the government money, then?
It most certainly does. The retort your parent made is for those who make the argument that a tax is necessary because X (smoking, in this case) costs the country economically.
If you want to make the purely _economic_ argument, it's a benefit to the bottom line.
That's not what a sin tax does. You are still free to smoke cigarettes or drink alcohol, and were we to tax social media usage, you would still be free to use or not use that.
But a sin tax ostensibly accounts for the economic externality*. We know that cigarettes impose a cost on society beyond the individual smoker. I'm all in favor of making people pay for things that we know cause damage to society more broadly. And I hardly think it's controversial that social media is in many aspects harmful to society.
*Sin taxes are technically different from Pigovian taxes, but I, and I think most people, tend to use the terms interchangeably.
> We know that cigarettes impose a cost on society beyond the individual smoker
What's that cost?
From what I've read, all the economic costs smokers impose on society are more than made up for by their dying early; they actually cost less [1]. I guess everyone should smoke to save the state money!
Neither you nor the government/state decides what you get to do with your mind. An advertising company decides what to do with it and can manipulate it however it decides best benefits itself. Not you, not society: Facebook, and what makes Facebook the most money.
It's not the state "deciding"; it's the state requiring compensation for the negative externalities created by the product. You're more than welcome to smoke cigarettes if you so choose. But that decision isn't made in a vacuum, and it impacts the rest of us in the form of increased public health burden, insurance costs, secondhand smoke, etc. A "sin tax" serves not only to discourage the antisocial behavior (we'd have a big problem if everyone made the same choice) but also to make you pay your fair share of the costs of your decision.
Curious, do you not wear seatbelts too? Opt for asbestos insulation since it's better than anything on the market today? Plumb your home with lead since it's more durable and flexible? Use leaded gas because it's better for your older engine?
The state acts on the collective when the public is not making good decisions for themselves and is causing net harm to themselves, usually with the public paying the price. Sometimes that's overt, as with death rates from accidents without seatbelts, or cancer from asbestos exposure. Sometimes it's less overt, like the behavioral issues, increased incidence of mental illness, and crime rate increases from leaded pipes and gasoline.
I'm willing to bet social media causes net harm. It hasn't enabled communication that wasn't possible before; if you can get access to a Facebook account, you already have email and access to IRC. But it has probably cost trillions in productivity from people staring at it so much during all their idle time, plus the cost of treating mental health issues that wouldn't have cropped up without toxic social media culture.
I say we have these companies pay for these externalities if they are forcing us to pay for them otherwise. By not passing a tax on externalities like this, the state is deciding that I need to pay for facebook's ills on society whether I use the service or not, which should anger you as a libertarian as much as it angers me as someone on the left.
Not only is there the age restriction, but there are also restrictions meant to curb at least some abuse. Being drunk in public is a crime, establishments technically aren't allowed to overserve patrons who are very drunk, in the worst case you can be tried for manslaughter if you force someone to overconsume and they die, etc.
I wear my seatbelt, I don't smoke, I don't drink and I'm vaccinated.
Everyone keeps talking about these "negative externalities" without being specific. Why not just make the societal harm illegal and let people hurt themselves without buying permission from the government?
We require driving licenses, age restrict operation of vehicles, require vehicles to operate within parameters (speed limits, gross vehicle weights) and according to standards (traffic signals and markings), and prohibit operation while under the influence of decision or reaction-impairing substances.
Because these are all statistical precursors to accidentally killing someone with a car.
Texting while driving, while illegal (everywhere by now, I assume), causes more accidents than driving under the influence (both in total numbers and, apparently, per capita). Should we tax text messages?
So you get to ignore all the responsibilities that come with your rights so you get to clog up our hospitals with your bad decisions?
How about you take full responsibility: you get to not put whatever in your body, and you agree never to take an ambulance ride or be treated by a hospital.
Don't want the vaccine? That's fine, it's your right. But now when you can't breathe, nobody's coming to help.
> So you get to ignore all the responsibilities that come with your rights so
Absolutely not. I know lots of people who manage to drink alcohol responsibly. They never drink and drive, don't regularly overindulge, and it makes their and their peers' lives _better_.
What negative externality are they paying for with alcohol taxes?
> How about you take full responsibility: you get to not put whatever in your body, and you agree never to take an ambulance ride or be treated by a hospital.
> Don't want the vaccine? That's fine, it's your right. But now when you can't breathe, nobody's coming to help.
If I pay for health insurance, I'm already taxing myself in this instance. It would be perfectly reasonable for a health insurance company to offer incentives for people to be vaccinated just like they offer incentives to non-smokers.
If the government wants to start providing that healthcare, then they can have a say in the cost of poor health decisions.
I don't think number of users is the metric that really matters. I care a lot more about market cap because money is what gives corporations greater leverage to do bad stuff.
Public algorithms, or at least some 3rd party review. Ban infinite-scroll on social media platforms. Require feeds to be configurable (users can set to "newest first" or "top picks" or whatever else). I'm sure I could come up with more, that's just off the top of my head.
These seem like awkward things to encode into law in a durable way. Laws are long-term blunt instruments, banning something like infinite scroll will have all kinds of unintended consequences.
That's true - these things might be better implemented as regulations out of the Executive branch - but that would still require legislation authorizing somebody to implement the regulations.
State governments are much more interested in participating in the cable news culture wars to pump up their next-election numbers than they are in actually governing. I doubt it.
Have you ever watched a teenager (or addicted adult) scroll through their IG feed? It's disturbing. They just scroll and scroll and scroll, waiting for the tiny little dopamine hits. I don't know if a "Next" button fixes it completely, but it almost has to be better, even if only marginally so.
> I would rather take the stance of feed algorithms and moderation logs be PUBLICLY available. Transparency instead of censorship.
I think that removing CDA Sec 230 protections for algorithmically-curated feeds is the answer. It's one thing to have a basic FIFO feed, it's another to hide or reveal content based on your own internal engagement/revenue targets. At the point where you're crafting bespoke engagement-maximizing feeds, you're creating a gestalt creative work that has a life of its own and that is no longer merely a passthrough for the works of users of the platform.
In other words, consider Facebook's curated feeds as Facebook's speech, not only the speech of their users, so Facebook faces liability for that speech.
I agree. Especially since political ambition won't be to curb misinformation, it will be to get control about which misinformation will be spread. Harm to users will be an excuse and like misinformation it will be quite difficult to quantify.
I considered the widely-used 'hot' algorithm and I don't think that really qualifies as a 'creative work' on the part of the site. The 'top from last hour/day/month/year' and 'hot' algorithms are really elementary sorting methods that are in the same league as FIFO.
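To make concrete how elementary these sorts are, here's a minimal sketch in Python of a generic 'hot' score and a 'top from last X' sort. The formula and field names are illustrative assumptions, not any particular site's actual ranking:

```python
import time

def hot_score(votes: int, created_at: float, gravity: float = 1.8) -> float:
    """Generic 'hot' score: votes push an item up, age pulls it down.
    Illustrative only -- not the real formula of any specific site."""
    age_hours = (time.time() - created_at) / 3600
    return (votes - 1) / (age_hours + 2) ** gravity

def top_since(posts, window_seconds: int):
    """'Top from last hour/day/month': filter to the window, sort by votes."""
    cutoff = time.time() - window_seconds
    recent = [p for p in posts if p["created_at"] >= cutoff]
    return sorted(recent, key=lambda p: p["votes"], reverse=True)

# Both operate only on (votes, timestamp), with no per-user modeling --
# which is what puts them in the same league as FIFO.
```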
> Who gets to decide what algorithms are simple enough to be legal? You? Why?
I'm not talking about the legality, I'm talking about the liability. If one user posts defamatory/libelous speech and Facebook decides to blast it to 1 billion users because their secret sauce algorithm found that it really cranks the revenue dial, Facebook should share some culpability in that defamation lawsuit. A basic 'hot' algorithm that's not designed by squadrons of PhDs running machine learning on user behavior analysis isn't really the same thing, culpability-wise, imho.
> basic 'hot' algorithm that's not designed by squadrons of PhDs running machine learning on user behavior analysis isn't really the same thing, culpability-wise, imho
A "hot" algorithm can blast high engagement libel in front of people as well as a fancy algorithm.
If the "hot" algorithm and the fancier algorithm are both content neutral, on what basis can you distinguish the two as a matter of law?
Does the hot algorithm become illegal if a PhD implements it? I'm at a loss about what distinction you're actually trying to draw.
Your post, like many others on this thread, isn't articulating exactly what about FB's conduct you find objectionable?
Illegal, liable, doesn't matter --- you want to use the state to drive certain kinds of ranking off the internet.
Fine. What, precisely, is the line between algorithms acceptable to you and algorithms not?
What is the precise conduct that would make liability attach to a ranking algorithm? You can emote, but you can't describe what exactly it is you would turn into a law.
> You can emote, but you can't describe what exactly it is you would turn into a law.
I thought I made it pretty clear from the outset I was talking about removing CDA Sec 230 protections for sites using bespoke (i.e. proprietary) curation algorithms for their feeds.
No, actually, it's not any of these things. It's on HN that dang applies special rules to your comments if he doesn't like you. FB's algorithm is impersonal.
Your list of criteria is legally inactionable. Neither you nor anyone else can write down what precisely it is that FB should be prohibited from doing. All I see is a bunch of unproductive rhetoric about how this is bad or that is bad. Blah blah, filter bubbles, moderation, whatever.
What is the exact definition of the criteria you would use to prohibit ranking algorithms?
> Why the hell shouldn't I be able show ads to people who want to see them?
Nobody's stopping you. If you want to show ads to people interested in football, post ads on a football site. If you want to show ads to people interested in horses, show them on a horses site. If this type of non-invasive targeting to display ads doesn't make you as much profit, well that's a you problem.
> HN is an algorithmic feed.
You seem to be struggling with the idea of a 'bespoke' algorithm where everyone's algorithm is different, informed by an internet-wide surveillance system. This is the difference between HN and FB.
Targeted ads. Targeted. As in specifically and exquisitely targeted to individual people based upon data gathered from surveilling everyone.
Still legitimate, and maybe even encouraged, are contextual ads, banners, sponsorships, branding, swag, paid placement, influencers, misc calls to action, etc.
Please rest your concerns. Late stage capitalism will continue unimpeded after Facebook's wings are clipped. Social media will revert to less noxious enterprises, like metafilter and craigslist.
> HN is an algorithmic feed.
Opt-in. Transparent.
What is FB's equivalent to HN's /newest?
I'm not crazy about HN's ranking. It's pretty good, non-toxic, seems fair. But someone other than me is making judgement calls.
I much prefer client side (user controlled) filtering and sorting, like with any generic RSS reader.
Repeating myself: All HN visitors see the same feed. A crucial qualitative point you pointedly do not acknowledge.
Ad targeting is just a bugbear for a section of the tech activist crowd. There is nothing fundamentally wrong with a site like Twitter noticing that I tend to post a lot about cats and showing me cat ads as a result. No, it's not self-evident that targeting is bad, and no, behavioral targeting doesn't require "surveilling" everyone internet-wide.
> Late stage capitalism
I don't subscribe to the neo-Marxist worldview that uses this "late stage capitalism" frame all the time. Capitalism is the natural order of the universe, not some temporary aberration on the way to your Utopia.
> Opt-in. Transparent.
Not transparent, and hardly opt-in: HN feed ranking is the default.
> I much prefer client side (user controlled) filtering and sorting, like with any generic RSS reader.
Okay, so people seeing different feeds is good...
> Repeating myself: All HN visitors see the same feed. A crucial qualitative point you pointedly do not acknowledge.
... and now people seeing different feeds is bad.
Make up your mind.
Different people should see different feeds because they're interested in different things. There's nothing sinister or nefarious about showing people stories about topics that interest them and not showing them stories about topics that don't.
I think even you'd agree that someone should be able to opt into seeing stories about "cooking" and opt out of stories about "motorcycles". Okay, that's good, right? Now what's wrong with using ML to infer, based on what someone reads, that he's interested in "cooking" and not "motorcycles"?
This interest inference is what FB is doing. There's nothing bad about it.
I maintain that the criticism you and others are lobbing at algorithmic feeds is logically incoherent, emotionally rooted, and unworkable as public policy.
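To be clear about what I mean by interest inference, the simplest form of it is just counting what someone actually reads; a toy sketch, with made-up topic labels and data:

```python
from collections import Counter

# Hypothetical reading history: one topic label per story read.
reading_history = ["cooking", "cooking", "gardening", "cooking", "gardening"]

topic_counts = Counter(reading_history)
inferred_interests = [topic for topic, _ in topic_counts.most_common(2)]
print(inferred_interests)  # ['cooking', 'gardening'] -- and notably not 'motorcycles'
```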
> This interest inference is what FB is doing. There's nothing bad about it.
And circling back to the original article, there is in fact something bad about it. One guy started out with a site to rate the attractiveness of women at Harvard, and now my mother is probably going to die because his customized algorithm found that showing her lots of anti-vaccine misinformation would maximize his profits.
I think that's the right approach. Legally you could require every social media company that collects and sells data on its users to advertisers to allow the users to access their internal algorithmic interface (for their own account).
Now, what controls are on the internal algorithmic dial? Apparently that's top secret, but a legal requirement to expose the interface to the users seems reasonable.
Note that this might not affect what ads you get served (that seems more on the private business side, although banning prescription pharma ads makes sense), but it would affect what shows up in your feed, what content you get served, etc. You could write your own exclude lists, for example (i.e. if you never want to see content from MSNBC, FOX, or CNN, that would be your decision, not the algorithm's). A rough sketch of what that could look like follows.
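Just to illustrate the idea, here's a minimal sketch of a user-owned exclude list applied to a feed. The field names and structure are hypothetical, not any real Facebook interface:

```python
# Hypothetical feed items; "source" and "text" are made-up field names.
feed = [
    {"source": "CNN", "text": "Breaking news..."},
    {"source": "A friend", "text": "Vacation photos"},
]

# The user's own exclude list -- their decision, not the algorithm's.
my_excluded_sources = {"MSNBC", "FOX", "CNN"}

def filter_feed(items, excluded_sources):
    """Drop any feed item whose source is on the user's exclude list."""
    return [item for item in items if item["source"] not in excluded_sources]

print(filter_feed(feed, my_excluded_sources))
# [{'source': 'A friend', 'text': 'Vacation photos'}]
```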
If you get too big, you can't buy your competition (e.g. FB buying IG). Or if you get too big, you have to open your stuff up like email does. Or if you lie to congress, you get penalized. Or if you get too big, you have to make your algorithms publicly available.
I think GP is referring to the fact that email overall is a system that is based on public standards and open to new entrants. You can start Hmail.com if you want, and plug into the existing email eco-system as a new competitor very easily.
The social media ecosystems aren't like that. You can't be a chat provider and plug into FB Messenger; you can't plug into Twitter, etc.
There is an open social media eco-system called the fediverse (for its federated nature), in which Mastodon is the best-known player. But it's gotten very limited traction, because of the network effect that keeps people on FB and Twitter. No such effect keeps people on Gmail.
Email, not Gmail. I can email people from my provider even if they use other providers, including people who self host. And I can get email from them too.
I would ban algorithmically targeted media -- i.e. no personalized feed based on an "engagement" algorithm; for social media, you would just see a chronological feed of posts from the people you follow (a sketch of what that means is below). This is the most addictive and radicalizing part of social media - and the most lucrative. Much like the nicotine in Big Tobacco's case.
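For contrast with engagement ranking, a purely chronological follow-feed needs nothing beyond a timestamp sort; a minimal sketch, with made-up data shapes:

```python
from datetime import datetime, timezone

# Hypothetical posts and follow list, just for illustration.
posts = [
    {"author": "alice", "ts": datetime(2021, 10, 4, 9, 30, tzinfo=timezone.utc), "text": "hello"},
    {"author": "bob",   "ts": datetime(2021, 10, 4, 11, 5, tzinfo=timezone.utc), "text": "world"},
    {"author": "carol", "ts": datetime(2021, 10, 4, 10, 0, tzinfo=timezone.utc), "text": "not followed"},
]
following = {"alice", "bob"}

def chronological_feed(all_posts, followed):
    """Posts from followed accounts only, newest first -- no engagement scoring."""
    mine = [p for p in all_posts if p["author"] in followed]
    return sorted(mine, key=lambda p: p["ts"], reverse=True)

for p in chronological_feed(posts, following):
    print(p["ts"].isoformat(), p["author"], p["text"])
```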
Could I write my own algo for personal use? Could I hire someone else to write that algo? Could I share it with others? Could I start a company to sell it? If it gets too popular, will you try to ban it too?
>> To clarify, can you specify exactly what law you would like made? What do you want to be done exactly?
Honestly, social media issues are for the most part a parenting issue. If you don't have access to your kids' phones, or know what platforms they are on and who they are talking to and what they're sharing, I'm not sure legislating social media is going to do much of anything. New platforms will pop up, more private networks will be started, and suddenly everything becomes too fragmented to really oversee.
I would create laws that have teeth and address issues like bullying, doxxing, SWATting, and other ways people weaponize social media against other people. You start to put some teeth into laws where people are facing serious consequences for bullying and pushing people to suicide, then you might see some changes.
> You start to put some teeth into laws where people are facing serious consequences for bullying and pushing people to suicide
Counterpoint: kids aren't all neurologically and socially developed enough to understand life-altering consequences for certain actions, and that's not their fault. Legal codes and law enforcement are too crude in most child-related cases, unless you're okay with incarcerating misbehaving children.
It's on adults to make sure things kids can reach are reasonably safe for—as well as from—them.
Counterpoint to your counterpoint: Almost all kids are neurologically and socially developed enough to understand they'll be picking up cigarette butts and cleaning graffiti on the weekend if they start bullying someone. It's quite common for American schools to enact "zero tolerance" policies for any physical altercation that, barring criminal charges, is the same punishment regardless of severity and irrespective of who the aggressor was. The policy is literally to give the victim the same punishment in the name of "fairness". Now there's no reason not to escalate and it incentivizes the victim to not report it. Even with the same punishment it's still skewed towards the bully as odds are, they're probably not going to care as much about a couple days of suspension and most bullies aren't going to be in any serious risk of expulsion.
School administrators don't care about bullying and teens being driven to suicide; they care more about liability than fairness. Taking the King Solomon approach in public schools is abhorrent and unjust. Kids do dumb things, but it makes it so much worse when incompetence and callous disregard create perverse incentives.
In my opinion, if social media is as important to society as it seems, there should be a government-funded social network for users and businesses - in the USA, something like NPR and PBS.
The problem is companies selling people's data and optimizing the algorithms on probability. Instead, what if everyone paid some taxes to have a social network which helps people interact and businesses promote themselves? You could get rid of the ads AND the algorithms. Let users customize settings which dictate the algorithm.
You can be sure the incumbents would fight tooth and nail to make this look like a very unattractive idea and lobby heavily to make sure it never happens.
> I would rather take the stance of feed algorithms and moderation logs be PUBLICLY available. Transparency instead of censorship.
Ok, let's say that's now the case. The FB source is now open. What changed? Any negative consequences are still occurring, and if anything, we've just given bad actors better visibility into what to exploit.
A lot of these issues seem difficult to regulate but one that seems more realistic is usage by minors.
What if social media platforms required all minors to have their account associated with a parent account? The parent could monitor activity, institute time limits, etc.
Minors don't use FB much anyway. It's more TikTok now. And of course no minor would use an app monitored by her parents; she would immediately switch to another app.
Sorry, should have clarified: I was suggesting that if the government decides to regulate it should apply to all social media platforms, not just FB. Updated the original comment.
People seem to dislike this suggestion, but while I believe it is unrealistic to check for age on the internet, this idea has some merit.
Platforms that harbor minors and adults together will have to have different rules than platforms just for adults. But you also cannot make everything fit for kids. Government would actually try to do just that since ensuring safety here is their mandate. So a solution must be found. Normally minors should be supervised, but that is not trivial and you don't want constant surveillance.
Make spying on people illegal, even when a computer does it to billions of people rather than one creep doing it to one person. If you have to collect info about people to provide a product or service, make it strictly illegal to transfer or sell that info or anything derived from it. Don't like it, get into another business. No one's making you collect people's info. Yes, this should apply to e.g. credit card companies, not just big tech. This'd need some fine points hammered out (don't laws always?) but it's not that crazy.
Do something to make platforms responsible when their "algorithms" promote something. Not just hosting it, but when they promote it. Don't like it? Don't curate, then, or have a human do it so you're sure nothing you're deliberately promoting is shitty enough to land you on the wrong end of a lawsuit. "But how will tech companies show every visitor a totally different home page of content they're promoting (but in no way responsible for), and how will Youtube find a way to recommend Jordan Peterson and Joe Rogan videos next to every damn thing? How will tech companies make every part of their 'experience' algorithmically-selected, personalized recommendations of content they farmed from randos?" They won't, they won't, and... they won't. You're welcome.
Make data leaks so cripplingly expensive that no company would dare hoard personal data it didn't absolutely need to get by.
Force the quasi-official credit reporting agencies not to be so shitty. In particular, "freezes" should be free and should be the default, alerts for activity should be free, and access to one's own info should be on demand at any time, not once per year per agency. Or just outlaw the bastards completely, IDGAF.
I dunno, lots of things we could do to make the current personal data free-for-all less hellish.
> This'd need some fine points hammered out (don't laws always?) but it's not that crazy.
It sounds like you're suggesting GDPR style regulation. They're still figuring out how to enforce that but generally I support it. Too much money is against it to get anything passed in the US, though.
Another problem is that the US government seems to like when the tech sector gobbles up data on people. It gives them new powers for social control.
Freedom of speech also includes freedom from being compelled to speak of things you don't want to, so forcing companies to make their recommendation and moderation systems publicly visible would be even more of a free speech issue than expecting companies to moderate violent, hateful, or deliberately misleading content.
I absolutely disagree but I'm upvoting anyway because it's an argument I haven't seen before with regards to making algorithms public and god knows the discourse could use some variety.
That being said. No. This is no more a free speech issue than forcing food manufacturers to make their ingredients public.
Can you cite some case law that bears out this argument? While I agree that your point is true in the most general sense, we compel companies to make their internal information public fairly regularly via various mechanisms (admittedly, none of which are 100% analogous to the FB/social media situation).
It's astounding to me that too little censorship is characterized as "antithetical to our values", "highly dubious ethically", and worthy of potential legal sanction in the top-ranked comment on HN.
Is it too little censorship, or rather amplifying problematic things and suppressing healthier things because of perverse incentives? FB and Instagram timelines are not raw feeds from one's friends/follows. They are tuned by human-calibrated algorithms.
Kids need to eat vegetables and lean protein sources. But if school districts instead optimize for profit they may end up feeding the kids borderline poison like sodas and candy. When companies come to dominate a public space, like huge parts of digital comms, then maybe it's OK to demand more responsible behavior of them.
> Kids need to eat vegetables and lean protein sources. But if school districts instead optimize for profit they may end up feeding the kids borderline poison like sodas and candy. When companies come to dominate a public space, like huge parts of digital comms, then maybe it's OK to demand more responsible behavior of them.
Adults are not children and social media sites are not school districts.
The school district analogy also doesn't really hold up on its own terms unless you're talking about boarding schools, which you probably aren't given the term "school district". When I was a kid, I ate at least 2 out of 3 meals at home, and more often than not, I brought a sack lunch. I know that poorer kids rely on school lunches a lot more than I did, but that's still just one meal a day. My high school actually did have a Coca-Cola machine, but I think that's old enough for kids to start making some of their own life decisions, like whether or not to have a Coke with their lunch. I mean, high school is around the same time that students start planning for their future career and/or higher education, so if you can be trusted to decide between taking vocational classes and fulfilling college admissions requirements, I think you can also be trusted to decide whether or not to drink a Coke. 14 isn't that far off from 13, which is the legal minimum age to get a social media account.
Also, unlike going to school, nobody is forced by the government to spend multiple hours a day using social media. Of course we regulate schools. We also regulate prisons to make sure that prisoners are humanely treated, or at least we're supposed to. The better analogy isn't school districts but convenience stores, in an alternate universe where children under the age of 13 were prohibited from entering convenience stores and some people were complaining that still wasn't enough.
> Adults are not children and social media sites are not school districts.
> The school district analogy also doesn't really hold up on its own terms unless you're talking about boarding schools
Non-technical adults don't understand the minutiae of algorithms or the tuning of social platforms designed to manipulate them. These platforms control how over 2 billion people communicate. Once the networks become entrenched people can become essentially locked in, or risk becoming ostracized from support groups if they don't want to play along.
What people choose to "amplify" is none of the government's business. People are allowed to be wrong. Yes, even if you think it's about something really important.
What if it is not censorship but rather applying the same standards to the Facebook feed that we have for newspapers? If a newspaper publishes something false that hurts someone, that person can sue the newspaper, because editors should have caught the false information.
Facebook and other social media algorithms now optimize feeds for engagement even if the information is false or harmful. To me, the algorithms are the new editors. So people who are hurt by these algorithms should be able to sue the companies that run them.
If social media companies want section 230 protection, then they should not use any form of algorithms. Show everything without prioritizing anything.
Newspapers are a poor analog to social media for this exact reason. A newspaper is the same newspaper for everyone.
The better analogue to social media, at least from the user’s perspective, is direct mail. If someone mails out libelous political fundraising letters, the liability is on whoever wrote the letter, not the postal service. The only difference with social media is that social media has ranking algorithms, but that’s because the volume of social media far outstrips the volume of direct mail. Even Gmail uses algorithms to filter your various inboxes.
If social media worked the same way as direct mail, you’d basically be forced to dig through every social media post within your subscriptions at random to make sure you didn’t miss anything you wanted to see. Which means the primary effect of the algorithm is subtractive. The algorithm sometimes adds content you weren’t otherwise subscribed to, but most of the time, it works by hiding content you are subscribed to. The primary complaint seems to be that Facebook isn’t hiding enough content.
Finally, I think you’re understating the problem with mainstream media. Settling defamation lawsuits out of court, on the rare case that false and harmful news media actually crosses that line, does not even come close to undoing the initial damage. Especially since under US law, public figures—who are the primary targets of media coverage—have a much higher burden of proof to sue for defamation. (Whenever politicians propose reforming these defamation laws, these mainstream media outlets respond with passive-aggressive antics like printing “Democracy Dies In Darkness” on their mastheads. I bet Zuckerberg wishes he could get away with that kind of thing.)
Facebook is essentially a media company. All of the ad revenue and none of the regulations or responsibilities. Facebook may claim otherwise— but if it looks like a duck …
It's astounding that something like a chronological timeline free of Facebook's outrage algorithms is portrayed as censorship.
And honestly, the defense of Facebook's unethical behaviour follows a really simple pattern at this point: shift blame to the users and make it look like Facebook critics want "more censorship".
Facebook shareholders making a few dollars less (if that's even the case) is absolutely not my problem.
> make it look like Facebook critics want "more censorship".
Unfortunately a lot of them plainly do want more censorship. Worse, they want it for the whole net. Many people always advocated being careful about sharing your life with Facebook. They were completely ignored. Now that users have understood they have no control over their own data and the information that gets shared, they want to saddle the whole net with content regulations with the help of the state.
Facebook doesn't care much about ethics; that is true of companies in general. But the users certainly had a choice here. An exception might be people in the media, who have in the meantime also become dependent on Facebook.
One fairly common pattern seen is that companies develop in a nascent space where there were few rules and were therefore able to basically outrun regulation/ the law that moves very slowly. When that regulation eventually comes it ends up solidifying the monopolistic advantage by essentially creating a moat and closing the door on practices that helped create such growth in the first place. I think when stakes are that high, companies are generally rewarded and incentivized to be unscrupulous rather than virtuous, especially when the unscrupulous actors just become wealthy enough to buy out the virtuous ones.
I wouldn't be surprised if we are currently in the middle of a version of this regarding social media and how privacy of personal information is handled right now.
What would a developed civilization do? I doubt we would be able to prevent the "bubble up and close the door" behavior, so should it also ensure that corporations are regularly rotated (i.e. dismantled for others to take the space) so that only those which can succeed fairly within the current legal framework would survive?
This argument is brought up a lot, but it seems the lack of regulation hasn't really stopped FB and Google from monopolizing their markets anyway (or oligopolizing, if we think they're in the same market).
The internet was funny that way. In the case of Uber, the nascent space was virtual, but that was enough for them to ask for the ability to play in the old space with a fresh, empty rule set.
Not everyone has the same moral compass. It isn’t even clear that the whistleblower herself was guided by concern for society: she first went to the SEC with these complaints. That’s a weird place to go with concerns about social media’s impact on society. I wonder why? Perhaps…just maybe…it was because the SEC will give her 10-30% of any fines levied against Facebook, leading to a potential windfall of $1 billion or more to her personally.
It takes truly egregious behavior for society to agree that new laws must be passed to outlaw it. The current state of social media says much more about human behavior than Facebook’s behavior. Everyone here would also almost certainly reject the kinds of laws that would be required to make Facebook/IG a healthier place. They would likely involve serious privacy violations, just for starters. So given that legislation in this area has almost no chance of passing, it is unclear what the point of this is, other than a huge payday for the whistleblower.
How about: she went to the SEC because, for certain things, that is the right place to go? And the SEC would have to fine FB first, which would mean that FB committed illegal acts. That alone is behavior that needs to be encouraged; exposing corporate wrongdoing is a net positive for society.
The SEC will make extremely vague allegations that Facebook misled investors by not disclosing some of these reports. Facebook will settle to avoid further reputational damage, paying large fines without admitting wrongdoing. This woman will then buy an island to vacation on and a G650 to get there with. That is how 99% of these things play out.
I personally don’t believe that Facebook has done anything illegal here. That is not to say I don't think they have done anything wrong - their business, like many others, is morally bankrupt in some ways. But there is no codified responsibility for Facebook to do anything to cure the ills of social media. You don’t see casinos being successfully sued for causing suicides, bankruptcies, divorces, financial crimes, etc., but it happens every day. That’s because there is no law against being in a scummy business. Investors in such businesses know (or should know) what they are supporting.
Not that I disagree about the monetary award, but the SEC is one of the federal agencies that has actual teeth these days. Where else would she go, the FTC (ha!)?
It really bums me out to see these sentiments at the top of HN. What to do about "misinformation" is an interesting question for private actors to think about, if they wish, but what the government should do about it is not an interesting question. The debate has been had for a couple hundred years, already. It's over. One side already won.
Enron is another fascinating example; there is an interview with the former CFO where he talks about what he calls "legal fraud": practices that are highly dubious but technically not illegal [1].
Legal/illegal is only a binary variable in theory. In practice it's things that are clearly white, clearly black and lots of grey in between. There are many concrete examples regarding accounting rules mentioned in the interview.
The question is where does a court draw the line, and as parent rightly points out, sometimes code/case law changes after the fact.
I think the parent is making a point about definitions: that the word fraud can only refer to illegal actions by definition. I think this is an overly literal interpretation of language, though.
Facebook may not be doing anything illegal, but it is immoral. While morality is subjective, and not enforceable, the public needs to know what is happening, so they can make their mind about supporting a given company.
I agree; legislating morality has never worked. However, legislation to inform the consumer has worked.
Social media should be forced to inform the consumer when/how they're being targeted. When a user is shown 15 pieces of content it should be crystal clear the platform is trying to tease out an emotional response from them and not just showing them their friend's posts. Maybe a warning label like "This content was algorithmically curated to elicit the maximum emotional response from you".
"In response, Congress passed the Sherman Antitrust Act in 1890 which subsequently prevented the actions that Standard Oil had used to conslidate the market."
This tech company employee (aka the "Facebook Whistleblower") is refusing to share the documents she stole with the FTC.
Although she did share them with several attorneys general.
It appears she does not support antitrust inquiries. Heavy consolidation of "social media" with no meaningful competition is acceptable to her.
Needless to say, some would argue competition provides incentives for large players to improve their services.
Standard Oil reduced the cost of oil, delighted customers, and had already seen a massive decline in market share by the time the antitrust action happened. It was driven by people who couldn't compete.
As someone who has studied this fairly extensively, I believe this comment to be factually correct.
It also didn't hurt Rockefeller in the slightest. To the contrary, he actually became far wealthier post-breakup, possibly because all former business units became more efficient in the light of open competition.
Many of them live on today, such as Texaco, Chevron, and Mobil.
In general, trustbusting almost never actually works as planned, but it always seems like a good idea -- a desperate solution, perhaps the only possible solution -- at the time.
The only thing that tends to work is upstart competition driven by new technology that blindsides the older company.
When it comes to a monopolist, the one thing that we can say historically is that, "This too shall pass."
I was never aware that the purpose of anti-trust legislation was to hurt people (e.g. Rockefeller) or prevent them from making money. I thought it was to promote competition, which it apparently did.
>possibly because all former business units became more efficient in the light of open competition.
Then it sounds like it did work in the eyes of the people who wanted more efficient corporations (and thus potential savings to be passed down to them).
Illegality is dangerous to define and enforce when it comes to speech. Especially if Glenn Greenwald[1] is right that this is not an attempt to weaken facebook, but to commandeer its power to censor.
Facebook's Director of Policy Communications, Lena Pietsch, just called for "standard rules" for the whole internet, increasing the danger, because whoever got that power could widely censor unwanted popular dissent and counter-narrative viewpoints. Facebook already caused damage doing this by censoring the lab leak hypothesis, which delayed potentially lifesaving insights.
The whistleblower also has a political connection that should be investigated: the law firm representing the whistleblower also represents White House Press Secretary Jen Psaki, and the whistleblower's lawyer Andrew P. Bakaj[2] was the principal attorney representing the whistleblower who filed the initial complaint that led to the Trump impeachment process.
This is much more concise reporting on what is happening here and makes much more sense.
I disagree that hiring a lawyer must mean there is a political connection, but the way the media frames the issue is again predictable, and I personally would frame this as misinformation.
We also have seen no documents. Blindly believing the media isn't good advice if the problem is allegedly misinformation.
This is an interesting and somewhat irritating take by Greenwald. I'll just put a counterpoint to one point he made, where I was careful at the time to get a grasp of what was going on:
Greenwald [Today]> Their control over multiple huge platforms that they purchased enables them to punish and even destroy competitors, as we saw when Apple, Google and Amazon united to remove Parler from the internet forty-eight hours after leading Democrats demanded that action, right as Parler became the most-downloaded app in the country
Greenwald [Jan 12]> The day after a united Apple and Google acted against Parler, Amazon delivered the fatal blow. The company founded and run by the world’s richest man, Jeff Bezos, used virtually identical language as Apple to inform Parler that its web hosting service (AWS) was terminating Parler’s ability to have AWS host its site: “Because Parler cannot comply with our terms of service and poses a very real risk to public safety, we plan to suspend Parler’s account effective Sunday, January 10th, at 11:59PM PST.” Because Amazon is such a dominant force in web hosting, Parler has thus far not found a hosting service for its platform, which is why it has disappeared not only from app stores and phones but also from the internet.
This is kind of along the right lines but gets the details wrong. Greenwald is intelligent, insightful, has good sources and is not afraid to annoy people, so it is tiresome that he has a bad habit of fudging details to make a shiny claim that is not quite right. With another journalist, I'd think what happened is the journalist was under pressure, so wrote up a story whose details they didn't quite follow, but with Greenwald, I am not sure what is going on. My least unlikely guess so far is that he rewards his sources with sympathetic coverage, which in this case he gave to Parler executives.
Some thoughts:
The big truth: The triple whammy from three of the four biggest tech firms, all of which have sophisticated lobbying operations, did indeed effectively cripple Parler.
The biggest error: None of the three companies were in competition with Parler. In my opinion, the businesses of both Google and Apple would benefit from a strong rival to Facebook and Twitter; Amazon has a close relationship with Twitter, so this is a little less clear, but on the face of it AWS should normally benefit from hosting a client. This was pointed out by many commentators on the Jan 12th post, but he repeats it today.
Continued mythmongering: Greenwald's Jan 12th story essentially misses the backstory on Parler's bad relationship with Amazon, whose demands that Parler comply with the TOS had essentially gone unanswered for several weeks. There is competition in the hosting business (Greenwald's characterisation of Amazon's dominance on Jan 12th was itself an error), but I don't think any major hosting provider - e.g., neither Digital Ocean nor Hetzner has major lobbying operations - would have kept the servers up under the circumstances. I commented on this at the time on the HN thread of Greenwald's story - https://hackertimes.com/item?id=25759115 - to my knowledge Greenwald has not since given a more faithful account of Amazon's role.
Good point. The state doesn't decide what's "good" or "bad". We must not forget that.
Laws are useful but are an application of power. Power of who? Of the people? Of some bureaucrats? The answer will not be black and white in most cases, but what's important is not to regard any body that can legislate as a source of moral authority based on that power alone.
And one consequence is not to buy big corp arguments about evil practices "but it's legal!". Yeah, it's legal because it hasn't been outlawed _yet_, because of strong lobbying... b/c of whatever. Never relegate the moral judgement to just "it's legal/illegal".
Same case with Enron. After the Enron scandal, the Sarbanes-Oxley Act was enacted to strengthen financial reporting practices, along with a bunch of new modifications to accounting principles and guidelines (GAAP).
The SOX Act was considered to be the greatest and most radical update in accounting since the invention of the double-entry method. Governance and stewardship were theorized in accounting, but they were pushed into practice with the SOX Act.
Similarly we had the Dodd-Frank act post the financial crisis of '08.
Essentially, law is a post-facto phenomenon. We cannot be sure what to write into a legal framework unless there is evidence of the need for such laws.
Hmm ... I'm personally getting tired of this "well, it's currently legal" line. Law changes start with moral indignation. We are at that juncture now. Although the legality point is accurate, let's park it.
"In response, Congress passed the Sherman Antitrust Act in 1890 which subsequently prevented the actions that Standard Oil had used to conslidate the market."
This tech company employee (aka the "Facebook Whistleblower") is refusing to share the documents she stole with the FTC.
Although she did share them with several state attorneys general.
It appears she does not support antitrust inquiries. Heavy consolidation of "social media" is to her an acceptable status quo.
Needless to say, some would argue competition provides incentives for large players to improve their services.
I think she didn't really blow the whistle on anything yet. Since tech and Washington are close, I would expect access to said documents anyway, and they aren't available to the public. So blind trust is required; not a good recommendation if we're talking about misinformation.
The problem is, very often the law is unable to keep up, especially on technical issues. And even when it does, sometimes is way too late. Gates knew it when he asked to implement the AARD code - yes, they were sued, but they settled out of court and made the competitor's product irrelevant. A lot of Microsoft behavior in the 90s was just this: they could get away with it, but it simply didn't feel right.
I think a lot of people are choosing to ignore that a lot of companies have done things in the past that were not illegal at the time of action. However, those actions were later decided to be made illegal because the behavior was deemed to be antithetical to our values.
Which, on good days, is why we have legislatures. To make new laws to cover new situations.
Law is always lagging behind social norms, for many reasons.
Because of that, I don't think laws can change the world; it's the opposite: laws merely acknowledge the common rules of the majority.
Technology also changes the world, and we need time to figure out new rules to adapt. In the case of Facebook and other giants, some changes are clearly needed; at least there seems to be a growing consensus.
Laws can absolutely change the world (at least if we take the world to mean one country). Look at de-segregation. It's obvious that the world was segregated before the de-segregation laws were passed. Of course, the laws were passed because there was ample enough support for them, but that doesn't mean that the laws only acknowledged an existing state of affairs. Even more so, the laws themselves helped accelerate the perception of segregation as evil among the majority of the population, whereas before it was just a regular part of life to many (many on the good side of the segregated world, of course).
If Congress invites you to speak without fear of arrest and the mainstream media hold you up as a darling, then you are NOT a whistleblower. You are doing the bidding of power... If you are exiled to Russia (Snowden) or locked up without trial like Assange, THEN you are a whistleblower and dangerous to power. This chick is a shill.
Bingo. The whole thing smells fishy. Of course, the headlines were talking about how evil Facebook is because it makes kids feel bad. But digging deeper, you can see exactly how this is meant to play out: More censorship and control by the NGOs and "Fact-Checkers" (since the gov't can't publicly dictate what content Facebook is censoring). But make no mistake, the same players that are calling the shots in DC will now be ensuring the public doesn't see any of that dangerous "disinformation".
It was a good 30 years or so for true free speech that was always the promise of the internet. But those days are gone.
Absolutely. To me it sounds like she is exacting revenge on FB for something, or maybe, as other commenters suggested, she is trying to get that whistleblower share of an SEC fine/settlement with FB, if one comes out of this story. Otherwise it is a rather strange change of heart on her part, from profiting off the users to suddenly being so caring about the users - I mean, she herself founded what is now the Hinge dating app https://foundation.mozilla.org/en/privacynotincluded/hinge/ :
"And like so many dating apps, Hinge asks users to connect their Facebook account to sign in to the app. Remember, when you connect a social media account like Facebook to a dating app, both Facebook and the dating app now potentially collect more information together."
"Hinge definitely shares user data with around 45 other Match Group companies, such as Tinder, OK Cupid, and Plenty of Fish among others. The company also shares data with third parties for purposes such as advertising and analytics."
Industrial ethics can be about ensuring longterm success as well. If a journalist can’t be trusted to keep an off-the-record source unnamed then they will never get another story. They can tarnish their entire organization, so they don’t do it even if they could get one incredibly powerful story out of it.
Your point is a good one, but we should be careful not to equate [edit: completely] Facebook's actions with those of Standard Oil.
If we say that the Sherman Antitrust Act was only necessary because of unethical behavior on the part of players like Standard Oil, we cannot say the same in this situation.
If you consider Facebook's behavior unethical, how do you view the behavior of the millions of people who fund them and provide them such market power? There are many many alternatives to Facebook. But non-Facebook parties routinely force people to Facebook if they want to be involved in an event or receive a notification or provide feedback.
If you would shame Facebook for their behavior, you should also shame others for using Facebook. Users enable Facebook's behavior.
Conversely, if you hold Facebook users harmless, it is harder to sympathize with complaints about Facebook's behavior.
There are clear parallels between Facebook and Standard Oil. But it is useful to note where there are differences, too.
Standard Oil had gone from >90% market share to < 60% by the time any legislation related to it was enacted. The legislation hurt their competitors, who were catching up using similar business strategies, more than SO.
Agreed. Facebook has a fiduciary duty to its shareholders to maximize profits. Unfortunately, it's up to the government to create legal boundaries that prohibit the means by which they can do that.
Frankly, I'm not expecting any meaningful legislative response, given US antitrust has blessed the WhatsApp and Instagram acquisitions by Fb as well as Google's acquisition of DoubleClick and YouTube.
If their internal docs show that they know their product causes harm and is engineered to be addictive, it could be a big tobacco moment from a mental health perspective.
> a lot of companies have done things in the past that were not illegal at the time of action. However, those actions were later decided to be made illegal because the behavior was deemed to be antithetical to our values.
What you are saying is literally the opposite of hundreds of years of the rule of law.
If FB did break the law as written, then prosecute them for that in a fair trial by a jury of their peers, but yours (or anyone else's) personal feelings about "our values" should never be able to override the plain language of the law, especially retroactively.
The point they’re making is you don’t prosecute Facebook for something that we think is unjust but is not yet illegal. You make it illegal, and then after that point if anyone continues with the now illegal course of action then you prosecute them. Prosecuting someone for something that wasn’t illegal when they did it would of course be wrong.
> I think a lot of people are choosing to ignore that a lot of companies have done things in the past that were not illegal at the time of action. (...)
The definition of what represents good and evil does not come from what's passed as legislation, and neither does the measure of a business's negative influence on society as a whole.
Legislation is also a moot point given that these mega-corporations actively lobby law-makers into not passing any inconvenient legislation.
>There should be no question, that what FB is doing here, while not illegal, is highly dubious ethically.
Why, what exactly are they doing that is ethically dubious? So far based on what I have read of this whistleblower's revelations, I do not have a problem with Facebook doing any of it.
Really? You don’t have a problem with an app that causes 1% of teens that use it to develop suicidal thoughts? By the way, according to the leaked study these teens directly attributed their suicidal ideation to Instagram.
Yes, I do not have any problem with it whatsoever. There are probably plenty of books that also cause some percentage of people to develop suicidal thoughts, but I do not want to start banning those books.
This is the kicker, I think. Facebook scales 'keeping up with the Joneses' and makes it easier. But that's been a common trope since (google search... 1920ish). What Facebook's doing isn't new; it's simply easier.
When you say '1% develop suicidal thoughts', is that causation or correlation? Maybe I'm missing something, but this seems somewhat like 'biggest target' to me, as the world has shrunk.
There should be some laws about using addictive patterns, imo. I'm sure it would be fine and profitable, and Coca-Cola would still like to put cocaine into their drinks to make customers want them all the more, but we have laws preventing that behavior in meatspace, and therefore we can have laws preventing this sort of evil behavior by technology companies too. Tie it into the website accessibility laws that are already codified and can be used to sue certain companies today.
In a sense, companies are right now incentivized to develop the most effective 'digital crack', because anything that hijacks the reward pathway of the brain more effectively leads to more profit. It'll be quite interesting to see how the public discourse around this progresses, since digital entertainment isn't as easy to publicly mark as 'bad' as drugs were.
On the other hand, China is sending quite clear signals that it's theoretically possible to legislate against e.g. video games -- though only after you've already established an intrusive 'social credit' system, which I hope we won't see in the west any time soon.
And I would like to have a word with them. They've been given free rein to turn our kids into absolute digital junkies (this is coming from a self-diagnosed sometimes-addict who realizes these kids are on another level), deliberately dangling carrots that reward 24/7 engagement in the activity.
> digital entertainment isn't as easy to publicly mark as 'bad' as drugs were
Definitely true. We need a way to differentiate between Super Mario Brothers and Mega Crack Force Gacha Legends Online.
Completely true; I think the mobile sector is the worst offender. That said, people can perfectly well spend insane amounts of time on titles that don't employ these mechanisms. But the industry has gotten far worse in recent years.
I agree completely. I've had the chance to slowly grow up with video games and witness their evolution, and still it's really hard to withdraw myself from the allure of 'just one more round of Apex Legends' and the like. And why should I try so hard, they are great games, after all!
It's decades of development in 'addictiveness tuning' unleashed upon the brain as a stationary target... I really feel sorry for the kids that never knew anything else.
Digital crack is a perfect way to describe this. I'm sure someone clever enough can write some great legislation for this. The issue is that so many industries are beholden to digital crack. You might get one senator who wants this, then 99 others who are getting flooded with calls from every major employer in their district telling them to vote no. I wish we had a stronger government that wasn't so susceptible to having anything good for the public exploited to make a few people very wealthy. Then again, we've never had that sort of public-first government in the history of our nation; it's sort of always been like this by design, as I find whenever I learn more about our history.
I disagree - I think that it should be completely legal to sell cocaine drinks as long as you inform the customers that the drinks have cocaine in them, and I think it should be legal to use even the most psychologically manipulative marketing techniques imaginable. I would rather it be the responsibility of consumers to avoid getting addicted than use government power to ban things. Similarly, for example, I think that it should be legal to sell skateboards even though people sometimes injure themselves while riding them.
While this is extremely murky and maybe impossible to pin down from a legal standpoint, I do like the thought. It's not just Facebook and it's not just social media. It's any software (online games?) that clearly goes out of its way to induce addictive behavior as their business model.
> There should be some laws about using addictive patterns
Of course there should be.
But then you would also need to ban casinos, sports gambling, gaming, porn, cigarettes, alcohol and the myriad of other things that are addictive in nature.
Well, we do have laws regulating and/or taxing most of those addictive things already. Except for social media and gaming, really, although gaming is in hot water currently due to loot box gambling mechanics.
I personally find the personalized advertising great. A lot of the time I am shown things that are actually useful/valuable to me.
I think a lot of the value really depends on the individual. If you're engaging in productive activities like hobbies, you get valuable targeted ads. If you're engaging in low-value activities, like signaling to others in myriad ways, you probably get ads for things like disposable fashion.
Personalized ads are basically a mirror. They feed what the person already wants to engage in. If you want less of the bad types of advertising, then you need to start at the root which is getting people to stop being interested in activities and behaviors that are lower value.
> I personally find the personalized advertising great. A lot of the time I am shown things that are actually useful/valuable to me.
There are plenty of ways to deliver this value without secretly fingerprinting every user and delivering targeted ads at every corner. A search where you profile yourself, for instance; similar to how you provide search filters on Amazon.
All ads are fundamentally ugly in the sense that their effect is the opposite of a great work of art or entertainment. Ads are fundamentally just some pathetic person's selfish attempt to control what other people think and feel in order to increase their own power through financial profit. In a sane world they would all be banned. Ads exist in their current deranged and disgusting form because contemporary humans have been selectively bred through social engineering to be submissive, cowardly, selfish, and stupid. Personalized/targeted advertising is not something that needs to be discussed.
Ads have been around for more than 2000 years now - would need a massive shift in mores to get rid of them (they survive in a lot of very different societies).
I listen to a podcast on football. Are they allowed to run ads that are about sports betting and NFL tickets? That is personalized to the group. Is Facebook allowed to run ads for sports betting to all people who are fans of a professional team on their site?
Is Facebook not allowed to run me ads for local restaurants any more?
I’m guessing that people use targeted advertising and personalized advertising interchangeably. The advertising industry knows full well what each means, and I’m sure legislators would have no problem finding experts in that area to craft a legally rigorous definition.
I knew something was seriously wrong the moment I saw a legitimate business (eBay), supposedly already profitable through legitimate business (hosting a marketplace, taking a cut, etc.), selling eyeball space (ads) on its property.
Ads create a negative and detrimental feedback loop by incentivizing dark patterns and other negative gamification in order to squeeze previously non-existent eyeball time out of your product. E.g., the optimal path for, say, eBay is to have a user come on, find what they want, browse a bit through interesting things and recommendations, buy what they want/need, then log off. Instead, ads have incentivized spam listings, which do two things. They generate more eyeball time and thus more ad impressions/clicks. And they've caused non-optimal experiences by letting non-optimal players exist through pure randomness. In an ideal market, it should be "winner takes all" for any unique genre, field, or product space one is exploring. Instead, spam listings let a non-negligible number of useless, bottom-of-the-barrel products/sellers/companies exist and thrive.
For FB, ads have commoditized eyeball time even more directly than in the indirect eBay example above. A natural product path for FB would be people using it as a platform to interact with people they know, organize events, and have a shared space to communicate and discuss ideas.
I don’t personally use FB and don’t have a very favorable view of their products.
I would, however, like to see the same level of scrutiny applied to the general American media, especially the news media. FB has come in and started eating their lunch. There’s a deeper problem here, and it feels more like the powers that be want to take down FB.
I’m not denying any allegation made against FB, but why is it that Fox News, CNN, MSNBC, and Hollywood get a pass when they’ve been damaging America for much longer than FB has existed?
If the government sees a problem and wants to get involved - great. But let’s hold an equal bar.
Meanwhile, Twitter and the corporate media largely escape the same scrutiny, as you note. They are just as responsible for eroding democratic norms, eroding trust, and whipping the public into a frothing, misinformed rage.
Now this culminates with an impeccably credentialed, rich, Harvard elite `whistleblower` who is largely repeating what we've long known? And who just started a twitter account and web site while on a whirlwind tour. And who is roundly supported by the corporate media and government. This doesn't strike me as speaking truth to power. It's the opposite.
Contrast with Snowden, who came out with new revelations and knowingly took on a great amount of risk to do so. There was no anti-NSA media surge leading up to his appearance. He was not embraced by the ruling class. Rather he infuriated them. That is speaking truth to power.
What we are seeing is the ruling class desperately trying to reassert control over the flow of information. Using facebook as a mechanism to take back control is smart, clearly. Both sides of the aisle want to regulate them. Mark is not a sympathetic character. People want to see a powerful corporation and CEO finally get more than a slap on the wrist. We thirst for a hanging. It appears they are going to give it to us, but I fear we will regret it in the end. As the response will reach far beyond the walls of facebook.
To understand the dynamic we are seeing play out, the book `Revolt of the Public` is worth reading.
“All over the world, elite institutions from governments to media to academia are losing their authority and monopoly control of information to dynamic amateurs and the broader public. This book, until now only in samizdat (and Kindle) form, has been my #1 handout for the last several years to anyone seeking to understand this unfolding shift in power from hierarchies to networks in the age of the Internet.” --Marc Andreessen
Thinking back over this past year, I immediately recall believing and repeating the grotesque lie that Brian Sicknick had his brains savagely bashed in with a fire extinguisher by Jan 6 rioters. That never happened, but it was major news for a few days. I actually feel a bit violated for having believed and repeated something so false.
On the other hand, I haven't had a FB account of any type for over five years, so it hasn't directly affected my personal life.
That feeling of being violated by false / incomplete information and narratives drives me mad. I've spent the last 5 years trying to manage my digital hygiene, not oversaturate myself with news notifications, etc. I look at things with a more critical eye, always hunting for bias. But it's still inescapable, especially with the force with which some of these narratives are pushed.
The elephant in the room here is the First Amendment. The government, legally, cannot do anything to censor "hate speech" or "misinformation" on CNN or Fox News. They also can't do anything to censor "hate speech" or "misinformation" on Facebook, but they can certainly harass Facebook into doing it for them.
Not completely true. Historically mass media was regulated. Lying also isn't legally protected if it causes injury (fire in a crowded theater being the canonical example, libel another).
“Fire in a crowded theater” comes from the Supreme Court upholding the criminal conviction of a man handing out pamphlets protesting the WW1 draft. Not exactly the precedent I’d reach for if I was trying to argue that governmental regulation of speech is not dangerous.
Facebook Groups have been host to many organized acts of violence. The Kenosha shooting comes to mind as a recent example where the Group was reported to Facebook and made it through multiple layers of review before being allowed to persist. To say that such speech is protected under the First Amendment is too much of a stretch.
It hasn’t even been legally established that Kyle Rittenhouse did anything wrong. Legally, he is innocent until proven guilty, and he has a case for self-defense for every round he fired that night.
To say that the First Amendment doesn’t protect the rights of people to organize and assemble in a public place where one of them is later forced to defend themselves is absurd. People don’t use Facebook to plan premeditated acts of political violence. They use Signal.
Even if you were right that killing two people isn't "anything wrong", it's beside the point. This is about Facebook's role in the violence. The event was organized on Facebook.
Please point out where I argued that that was protected speech. I think you’ll find that you’ve made up my argument out of whole cloth, which is quite tiresome.
The only thing I can say here that’s within the rules is this: this is completely unrelated to what I actually said. I’m not sure where this conversation you’re paraphrasing happened, but I wasn’t part of it.
The Supreme Court only upheld the fairness doctrine in situations where spectrum was limited (Red Lion Broadcasting Co. v. FCC). Even at the time, the fairness doctrine wasn't really applicable to mediums in which bandwidth/spectrum is not limited and not licensed to specific broadcasters.
It's not clear to me how this kind of regulation would be justified to the Supreme Court in regards to the Internet.
Well, the airwaves (which is where the FCC's fairness doctrine applied) are a limited resource which, without regulation, quickly becomes polluted by bad actors.
Print and cable are somewhat more immune to that particular issue.
> fire in a crowded theater being the canonical example
That phrase comes from Schenck v. United States, where the Supreme Court ruled that the federal government could imprison an anti-war activist for disseminating pamphlets.
Breaking up Facebook would probably be the way to go; it has less troubling side effects than trying to regulate “hate speech” via a clever legal trick.
It’s my external opinion that the real issue is the combination of the ad sales platform with the social media. If Instagram wasn’t owned by a company with an ad sales program, they would not have the capability to do what Facebook does today.
And this is exactly what they're doing. In effect they're loopholing the 1st amendment. Bring in Zuckerberg, hound him about removing specific "disinformation" under the threat of future political action if they don't comply. Then they get to throw the private company bs flag whenever someone says it's stomping on the 1st amendment.
It looks like the problem here is that CNN, MSNBC, Hollywood, Fox News, etc. are in general controlled by either the same people as the top political elites or people with the same interests. For Facebook, while they are trying really, really hard to be the good boys, that's not the case. While they put a lot of effort into "suppressing misinformation" (i.e. preventing any heretical thought from being voiced), they are, by nature, less efficient at this than centralized legacy media. It's not possible to be a heretic on CNN. It's still possible - though increasingly hard - to be one on Facebook. This is going to be fixed, and what you're witnessing is the process of implementing this fix.
Facebook gets to say "it's the users being polarizing", where traditional media gets to enjoy libel and slander laws.
If Facebook's editorial decisions rendered them subject to those laws, would that be equal enough?
There are answers to this question that do not easily summarize as "we need more rules about who gets to say what". We gots plenty of rules. Let's apply them fairly and equally for once.
Opinion: Facebook is eating their lunch because humans love being fed belief-affirming drivel for dopamine. It's easy to churn out this content when you have no integrity or regard for the truth.
Tinfoil hat: Bad actors are freaking out because their greatest mis/disinformation tools (Facebook, Twitter, etc) are about to be regulated. The jig is up.
NBC doesn't watch you constantly, know when you're feeling anxious, then target your 14 year-old brain with ads at your most vulnerable moments. There is no comparison here.
The press and media are an old and mature ecosystem that has a legal framework around it. These new companies use free content, and sometimes free moderation, but still act like the press. There's something unsustainable about that, and sooner or later society and the law will have to deal with it.
There's something particularly unethical about companies that hide behind "User generated content" and "external fact-checkers".
I note that the "general American media" is subject to quite a few more constraints than Facebook and others of similar ilk.
At the lowest level, the "general American media" can have advertising pulled if they get too far out of hand. Going further, they have to live with their editorial decisions---see, for example, your own antipathy to them. Ultimately, they can be sued.
Facebook, on the other hand, can offer not to associate someone's advertising with that content, but someone else will surely be happy to fill the spot and FB will be making money on both. Facebook doesn't have to live with its (non-) editorial decisions---that's user generated content, right? And you can't really sue Facebook either.
The bar has never been equal, but not in the way you think.
Will users eventually be more empowered against the publications? All of these publications play on human biases to capture attention and sell ads. There are psychological tricks. By now we should have tools that distill whose interests are upheld by an article (liberals, Coca-Cola, war or peace, workers, some researcher/university selling their latest drug). We’ll be able to “pagerank” who is upholding whom.
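To make the "pagerank who is upholding whom" idea concrete, here is a minimal sketch of what it could look like, assuming someone had already built a directed graph where an edge A -> B means "article A upholds the interests of B" (building that graph is the genuinely hard part). Every name and number below is hypothetical, and the code is plain power-iteration PageRank, simplified to ignore dangling-node mass:

    # Toy PageRank over a hypothetical "upholds the interests of" graph.
    # graph: dict mapping each article to the interests it upholds.
    def pagerank(graph, damping=0.85, iterations=50):
        nodes = set(graph) | {t for ts in graph.values() for t in ts}
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for src, targets in graph.items():
                for t in targets:
                    # each article splits its weight among the interests it serves
                    new_rank[t] += damping * rank[src] / len(targets)
            rank = new_rank
        return rank

    upholds = {  # entirely invented example data
        "article_A": ["beverage_co", "defense_lobby"],
        "article_B": ["workers"],
        "article_C": ["beverage_co"],
    }
    print(sorted(pagerank(upholds).items(), key=lambda kv: -kv[1]))

The ranking step is the trivial part; the open problem is classifying, at scale, whose interests a given article actually serves.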
One, most of the mainstream news has at least basic editorial processes in place that do a rudimentary check on truthiness. Facebook, along with some of the sketchier news outlets, is the opposite: it profits off (and optimizes for) disinformation.
Two, it's a matter of scale. Facebook reaches a far larger audience and has far more detail into their preferences and can nano-target personalized stories for them and corral them into groups, creating perfect echo chambers. The news companies are way too small and in some way irrelevant. Even if they all went bankrupt overnight, Facebook's algorithms will keep working, keep producing personalized truth bubbles. Is fake news a problem in general? Sure. But the news companies are tiny compared to Facebook, and not the immediate and persistent threat to democracy that Facebook is, just because they're much smaller. Even if you take the entirety of local news networks as a whole, they don't have the same saturation and engagement feedback loops that Facebook has.
The Powers That Be are typically reactionary forces and have long battled the news industry over free speech, censorship, etc. Facebook is a relatively new villain, different from the old ones and way more powerful. Further regulating the news industry won't really deal with the Facebook issue, since Facebook can keep aggregating from anywhere and everywhere. It's a different beast altogether.
I agree with holding all media accountable. But the problem with what you say is that for millions and millions, 'news' is actually BS opinion shows. On both sides, but from my perspective Fox & Murdoch's empire abroad has done far more harm than, say, Maddow preaching whatever riles her audience up.
The actual press, WaPo and the NYTimes (which does great docs, so does Vice imho), have rigid editorial processes. Yet I still see people on HN say the NYTimes is liberal, which I don't believe either.
Also I want to point out that Vice gave a known murderer a national interview to push his fake narrative. Meanwhile there were actual video and pictures of the killing that completely blew away any notion this was anything other than a murder. Michael Reinoehl hid behind a wall, came out behind two people, and shot one in cold blood. The two people were simply walking. The video and pictures of the killing were available online before Vice decided to give him an interview.
Vice are an absolute shower of bastards, I don’t read anything there after they basically outed SexyCyborg even though she begged them not to, and then took her patreon down cutting off a lot of her funding… [0]
New York Times is listed as center-left and high factual though. Orbiting the center and factual is really about as good as you’re going to get for less biased reporting. (Ignoring how even that is reductive, and that the NYT definitely has its own set of biases that don’t neatly fit on the left-right spectrum.)
The issue with the NYT is not around factual reporting but that they are shifting away from fact based reporting to opinions that are being presented as facts.
I read an article about a voting bill that had passed in some Republican state. The whole article was about how this is voter suppression, how Republicans don't like minorities, what politicians on Twitter had to say. In the whole article there was only ONE brief sentence that mentioned what was IN the bill.
Yes, I agree this is misleading, especially online where the 'opinion' subheading is really small, like on Apple News. All the majors are guilty of this.
"Now, I know there are some polls out there saying [George W. Bush] has a 32 percent approval rating. But guys like us, we don't pay attention to the polls. We know that polls are just a collection of statistics that reflect what people are thinking in reality. And reality has a well-known liberal bias ... Sir, pay no attention to the people who say the glass is half empty, [...] because 32 percent means it's two-thirds empty. There's still some liquid in that glass, is my point. But I wouldn't drink it. The last third is usually backwash."
(And what is the Vice article an example of? Presenting both sides of an issue?)
If you don't believe the NYT is liberal, you're just refusing to recognize reality. You may love the NYT, you may agree with what they say 100 times out of 100, which is fine, but it doesn't change the fact that they are a liberal media outlet and have been since forever. If you ignore that, you're just voluntarily skewing your picture of reality. Maybe you think "liberal" means "bad" (which would be weird, as I suspect your opinions are pretty liberal too) - or maybe you think it means "good" - but in fact it just means they follow a certain ideology, and that's a very easily observable fact. Whether it's good or bad is for the reader to decide, but whatever your position, it does not change the fact that the ideology exists and is followed.
It's opinion, from 2004, and mostly references 'big city' centric coverage. Call out to anti-gay marriage lol ;)
Which I guess I understand. If I tried to view it from the perspective of someone living in Missouri, reading fashion reports like the ones that piece describes would seem foreign to my worldview, and maybe scary, as if pushing some comical 'flyover states vs. the elites' worldview.
But I don't know that I would concede that's liberal; it's not my issue. I know it's outdated, but you can draw a line from the gay rights mentioned there to trans rights today. I hate that some view this as 'political', scoring points while endangering people, whereas I see it as basic human rights.
But thank you; I do see more of this viewpoint, that culture has become very political today, and that does help explain the strong reaction from the right.
First of all, I don't see why opinion pieces should be excluded. If a newspaper only publishes liberal opinions, it's a liberal newspaper. And we all know one can be fired from NYT for allowing a non-liberal opinion to be published.
Second, just looking at today's politics page, we can see criticism of McConnell, promotion of Biden's "infrastructure" spending, criticism of Kyrsten Sinema, lamentation of Democratic loss of votes, attacks on Arizona for removing mask mandates, pushing the angle that parents objecting to teaching CRT are "domestic terrorists", and reporting of Democrat talking points on the budget issue while not even bothering to represent the other side. This would be very weird for a balanced outlet, but completely as expected for a liberal one. To be sure, I'd of course expect the same behavior (with a different accent, of course) from a conservative newspaper - criticizing Biden and not McConnell, opposing Biden's bills rather than supporting them, etc. The point is there's a clear ideological position which can be identified very easily. If you took a random article from the NYT, showed it to somebody without saying where it's from, and asked whether it's from a liberal or conservative newspaper, I'm sure you'd get a very high percentage of correct answers. It's rather obvious - and if you look it up, NYT people aren't exactly hiding it.
And that happens every day, day to day. Of course, compared to some antifa twitter, they're probably "caged" and "flat". They are a respectable, mainstream, run-of-the-mill liberal newspaper.
> The actual press, WaPo and the NYTimes (which does great docs, so does Vice imho), have rigid editorial processes. Yet I still see people on HN say the NYTimes is liberal, which I don't believe either.
The New York Times and WaPo? You mean the ones that consistently do things like make up false allegations about children (Nick Sandmann... the kid who smiled) and take far more time to scrutinize the later proof of their wrongdoing than they ever did for their initial reporting? This happens constantly. It's not a one-time event. It's almost like they're pushing an agenda.
The 'right wing' bias of the NYT tends to be around being war hawks, but the right wing isn't necessarily the sole wing of war hawks in this country. The last Republican president was quite anti-war.
> The actual press, WaPo and the NYTimes (which does great docs, so does Vice imho), have rigid editorial processes.
That hasn't stopped them from publishing complete bullshit to start wars and similarly abominable shit. The New York Times allowed Judith Miller to uncritically publish flagrant government lies to start the Second Iraq War. In the leadup to the First Iraq War, ABC and NBC published atrocity propaganda (the "Nayirah testimony") for the American government. In the 20th century, the New York Times let Walter Duranty publish Stalinist propaganda denying genocidal oppression and famine in Ukraine. In the late 19th century, a bunch of American newspapers were used to start a bullshit war with Spain.
Facebook further blurs the difference between objective analysis, expert opinion, and uninformed nonsense. And, yes, like the talk shows, it profits off this amplification of nonsense. Algorithmic engagement is the natural evolution of "if it bleeds, it leads".
Facebook just happens to be much bigger and much better at it than the legacy news companies.
1. The mainstream news pushed numerous stories over the past years without those rudimentary truthiness checks.
2. I disagree with many points in that paragraph:
- echo chambers: when was the last time you heard a nuanced "from the other aisle" opinion from a NYT reader, one that wasn't covered in the NYT? Same echo chamber, just a different format.
- news companies are small and irrelevant: seems like this narrative has captured everybody's imagination today. Everybody is talking about it. It might even lead to action in Congress. Are news companies really as small and irrelevant as you claim?
I read the times almost everyday and have plenty of conservative opinions. The Times has multiple conservative columnists and consistently publishes conservative op-eds.
These 'conservative' columnists do not reflect the majority of conservative opinion in this country though. NYT columnists tend to be members of the political or cultural elite. There is a large strain of anti-elite sentiment running through the country right now, especially on the right. These people do not feel their views are expressed in the NYT. For that matter, I know plenty of working-class dems who feel disenfranchised with the NYT, the media elite, the democratic party, etc.
Conservatives at NYT do not reflect the opinions of most conservatives or most conservative media.
If you want a taste of conservative media, look at what is collated on RealClearPolitics. NYT is very different, you won't see Ross Douthat saying the same things they say in The Federalist.
I’d heavily recommend Matt Taibbi’s Hate Inc. to learn more about how news media lies, how it addicts people to its consumer product, and ultimately how high-bar publications have long been dead. From the book:
“The public largely misunderstands the “fake news” issue. Newspapers rarely fib outright. Most “lies” are errors of omission or emphasis. There are no Fox stories saying blue states have lower divorce rates, nor are there MSNBC stories exploring the fact that many pro-choice Democrats, particularly religious ones, struggle with a schism between their moral and political beliefs on abortion.”
> One, most of the mainstream news has at least basic editorial processes in place that do a rudimentary check on truthiness.
Come on, man. There are just so many counter examples, from the "paper of record" NYT (1619 Project) to MSNBC (Russia nonsense) to Fox News (literally everything) to Rolling Stone (A Rape on Campus) I could sit here all day listing outright lies and untruths published in the pages of mainstream media outlets in order to push an agenda.
Agreed. I'll just add, specifically the media's agenda is making money, and just like FB it affects their ethical obligation to society.
The surge in click-bait and outrageous lies is eroding one of the pillars of freedom: the freedom of the press. The press is now viewed as untrustworthy by 60% of the US population.
Both are true. Both the news and social media optimize for clickbait, which bad actors use to slip in misinformation. What's funny is, people use the exact language about the news. They'll say that CNN is just being used by the far left to promote far-left views, or that Fox News is being used by the far-right to promote far-right views. Yet when the medium is Facebook instead of the media, suddenly it's a problem that must be solved with more laws.
Facebook and other internet media have special Section 230 exemptions from libel and other things that print media and media using public airwaves do not.
How about we just remove that so they are all equal? Not special or "more laws," just equal.
I agree, and that's why I said "rudimentary". Still, as content producers and not algorithmic aggregators, both their ability and success at amplifying disinformation is much, much less than Facebook's. Even Breitbart or DailyKOS's impacts -- as outlets who often and purposefully distort the truth -- are not even rounding errors to Facebook's sheer scale.
1619 Project? "The 1619 Project is an ongoing initiative from The New York Times Magazine that began in August 2019, the 400th anniversary of the beginning of American slavery. It aims to reframe the country’s history by placing the consequences of slavery and the contributions of black Americans at the very center of our national narrative"?
>"basic editorial processes in place that do a rudimentary check on truthiness"
I think this is merely an illusion. The beauty of English is that you can spin a story a dozen different ways while still presenting 'true' facts/details about an event. (Edit: "Fiery, but mostly peaceful" is a quintessential example of this.) You can also shape how strongly the public reacts to something by how much you decide to cover the story. I guarantee you that if the Fall of Kabul happened under Donald Trump's administration we'd still be getting daily news stories about the fallout.
Furthermore, the press has done an excellent job of branding itself as an impartial arbiter of truth, whereas in reality they're just another business run by people with their own motivations.
The vast majority of what you could call disinformation is directly downstream of mainstream media sources. FB is a platform: the content has to come from somewhere, and it’s usually downstream of these news companies. Should FB be banning Fox and MSNBC? What’s the expectation here?
I was just listening to Hannity the other day, and the man was constantly talking about how you should talk to your doctor about vaccines because it's the best way to prevent COVID. Listening to Cavuto the other afternoon, I heard a similar sentiment on his show. Who are you listening to, exactly? The majority of anchors, especially the biggest ones, seem to basically endorse the vaccine.
Did you miss the part where they lie pretty much all the time nowadays to advance political agendas? The argument being made by this "whistleblower" is merely that FB should be required to do the same. The only "whistle" she's really "blowing" here is that of the DNC. I'm sure Zuck is smart enough to see how detrimental this is to his company's long term success, or even existence. Founders and executives: background check your future employees and screen them for activist backgrounds. If they have any of that - do not hire. They themselves say political affiliation is not a protected category. They are dangerous for business. Let them taste their own medicine.
FB is much worse because there's no sense of moderation.
In mainstream media, there's someone who vets content, fact checkers etc. You can disagree on whether you think Tucker Carlson is "factual" but effort goes into not running afoul of legal rules and fear of being sued. Scripts are written, it's edited, etc.
With social media, it's just straight up lies. Not even "a little bias" but just straight up garbage. They hide behind "free speech" and "light moderation" but the reality is that there's no rules like traditional media have to take into account.
> when they’ve been damaging America for much longer than FB has existed
Has modern mainstream media been implicated in major events like the Jan 6th insurrection?
I highly recommend watching the Senate hearing from today, and I'm a person who normally can't stand these things. This was totally different from any other hearing I've seen -- little grandstanding, no partisan bickering, no evading of questions. Most of the Senators seemed genuinely interested in what Frances had to say, and she gave meaningful insights backed by real data.
A lot of what she's saying is stuff that has been generally known in the tech industry for some time -- that algorithmic content ranking amplifies division and outrage. But the detail she gave about actual research quantifying it goes way deeper than I think most of us were aware of.
Strongly agree. She is doing this in a way that no one has really done (certainly not at this level of skill) before: explaining the systemic issues, giving clear and direct answers, keeping the conflict away from the personal, and most of all delivering criticism with empathy and compassion for all involved.
Just wanted to thank you for the comment. You weren't kidding; everything she says seems data-driven. I especially liked her observation (around the 1-hour mark) that FB can determine the real age of users by "working backwards" and doing cohort analyses.
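For anyone wondering what "working backwards" with a cohort analysis could even look like, here's a toy sketch. To be clear, this is not Facebook's actual method, just an illustration; the data, names, and threshold below are all invented:

    # Toy illustration of inferring a plausible real birth year, assuming we
    # only have (year, claimed_age) pairs for a user plus the implied birth
    # years of a same-class cohort (e.g. people tagged in the same graduating
    # class). Purely hypothetical, not Facebook's actual method.
    from statistics import median

    def implied_birth_years(claims):
        # each (year, claimed_age) pair implies a birth year;
        # inconsistent claims over time imply different birth years
        return [year - age for year, age in claims]

    def estimate_real_birth_year(user_claims, cohort_birth_years):
        own = implied_birth_years(user_claims)
        # if the user's own claims contradict each other by more than a year,
        # fall back to the median implied birth year of their cohort
        if max(own) - min(own) <= 1:
            return median(own)
        return median(cohort_birth_years)

    claims = [(2015, 18), (2021, 20)]      # self-reported ages over time
    classmates = [2004, 2005, 2005, 2006]  # implied birth years of the cohort
    print(estimate_real_birth_year(claims, classmates))  # -> 2005.0

The idea being that even crude signals like a friend cohort's implied birth years can pin down a real age far more reliably than whatever the user typed into the age field, which is presumably what she meant by working backwards.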
I don't care about Facebook, and am not interested in using it, but after reading the spiral of consequences Facebook has been in, it made me think:
1. To all the governments that want to tighten more control on communication, this is great kindling to show people "we the government should further control tech for your own good".
2. You can hardly get the US gov to agree on anything, but when it's about hating each other as much in the digital world as in the physical world via a common source, everyone's at attention.
3. Facebook is so ill-equipped to handle most of the issues that were brought forth. The expectation that the gov/people places on Facebook =/= reality of what Facebook can deliver. Facebook is running around frantically trying to manage an existing mess of a switchboard, they are not going to pull a miracle.
> 1. To all the governments that want to tighten more control on communication, this is great kindling to show people "we the government should further control tech for your own good".
I'm genuinely shocked that the popular sentiment on HN leans toward more government intervention and control of internet communications. So many comments here are calling for more laws and regulation, but few people can even begin to elaborate what they want those laws to do.
If laws are passed, they won't be targeted at a specific company, nor will they be limited to specific bad actors on Facebook. There is no magic law that makes all of the bad parts of the internet disappear without also having some chilling effects on the part of the internet that you actually like. If anything, large incumbents like Facebook tend to come out ahead of the smaller companies when onerous regulations are put in place.
>> I'm genuinely shocked that the popular sentiment on HN leans toward more government intervention and control of internet communications
I'm not seeing that as the popular sentiment here. I think we just want FB to stop algorithmically boosting stuff in order to boost engagement numbers and to also start applying the rules equally.
> 3. Facebook is so ill-equipped to handle most of the issues that were brought forth. The expectation that the gov/people places on Facebook =/= reality of what Facebook can deliver. Facebook is running around frantically trying to manage an existing mess of a switchboard, they are not going to pull a miracle.
Do we expect a miracle? Frankly, a modicum of decency would be a huge step forward...
1. Yes and no. Look at how the world's autocrats are using Facebook: as a propaganda machine with their troll armies, kicking it out of the country or shutting it down when convenient, and Facebook is OK with it (e.g. in Turkey). FB has no spine. So some regulation would actually hold governments accountable for their abuses.
2. That's because Facebook transgresses common decency. People don't argue politics from base instincts in a parliament.
3. The solution is not to dump their externalities to the public forever. The whole idea of "social media" as mob media is ill-thought.
I don't think it's an exaggeration to say that all successful online sites are damaging to mental health, and have destroyed our collective attention spans, if nothing else. What sets FB apart is simply that it holds the top spot.
Cut off FB's dopamine dispenser, and people will just rush to the next one over. And so on and so forth.
Even Hacker News... I refresh front page several times a day, looking for another hour I can throw away in my short life, making comments like this that I will then check every hour to see how many upvotes I got from total strangers.
Our brain was not made for the current world we live in. And companies are ruthless in exploiting it. We were made to feel this dopamine of approval from our few tribe members, who we would spend all our lives with until we would die in battle, or hunting or from some disease. We live in such a farfetched environment, most people don't realise how much this affects us.
We're striving for the same things that in a hunter-gatherer context would make us happy. Now they don't. We are not meant to be surrounded by strangers, we're not meant to have approval dopamine hits from them and from thousands of them. Our brain must even have trouble understanding what that is. We're not meant to go to some magical device to find a partner to spend the rest of the life with while in the process discarding other strangers, (...)
The title here doesn't match the article headline. The documents make Facebook look bad but so far I haven't seen anything incriminating, in the sense of violating a US Federal criminal statute. Are there credible allegations of something like securities fraud or wire fraud here?
Note that there is generally no law against disseminating incorrect or "harmful" information.
The section at the end -- "Haugen contacted state officials and the SEC" talks about the parts of the whistleblower documents that show that Facebook was misleading investors. That is criminal under current US law.
I think people are more mad about the other things in the documents--most people are more sympathetic to harm done to kids' mental health than to investors' bottom line, but lying to investors is the one that's illegal.
The fact that it's possible to question whether any illegal behavior has been revealed doesn't mean that Facebook didn't cause, and is not causing, harm, nor that there's no basis to Haugen's revelations.
Rather, it means that under the current regulatory regime, there are no penalties for behavior harmful to consumers. That the only recourse is to file with the SEC alleging harm to the stockholders is simply a reflection of the fact that the mantra of "shareholder value" has become the only law that corporations have to follow.
Yeah this appears to just be some embarrassing things but nothing outright illegal or really even surprising. What company doesn't have embarrassing secrets they'd rather not be leaked? None that I've worked at anyway, and none of those were evil.
Which says more about the emasculated regulatory regime in the US than anything else. There are no penalties, or none of any real meaning, for causing harm to consumers or the community, but violating the holy tenet of maximizing shareholder value is almost a cardinal sin.
If you don’t like Facebook just don’t use Facebook, WhatsApp, or Instagram. Don’t know why so many people here think parents and adults are not capable of regulating themselves and their kids.
It’s honestly condescending to think you know better than your fellow citizens if you’re arguing for government intervention.
If you don't like Oxycontin, just don't use Oxycontin, Dilaudid, or Fentanyl. Don't know why so many people here think adults are not capable of regulating themselves.
And yet, regulating off-label usage of opioids doesn't seem to be very effective at reducing addiction either. So perhaps this analogy doesn't work, and we should acknowledge that using social media and becoming addicted to opioids are substantively different?
The addiction process has a mental component that in some people is stronger than the physical addiction. This seems more a thread on addiction and bad habits vs aligning the technical specs of each issue.
There are many countries around the world that don't seem to have the same issues with opioids so something must be different. What is it apart from regulation?
Social media dependency is a real thing, with serious medical side effects, which can include depression, which in severe cases can actually be fatal.
But, you know, prescription medicine dependence, it's different, right? Right. It's gotta be different. And the reason it's different is because... fewer people use social media? No. Fewer people suffer from mental health issues? Definitely not. Fewer people have overtly negative outcomes, including death, from mental health issues? Nope.
So, actually, the only reason it's different is: we've fully studied and have a mature grasp of the personal and social impact of some chemical dependencies. The same cannot be said of mental health, and all the evidence you need for this immaturity can be found in comments like some in this thread, or those from Zuckerberg.
The cigarette companies commissioned studies saying smoking was healthy, and people still downplayed the victims. If you've ever wondered how anyone could have thought smoking was healthy: in 10 years, some may look back at the '10s and wonder the same thing about social media. How could anyone have worked for Facebook, or defended them, saying they've done more good than bad?
Of that, we are in agreement; one is popping a pill which releases chemicals in your brain which over time correlate with a measurable reduction in quality of life; the other is interacting with an app which releases chemicals in your brain which over time correlate with a measurable reduction in quality of life.
Definitely on the same page; not a strict mathematical equivalence.
We haven't banned cigarettes or alcohol, but we have put labels, restrictions and public ad campaigns into place to govern the behavior of their sellers, and to inform the public of their dangers.
I think it would be a terrible state of affairs if government did not have a responsibility, as the union of the people, to help inform citizens of the dangers of addictive products and to regulate the unethical behavior of the sellers of such products.
Your perspective completely ignores the intentionally addictive design of the products, and the network effects of having everyone and every business you know also using it.
There is no amount of tobacco, and probably no amount of alcohol, that has any health benefits. It's completely toxic.
What you're advocating is more like suggesting we might also consider putting labels on cheese about the risk of eating too much saturated fat, or heck, a warning on most green vegetables for people taking Coumadin and other blood thinners.
In any case, the warning labels on the things you mention are not placed there on the basis of addictive qualities.
Please cite the health benefits of doomscrolling twitter, playing candy crush for hours, etc. etc.
If it is proven that these companies have internal research acknowledging negative psychological effects of this addictive behavior, and ignored it and the widespread social damage it does (the social media specifically, not candy crush), then it would be as big a bombshell as the tobacco industry in the 2000s.
So I disagree on your distinction in the first line.
I would use the example of putting warning labels on sugary foods and soda more than I would about cheese.
Why? Because I disagree with your final line, too. Cigarettes wouldn't be such a problem if they weren't proven to be addictive. It's precisely because they are so habit forming and unhealthy that warnings are necessary.
TLDR: The thesis here is that Facebook has internal documents acknowledging that the system they have created has negative psychological and societal consequences, they know it, and choose to obfuscate it rather than eliminate it.
As more information comes out, it doesn't seem unreasonable for governments to step in and at least have public awareness campaigns on the dangers of social media overuse. It will be tough because it's not as clean cut as "this is your brain on drugs", but it is necessary.
And finally... I don't know how we do it as easily with traditional media, but we collectively have to recognize the patterns of rage-bait and journalistic malfeasance that cable news, especially Fox, leads the way in committing. With the traditional "news" label applied to them, government intervention is a much trickier thing, so it may be wiser to come at it from a different angle.
Pardon but network effect much? In my area, FB is a huge resource for community groups and news. It would probably make local parent life more difficult to unilaterally cut off FB.
It's a good market opportunity actually. Too bad NextDoor is basically known as "racist people alerting their neighbors that a black person is out walking their dog" or some nonsense.
There are places in the world where these services are the only option if you wish to engage in the US equivalent of simple text messaging. I have been "that guy" that doesn't want to use the platform the rest of the social group uses, and it sucks.
Is it so much to ask for honesty from a company these days?
Because they're authoritarians. There's a large cultural push by people to use government to bludgeon everyone else with their values.
I'm not sure when it switched, but it seems to be more and more acceptable to force people into acting a certain way. It's really anti-American and anti-progressive, imo.
A government controlled social media ripe for propaganda seems far more terrifying to me than what's currently on facebook.
FB has the money to buy the smartest developers, psychologists, and advertisers to make you buy stuff, keep you on their platform, and make you feel what they want you to feel. Is it as addictive as heroin? Probably not, but if they can make it that addictive they probably will some day. Of course they should be regulated.
If you don't like being poor, just be rich. Don't know why so many people here think parents and adults are not capable of regulating themselves and their kids.
The problem is roundabout externalities. If you're a genocide victim in Myanmar, whether or not you stayed off FB is irrelevant.
The other problem is informed consent. Kids don't have the mental defences to avoid the dopamine trap. Whether parents allow the kids is beside the point. A parent can allow a kid to have sex but it's still illegal because the kid isn't capable of consenting.
At first I thought that was a bit too facile, they have nothing to do with each other, but ... is that true? Other people using Facebook to spread hate and misinformation (including vaccine misinformation) does affect me. They affect who gets elected (or appointed) and what policies get enacted. We're already seeing real tangible effects at the state level, and with the 2022 elections we might see more at the federal level. (That's just the US. The same is absolutely true elsewhere, but it's harder for me to come up with examples that are both accurate and familiar to most readers.) In the sense that they both affect public health - political/economic/social in one case, physical in another - hate/misinformation and vaccine refusal are similar. And for the same reasons, "if you don't like..." is an unhelpful response.
Yeah, you're right, so you should have the same issue with obese people. Actually, you should have MORE of a problem with them than with the un-vaccinated, since they've been a consistent burden on the system for far longer.
Getting pretty tired of the hypocritical and illogical dissonance displayed from people who should be smart enough to think the ENTIRE picture through on their own.
Guess what? It's even easier. You don't even need a shot you can exercise and eat healthy. I want healthy eating and exercise mandated. Since we're all having fun with authoritarianism, problem solved right????
Why don't you go ahead and look up the top 10 causes of death worldwide and then let me know how many of those are partially or directly related to obesity.
The entire point of my comment is people with your stance are focusing on the un-vaccinated and turning them into some evil boogie-man that needs to be eradicated when it's completely silly if you zoom out and look at the big picture.
THEY AREN'T AFFECTING YOU. Please stop the hypocrisy. I'm so tired of it.
What @heartbreak said, but also far more. Vaccines aren't 100% so the continuing circulation of the virus will cause some number of those to get sick. Then there are variants. Kids being sent home because someone else tested positive. Travel restrictions. The list goes on. None of this should be news to anybody who has actually been paying attention, and such a person better not be saying "do your own research" to anyone else.
That's called the fallacy of the excluded middle (sub-species "argument by demanding impossible perfection") and it doesn't negate my own point at all. No, 100% elimination isn't possible, but every reduction helps and literally saves lives.
> The un-vaccinated aren't restricting your travel.
Simply untrue. There are still countries that won't let me in because I'm from a country with too-high case levels. There are more that I wouldn't go to because I don't want to be in a small confined space for hours on end with people whose oppositional defiant disorder has made them dangerous. I would have liked to visit family in New Zealand this year (from the US). At one time that might have been possible, but no, because of people who are too selfish and/or arrogant to do what's obviously right for public health.
>That's called the fallacy of the excluded middle (sub-species "argument by demanding impossible perfection") and it doesn't negate my own point at all. No, 100% elimination isn't possible, but every reduction helps and literally saves lives.
It entirely negates your point, since it's predicated on the fact that the un-vaccinated will supposedly continue eating up hospital resources. There is no future where this will stop until the disease has run its course. Therefore, your point is moot. Even more so because you have no objective number of vaccinations or hospital cases that MUST be met.
>Simply untrue. There are still countries that won't let me in because I'm from a country with too-high case levels.
You really seem to have an issue grasping that those restrictions are coming from the governments of those countries. They are not coming from un-vaccinated people. What's happening is that you've let them create a boogie-man and brainwash you into placing the blame away from the people who are causing your issues. This is how fascism and authoritarianism spreads. I'm sure you can think of a few times in history where certain leaders singled out specific groups for blame and how that worked out.
>There are more that I wouldn't go to because I don't want to be in a small confined space for hours
That's on you. No one else has a responsibility to protect you. If you don't trust science or the vaccination then it sounds like you have some major health anxiety that needs to be dealt with.
If you are vaccinated and you're going into a country like NZ, there is ZERO logic in saying you cannot enter because your country of origin has a higher case load. This is absolute stupidity. Place the blame where it lies. NZ is full of anti-science morons; it has nothing to do with un-vaccinated people in the US.
Give me hard objective numbers that must be met for all restrictions/lockdowns to be dropped please. I'll save you some time. THEY DON'T EXIST. This means they will continue to string you along blaming all of their authoritarian tactics on specific group of people that aren't actually doing anything to you.
Let me guess... you also blamed the kid who talked too much in class when the teacher took your pizza party away? It wasn't the kid who took that from you.
> it's predicated on the fact that the un-vaccinated will supposedly continue eating up hospital resources
No, it's predicated on the fact that they will continue allowing more of the virus to circulate and infect others (and also incubate variants). Don't pretend to mind-read. It's annoying and unpersuasive.
> those restrictions are coming from the governments of those countries
...which is perfectly valid and even their duty unless we assume that the virus is harmless. I'm clearly not the one who has trouble considering more than one possibility.
> it sounds like you have some major health anxiety

> you also blamed

Ad hominem. We're done.
>No, it's predicated on the fact that they will continue allowing more of the virus to circulate and infect others (and also incubate variants).
Cognitive dissonance. You admitted the vaccinations don't stop infections. Therefore, variants will continue on whether people are vaccinated or not. There's quite a bit of irony in you prattling on about the big bad un-vaccinated while simultaneously not believing the vaccine provides you enough protection to go out and be safe.
>Ad hominem. We're done.
No, it's an objective observation based on your comments about being terrified of other people infecting you with a disease that has a lower likelihood of killing or harming you in any major way than driving your car daily (on top of many other, more deadly, daily activities). I actually think you need to talk with someone about your issues. I'm not attacking you for it, but it's obvious your stance isn't based in reality.
Life is full of risk. COVID isn't an issue that ought to be worrying you at all if you've vaccinated yourself.
I am very conflicted about this situation. On one hand, I recognize that Facebook may be a sort of anger multiplier when people scroll only through content that reconfirms their biases and buries said people deeper into their "truth bubbles". I get it.
On the other hand, the two main pillars of this story: "won't somebody think of the children?" and "the democracy at risk" have been used to justify authoritarian measures all over the world. I do informal surveys of election news in some countries and often find that "the democracy is at risk" really means "people will elect somebody I don't like".
China has recently banned effeminate men from appearing on TV primarily due to a concern that "Chinese boys' masculinity was under attack".
I don't know this person and her politics, so I will not make judgments yet. I'd be disappointed if she is revealed to be an aspiring politician or one of those people who think every company they work at needs to follow their politics and moral compass.
Edit: I have visited her Twitter page and frankly was a bit disgusted. It looks like a corporate page readying to spring to a world of newfound fame through speaking gigs and book deals.
It goes far beyond electing people you don't like, and to leave it at that is either brutally ignorant or willfully ignorant. Thanks to Facebook you have people refusing vaccinations and ingesting horse dewormer. In the before times that sort of behavior would be relegated to the crazy few who read tabloid rags; now we have police and fire departments who are unable to convince their first responders to get vaccinated.
Now do the same for CNN, Fox News and every other sophisticated media company who have all created Skinner boxes to get more eyeballs. The entire system needs a hard examination.
This is straight-up deflection. It makes sense to start somewhere, so why not the biggest player in town? That would be Facebook, with billions of active users. None of those cable channels are even close to 100s of millions of viewers.
I mean, I think this should be an 'and' thing, not an 'or' thing. Don't deflect from FB. But when FB's dirty laundry is aired, air these other groups' too.
Easier to set a precedent against one org and use that to enforce good behavior towards other orgs in the future. It's how we've always gone after corrupted industries in this country. If you go after every bad egg at once you aren't going to win.
Considering that the claim that FB committed securities fraud is pretty laughable based on the evidence we've seen so far, it appears her testimony is designed to encourage new laws and regulations for media companies (social media in particular). As such, a conversation about the industry at large and its practices makes sense.
None of those cable channels are even close to 100s of millions of viewers.
That is not fair at all. TV networks are local, there are only 330M ppl in the US. Also, you are not counting how many viewers TV networks and their employees reach via FB/Twitter.
No, calling out the more fundamental and broader problem is the first step to addressing the root problem.
Otherwise, you are just whacking moles, and pretending that one day there would be no more moles...
Or rather, you indeed are OK with a constant number of moles indefinitely. That's OK. But the premise of the parent is obviously that the moles are growing too numerous and are not staying constant at all...
What will you see? 24/7, emotionally charged "Breaking News" which is simply a different fruit from the same horrible tree. CNN and Fox News are just using different methods based on the data they have available, which comes from focus groups, Nielsen, and a whole cottage industry that most people are unaware of.
> What will you see? 24/7, emotionally charged "Breaking News" which is simply a different fruit from the same horrible tree.
Right, but we know what they are showing people. We have no idea what Facebook is showing people. There's no equivalence there. Yes, both are bad, but in the context of these leaks equating Facebook to CNN is delusional deflection.
It’s not deflection. You are pointing out magnification. It’s the same concept. Facebook just has far greater granularity in their ability to target and deliver personalized content. CNN and Fox News are still segmenting content and targeting users. That’s why they can both coexist. I am not arguing that Facebook isn’t far more sophisticated. Newspapers < CNN < CNN.com < Facebook.
You're doubling down on your whataboutism and deflection. The key difference between Facebook's "news" feed and any mainstream television and print news is visibility.
If I record every minute of cable news and subscribe to a bunch of newspapers I can tell you the exact contents of each. If any of them wants to espouse a narrative it's pretty easy to see just by looking at what they've aired/printed.
That's not possible to do with Facebook, unless you are Facebook. Every user's "news" feed is slightly different based on their behavior/relationships tracked on Facebook and through their massive advertising network (off Facebook).
I put "news" in quotes because Facebook tunes the contents of their feeds for monetization rather than any semblance of truth or accuracy. Facebook is awash in literal fake news, as in completely made up "news" articles, because they spike some engagement metric.
An actual news program on cable or a non-opinion news article in a paper can't get away with outright lies. They would also be liable for defamation.
For all the ills and failings of cable news and newspapers they are nowhere near as toxic or fundamentally broken as Facebook's "news" feeds. They're not even in the same ballpark which makes your whataboutism really puzzling. Do you honestly not see the difference?
You are pointing out scale and impact which I agree with. I am pointing out economics. Both have incentives which are driving outcomes that are undermining civility and cooperation.
The scale and impact are an important distinction which you're ignoring in order to conflate two unlike things.
A cable news network has very little ability to perform dog wagging. While they can use incendiary lead-ins for stories or hire divisive commentators (not news people) to get viewers to stick around while they run ads, they can't really drive anyone to buy a Chevy truck or shop at Target. Cable news also has to toe a line of legality with respect to defamation, see for instance Dominion's cases against Newsmax and OANN. For as bad as you think cable news networks or newspapers are, they do have some accountability.
Facebook on the other hand has little to no accountability for "news" they push. These leaks show they literally know that peddling falsehoods and divisive content is harmful but choose to do it because it makes more money. They also have the ability to wag the dog because their algorithms stick people in echo chambers that just reinforce negativity. They can and do drive people to radicalize because incendiary content drives "engagement" which makes Facebook more money.
Suggesting Facebook and cable news (or newspapers) are somehow comparable in their toxicity is a bit ridiculous. You're also doing so in a thread about a Facebook whistleblower. When a CNN or Fox News whistleblower leaks a bunch of documents showing they knowingly aired false or defamatory content to make more money, you can talk about them all you want.
Easier said than done though.
My wife works for a massive company that now makes you classify every email as external, internal, or confidential, and after numerous emails, training sessions, and constantly calling people out on things, still nobody can figure out the difference (and thus marks everything as far less secure than it is), despite it being trivial.
That's pretty much what happened after Snowden, everything is much more locked down, access is tightly controlled and employees are more closely monitored.
That's why I'm glad he released as much as he did, any followup whistle-blower leak is bound to be much much harder.
Well, if the precedent from this is that any employee can leak whatever they want, violate their NDA, and claim the company is committing securities fraud by having secrets, why wouldn't they?
My tinfoil theory about the outage yesterday is that it was done on purpose, as a way to create an opportunity to "hide" as many internal documents as possible without anyone noticing.
First reaction: That really is tinfoil, if you don't include how the outage would help in that process..
Second reaction: Huh, employees were locked out of their offices, VPN was surely down, yeah if there was someone inside the data centre deleting files off their Intranet no one from outside the data centre would be able to notice, due to the lack of connectivity.
But it would have to be a very good scrubbing, and people would notice things missing anyway: "Hey, didn't there used to be a PDF here..?" Hah, https://en.wikipedia.org/wiki/Memory_hole
What concrete information has been revealed that implicates Facebook in illegal behavior?
How does the "whistleblower"'s stated desires wrt goverment-mandated moderation standards differ from those articulated by representatives of the company itself? Such a framework would shift liability away from the company, possibly reduce their moderation load, and create a significant barrier for entry for potential competitors.
> What concrete information has been revealed that implicates Facebook in illegal behavior?
I'll leave it up to the legal system to decide what FB has done that's currently illegal. However, you don't just blow the whistle when you think laws are being broken. Sometimes you do it to start a discussion that can change the laws.
It's clear from the Facebook Files stories and documents that FB has known about the effects of their product on consumers (especially kids) and 1. lied to the public 2. lied to lawmakers 3. didn't change the product in ways that positively impacted kids.
That's obviously messed up and is not (afaik, ianal) covered under current US laws. I do strongly believe that it should be. Platforms have responsibility, and S230 entirely takes that responsibility away from them.
Will the "lying to shareholders" argument hold up under scrutiny, though? The putative knowledge they supposedly withheld is extremely vague, and what's more, pertains to very sensationalistic topics. The delta between appearance and reality for these revelations' severity couldn't be greater.
I worked with her briefly on Google+. She struck me as a trust-fund kid: idealistic, brash, (perhaps over)confident, intelligent but prone to tackling problems that can't be solved. Wikipedia says her parents were a doctor & academic-turned-priest, so she's likely got some family wealth backing her up. It's a different calculus than for most of the people here, who can't afford good lawyers and need that paycheck to survive.
She is taking a risk, and I salute her for it. But at the end of the day, it is likely her career will have a huge boost from this. And Facebook would probably be very foolish to go after her.
Not like Snowden, Kiriakou, or Binney. This whole thing has been carefully orchestrated. She probably has a position at the Brookings Institute waiting for her but not before she completes her podcast/talk show book tour.
In the 60 minutes interview her lawyer cites part of Dodd-Frank that protects whistleblowers going to the SEC. Their position is that the documents are material to investors re:valuation and that Facebook was negligent by not providing it.
As Matt Levine's famous quip goes, "everything is securities fraud". From a first reading of the article, it seems to me she just wants to nail Facebook with the SEC, with everything else being just noise to get media attention.
What I don't understand is how this is different than Nike making shoes with child labor overseas in the "global south", etc. That harms children, for profit.
Facebook is a for profit organization. This is the rule with organizations like this. If given enough time, size, and lack of shielding, any corporation will eventually cause harm in some way.
We seem to have less focus on "Nike whistleblowers" or similar, you get the idea.
Nike has received (and continues to receive) a lot of attention about that sort of thing for decades.
But, on a larger level, I see several comments that seem to be implying that talking about the wrongdoing of one company isn't valid unless we talk about the wrongdoing of all companies at the same time.
I think that argument is faulty. If the argument were good, then it wouldn't be possible to talk about any company's wrongdoing because you'll always be leaving others out of the conversation.
It's a very different issue. One important difference is that the children being harmed here are American children. Rightly or wrongly, it raises the priority since American parents are seeing their own children harmed.
I don't want to defend Nike, but still, the fact that people are poor enough to send their kids off to factories to make shoes isn't Nike's fault.
But again, very different issue. Worthwhile to think about, but I don't think it should detract from this one.
This is not about children, though, it's about investors.
If Nike makes a material claim to investors (e.g. that child labor is not used), but the claim is revealed to be knowingly false, then that's securities fraud.
Or maybe have no moderation at all? I was under the impression you only see posts from people you follow, so if you're an adult, why require any moderation at all except for not showing what your country deems illegal?
> If you can't moderate your platform, you shouldn't have a platform.
what counts as "moderate"? only removing illegal material? removing fake news as well? maybe remove questionable news as well, as long as it's towards the "greater good"?
Firstly, it is sad to say that HN is becoming more negative like the rest of the internet. On the front page there are more articles devoted to the ever-evolving shit-show of American-focused news issues. There are far fewer links to things that I THINK HN is more suited for, like the BGP protocol or building your own ham radio.
That aside, my theory about whistleblowing is that it is a counter-intuitive exercise that results in very little at the expense of orgs tightening their security policies. Case in point: Snowden and the NSA
Leaks don't seem to happen after the first one. One or two small bills to "change a law" don't fix an endemic problem.
Facebook will continue after this blip. They have enough money to spin the PR in their favor and to grease the hands of their political-dependents there in Washington.
Two of the front page articles right now are about BGP - one about exploring it, one about playing Battleship over it. That seems... relevant?
But we all have to live in a world influenced by social media, Facebook is really the most overtly evil of them (What's Good for Zuck is Good for Zuck! seems to be their guiding principle lately - anything for more ZuckBucks), and as it comes out that they've known that what they're doing is evil, and continue doing it? This is relevant tech news.
And, yes, I'm exceedingly "negative" about social media these days. The downsides in terms of ripping apart society outweigh the upsides of making a lot of money for a few people.
> That aside, my theory about whistleblowing is that it is a counter-intuitive exercise that results in very little at the expense of orgs tightening their security policies.
Oftentimes this is the point. An organization with tighter internal security policies and lower levels of trust internally is significantly less efficient. Over time, this leads to them being unable to respond to competitive pressures and then getting eclipsed in the marketplace. It's not that the whistleblower kills the organization, it's that the whistleblower triggers the organization into killing itself.
This was the explicit goal of Wikileaks and of Osama bin Laden. They knew they couldn't take down governments themselves, but they can make government so inefficient that their own citizens take them down.
Facebook, Inc. is not just an American issue. For years they have been nefariously bribing other countries to promote their website.
Take for example the internet.org internet takeover attempt in India, or, a better example, Brazil, where Facebook, Inc. bribes local telecom providers to provide WhatsApp access for free (users are not charged for data usage).
Their agenda is clear: abuse and lie through their teeth, amassing as much money and power as possible.
It’s because that blockchain someone made is being used as national currency somewhere (and contributing needlessly to global energy consumption), and because that photo sharing site is causing body image issues, and that ad-tracker is building a digital trail of everything you do, and that ML algo is identifying protestors, and and and …
We are not discussing the shit show, we are the shit show.
The headline on this Hacker News entry says "Incriminating". I didn't find that word anywhere in the linked NPR article. Is what has been leaked in fact incriminating?
>...allegations involve the difference between what Facebook knew about its platform and what it said publicly. He said misleading investors is a crime under U.S. securities law.
yes, she posits that Facebook materially misled investors and shareholders (illegal in the US) and shared it with the SEC under whistleblower protections.
That’s a good question. Leaking internal company documents doesn’t automatically grant someone whistleblower protection. She appears to be trying an angle where she claims that Facebook’s activities have harmed shareholders in an attempt to capture some degree of whistleblower protection, but that’s a huge stretch given that she’s arguing they made choices in the interest of profits without violating any actual laws.
At this point, I think her best chance is to hope that Facebook will simply try to minimize the legal issue to avoid making her into too much of a martyr.
It's not really an "angle", she worked with a whistleblower protection organization and she specifically gave these documents to the SEC, filing 8 complaints with the regulatory organization. All of that communication is protected.
I don't know what the legal implications for leaking to the press are, but that's where her exposure is.
Her bigger risk is probably the employment angle. Hard to picture hiring this person unless you're 100% certain your company's behavior aligns with her morals and you can't find anyone else. Even for people who generally agree with her view on Facebook, employing her presents a pretty serious known risk at this point.
Oh, she doesn’t work in tech now. Her work will now involve television interviews and book-signing tours. From that jumping off point, she’ll be able to do what she likes.
I think that underestimates her abilities. I think it is more likely she will found or at least join a policy focused non-profit that will further her social goals while making use of her existing technical and management skills.
Also overestimates her abilities in a different direction. Becoming a media personality is hard, and takes an entirely different skillset. It just seems easy to people who have an axe to grind against the currently successful crop of personalities.
OP asked about the legal ramifications, not the social and professional implications of her whistleblowing.
But that said, I think there are a lot of cynical takes in this comment thread. I don't think Frances will have a difficult time getting a job. There are plenty of people in tech who think what she did was admirable and are very proud of her, including her alma mater. Sure, there will be many people and companies who wouldn't hire her, but there are also many who will.
That would be my assumption, or some kind of public policy position. Not to say that her concerns aren't genuine, but she has to be aware that for better or for worse she won't have an easy time finding further work in the industry.
She'll be fine. She is now globally famous for doing something an awful lot of people think is brave and admirable, and she has also shown off on TV how sharp and eloquent she is. There are a huge number of companies who will jump on the chance to bring her onboard.
I'm having doubts, seems that most startups and small/medium businesses will not care at all. Have you seen the market? Engineers are ridiculously scarce. Most hiring managers and execs will just say something like "Great, we're too small and don't do anything like Facebook. Just keep her in the code and restrict access to Google Drive"
I can't see her being a coder going forward. She is globally famous for taking a stand (from a very technically informed perspective) on policy issues. A smart company would hire her to be in a very public facing position, that will reflect upon themselves positively.
Obviously, a company that has a huge amount to hide isn't going to want to hire her.
Doesn't the fact that Facebook shares dropped 5%+ yesterday give some credence to that argument? It's obviously more complex than that, but selling is how shareholders would express their agreement.
anything FB does against her will undermine the PR damage control campaigns underway. They may wait for all this to "blow over" and then try to go after her once their very expensive campaigns make some impact.
It's more likely that they will collude with other big tech firms and lobby for aggressive legislation against whistleblowers to prevent something like this from ever happening.
Edit -- there are reports suggesting the whistleblower is represented by the PR firm where the current US govt press secretary held an SVP role, so her case is unofficially aligned with the current administration's agenda: https://twitter.com/JackPosobiec/status/1445438141775683584
Seeing as how the actual mechanics of the leak were orchestrated by an attorney, and the documents were sent to an enforcement division of the Federal Government, it would be a pretty fair guess that they have considered it and are on solid legal ground.
If not, they may simply take the case because it will generate a lot of work for them, and it’s likely that such a public case could attract plenty of donations to fund the cause. Having argued a high profile case against Facebook is a huge reputation boost for a lawyer.
Potentially severe for leaking company IP. There are explicit legal protections for whistleblowing to the SEC, but I don't believe there is anything protecting one's right to go to journalists with privileged information. However, Facebook would be 100% insane to press legal charges, because that just drags this on even longer and reinforces the perception that they're bullies.
I don't think I agree. If they don't enforce a precedent of consequences for leaking confidential documents, it will be a complete breakdown of operational security (everyone will feel free to leak memos and documents).
What could come out in court if they shut her up could be much more damaging to FB's reputation than it's worth, and it'd probably be hard to find a jury likely to convict her when she hasn't directly gained financially from it.
Depends how she plays it. From a work perspective she sadly might have a tough time getting employed at other companies; it will depend on how she plays the next bit, though. That said, I wouldn't be surprised if there's a book that comes out of this, given the amount of media attention already surrounding it.
I am curious what her longer term goals are - she is clearly intelligent, has solid PM experience in the industry and is certainly aware of what she is doing. I'm guessing there is a strategy of some sort at play - maybe leading an NFP for better corporate practices. Best of luck either way hope this changes things in a meaningful way!
My issue is that the government shouldn't allow companies - especially tech companies - to become so big to begin with.
There are ways the government can begin to limit the growth of such companies. There could be taxes based on the number of MAUs, a strict prohibition on acquiring other "social media" companies after a certain size, etc.
In my opinion Facebook should be forced to spin off WhatsApp, Insta, etc. In addition, the newly broken-up Facebook organizations should be taxed heavily (call it a network-effect tax that's progressive and strongly disincentivizes being so huge).
Alternatively, the government could just deem certain internet activity "marked" and make all "marked" internet activity require payment. This would include pornography, social media, etc.
That being said the challenge would be creating a reasonable definition of what "marked" includes.
The fact that the whistleblower has connections to the Democratic Party (donations, legal representation), and is calling for more censorship... makes me wonder about the possible ulterior motives
What is "incriminating"? It is legal to remove protected speech, but it's not illegal to not remove it. COVID and election "misinformation" are, for the most part, protected. In fact, the actual headline is just "Facebook's new whistleblower is renewing scrutiny of the social media giant," with no mention of incrimination.
The Hacker News version of the headline is misinformation.
A frustration is how often these discussions are America-centric; the whistleblower herself pointed out how little integrity work is done on most other languages, and that integrity software is not even pushed to those regions. The specific example discussed was Ethiopia, where Facebook has integrity teams(?) for 2 out of 5 languages.
However, it's remiss to say this is just an FB thing. Try working on hate detection in any complex region: the relative ease of hate speech detection in English (with all its caveats) is nothing compared to the difficulty of working on commonly shared content in other regions.
I'd like it if he at least addressed the real criticisms head-on instead of only glancingly:
>The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don't want their ads next to harmful or angry content. And I don't know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.
The actual argument is that companies like Facebook make money by increasing user engagement and that polarizing and outrage-inducing content is one way to effectively increase engagement (be it deliberate or recommender algorithms simply discovering on their own that this kind of content is effective for engagement).
He also glides over the issue of incentives and the algorithmic aspect. Simply saying "no tech company sets out to make people angry or depressed" is a truism that has no bearing on the argument. Of course their end goal isn't to make people angry or depressed. But is it an instrumental goal? Is it an unintended consequence that they nevertheless find useful, or at least find too profitable and too difficult to invest 100% of their effort into mitigating when it happens?
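To make that instrumental-goal point concrete, here is a minimal, purely hypothetical sketch (in Python) of an engagement-maximizing ranker. None of this is Facebook's actual code; the field names and weights are invented for illustration. Notice that nothing in the objective says "make people angry", yet if outrage-heavy posts reliably earn more comments and shares, they rise to the top as a side effect of optimizing the metric.

    # Hypothetical feed ranker: score posts purely by predicted engagement.
    # All fields and weights are invented; this is not any real system's code.
    from dataclasses import dataclass

    @dataclass
    class Post:
        predicted_clicks: float
        predicted_comments: float
        predicted_shares: float

    # Weights tuned (in this toy example) to maximize a single engagement metric.
    WEIGHTS = {"predicted_clicks": 1.0, "predicted_comments": 3.0, "predicted_shares": 5.0}

    def engagement_score(post: Post) -> float:
        return sum(getattr(post, name) * weight for name, weight in WEIGHTS.items())

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Highest predicted engagement first; the emotional tone of the content
        # never enters the objective, yet it shapes the outcome indirectly.
        return sorted(posts, key=engagement_score, reverse=True)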
This is the kind of vague, wishy-washy, feel-good politics/CEO-speak that everyone hates. It's inauthentic and calculated. Have a genuine, honest, PR/lawyer handler-free discussion about this in public and then maybe a few people can at least try to throw your behemoth of a quasi-nation-state a tiny bit of the benefit of the doubt. With this kind of defensive, political approach, you'll do nothing but lose even more of the sympathy you're already so bereft of.
There is something that doesn't sit right with me about the whistleblower. Yes, she is a data scientist, which gives her many extra points. But she was a PM for only 2 years at one FAANG company, and her actual visibility into what was going on at FB as a huge org seems narrower than what she's trying to comment on (the "I want to fix FB" quotes from 60 Minutes).
Everyone I know who works in FAANG, FB specifically, and PMs even more specifically, really has little idea of what's going on in the bigger picture. They kind of understand their little piece of the puzzle, but even then things can get ambiguous sometimes (and this is intentional, as I understand the inner workings at FB).
Bringing her to the media and Congress as a star witness who understands exactly what's going on seems misleading. People from outside this world are taking her word as if we are hearing from someone in the C-suite or an executive who has been with the company for 10+ years.
That, and she seems to falsely claim on her LinkedIn that she is an officially titled co-founder of Hinge.
What's your point here? This lady is a PRODUCT manager and headed up an entire product, in this case Civic Integrity, which was directly charged with tamping down hate on the platform related to civic engagement. She is not a PROJECT manager like what you deal with on your engineering team. She is by definition an expert on this segment of Facebook, and you're dismissing her wrongly for being a "PM".
She's a Harvard MBA with a degree in Computer Engineering and experience at Google and Pinterest. She's qualified to have a pretty significant role in the company and rightly so earned the job that she has.
EDIT: I looked into the claim that she co-founded Hinge and it's mostly accurate. She worked with Justin McLeod (Hinge CEO) to build "Secret Agent Cupid", which was Hinge before it rebranded with the launch of its new mobile app.
Company strategy isn't only what C-level execs chat about behind closed doors. It's also the prioritization strategy that drives decisions throughout every level of the company.
Facebook focuses on "impact". Any engineer, be they veterans or just 6 months in, can tell you priority is always given to projects that drive "core" metrics up, and projects that dip metrics get axed.
If this simple decision-making strategy is what makes and breaks teams, what decides who is successful and who falls behind, and ultimately what gets built and what doesn't, then it drives the company. And it's something any employee at any level knows.
The response was that she is one of the best people to speak on this topic.
And then your response dismisses the value of her credentials.
In context, her credentials rebut the claim that she has no perspective. A Harvard MBA and data science degree give her a unique and valid perspective on what she is talking about.
Further, I have watched the whistleblower's deposition, and what she said stays within the ambit of what she would plausibly know.
Additionally, I am personally in a position to know this particular space, and she has repeated things that many already know.
Her engineering degree from Olin is more impressive than her MBA from Harvard. If you highlight just one then highlight that. Every idiot with money can get an MBA from Harvard (this is going to be down-voted badly I know; but don't get me wrong, I worked with a lot of very smart MBAs but also with many idiots; the distribution is very wide)
Also, a Facebook PM is not a senior role. While she does have a lot of experience, I doubt she led anything significant at FB with a PM role. With her experience I would expect her to be at least senior PM if not Principal. Probably didn't do well in the interview or maybe concerns about lack of promotions anywhere she worked.
> Everyone I know who works in FAANG, FB specifically, and PMs even more specifically, really has little idea of what's going on in the bigger picture
That's exactly the thing about the banality of evil, any individual person working on some small subsystem or component of something in a FAANG, to them it might not immediately be apparent that what they're doing is enabling the corrosion of democracy and discourse. And enabling the monetization of peoples' attention into endlessly scrolling social media feedback loops.
It's the end result of all the work of hundreds or thousands of 'engineers' and 'product managers' at Facebook, working together on their own projects and combined in aggregate, that turns it into a monster.
You can also see the same thing in hardware and software engineers that work on some small discrete component of some piece of equipment in the defense industry. They might not agree with the total product as it's used in the real world (precision US weapons sold to Saudi Arabia and deployed in Yemen, for instance). But all that one engineer sees is the small item or subsystem that is within their scope of work.
Nothing false about those equivalencies. In the case of cigarettes and Facebook you have the exact same practice of optimizing for addiction with no regard for the harm it does.
The comparisons are completely relevant. All are companies that provide a product with significant negative societal externalities. The only difference is that social media has yet to be regulated in any way.
The similarities being talked about center around those companies specifically leveraging the mechanisms of addiction in order to make more money in spite of the fact that doing so harms people.
I think a very good argument can be made that Facebook (among others) are doing this very thing.
Let me put it this way: how would you go about measuring the harm that Facebook or any other social media company causes?
With cigarettes, oxycontin, or weapons manufacturers, you can look directly at the physical harm they cause. We have statistics and studies that require little effort to interpret.
With social media, it is nearly impossible to connect physical or psychological harm to a platform over the course of someone's life. The papers that draw conclusions in this space are based on limited studies and polls that could be influenced by any number of external factors. We judge these things based on little more than feeling.
Now, I think it would be great to be able to make informed decisions based on diligently collected data (and maybe that's what we should be fighting for) but we don't have that right now. Why, in this case, do we seem eager to throw scientific rigor and frankly due process out the window?
> With social media, it is nearly impossible to connect physical or psychological harm to a platform over the course of someone's life. The papers that draw conclusions in this space are based on limited studies and polls that could be influenced by any number of external factors. We judge these things based on little more than feeling.
Yeah, you'd need to do some kind of A/B testing on unsuspecting users and see if you can manipulate their mental health for the worse. Fortunately this sort of thing would never pass an ethics review board. Unfortunately, Facebook either didn't have one at the time or didn't listen to it, because they literally did exactly that.
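For anyone unfamiliar with the mechanics, such an experiment is structurally just an ordinary A/B test. Here's a generic, hypothetical sketch (invented names and toy numbers, not the actual study): split users into two groups, expose each to a different feed variant, and compare an outcome metric between them.

    # Generic A/B test shape, purely for illustration.
    from statistics import mean

    def assign_variant(user_id: int) -> str:
        # Crude 50/50 split for the sketch; real experiments use proper randomization.
        return "modified_feed" if user_id % 2 else "control_feed"

    def outcome_gap(outcomes: dict[str, list[float]]) -> float:
        # Difference in the average outcome metric (e.g. some mood proxy) between groups.
        return mean(outcomes["modified_feed"]) - mean(outcomes["control_feed"])

    # Toy numbers just to show the shape of the comparison.
    toy = {"control_feed": [0.1, 0.3, 0.2], "modified_feed": [-0.1, 0.0, -0.2]}
    print(outcome_gap(toy))  # a negative gap would suggest the modified feed lowered the metric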
The misbehavior of Facebook has been well documented and has been going on for many years. Facebook has been shifty and deceptive in their responses, and there is literally no reason to give them the benefit of any doubt at this point.
Your comments about hard data are well-taken, but you talk as if we have no, or very little, evidence that harm has been (and continues to be) done. I think that we have a lot of evidence to indicate that there's a real problem here -- and one that Facebook continues to downplay. None of that evidence is as clear-cut and solid as physical injury is, but that's to be expected with social harm.
> Why, in this case, do we seem eager to throw scientific rigor and frankly due process out the window?
I'm not eager for that at all. On the other hand, the evidence we do have very strongly indicates (but does not prove) that there's a real, serious issue here. Are you suggesting that we should ignore that? If we waited until there was zero uncertainty on things before taking action, the world would be unacceptably dangerous.
Facebook could have helped on this front by being honest and taking the issue as seriously as they take profit-generation. But they chose not to, and now, after so many years of deceptive and abusive behavior, we have no reason to trust them anyway.
Humble Oil (now ExxonMobil) understood CO2 emissions caused by fossil fuel consumption were causing climate change on a catastrophic scale [1]. What followed were decades of denial, push-back and intentional disinformation. Only fairly recently have fossil-fuel companies admitted the product they have so eagerly promoted and sold for decades is destroying life on Earth.
Perhaps what's at stake here is not the careful balance of life on a global scale, but what's worrying is that it's a similar pattern:
- Something bad is going on. Researchers find it can be attributed to the cash cow of a multi-billion-dollar company.
- The corporation denies it. They have their own research showing it's good.
- Some time later, the truth comes out. The company knew all along that what they sold was bad, and any internal efforts to make it better were shut down because they hurt the bottom-line.
The planet isn't at stake here, but a lot is: teenagers shouldn't be stressing, nor should misinformation keep spreading, just so Facebook can keep milking its cash cow. The point of comparison isn't that Facebook is literally causing deaths, but that they are putting profits before the well-being of the people they know they're pushing a harmful product to.
The discussion around FB, and social media in general, has gone off the deep end. There is no nuance. Facebook is Hitler, Philip Morris, and Purdue combined.
More like 15 years. I've known Frances since 2009, when she was a PM at Google, where she worked on a few products including Google+. She also spent several years at Pinterest which, while not technically a FAANG, is certainly a major social network that does lots of algorithmic ranking. She is definitely an expert in the topics she is testifying about.
My point was that people who work for 2 years at a FAANG, in one company, are far from understanding the complete scope. I did not mean she was 2 years into her career.
It can take 6 months to just fully understand what a 7 person team does
After 2 years at Facebook, does she really understand the full strategy 4 levels above her? Her leak is super important, but she doesn't have all the answers the media is claiming she does
I think her fundamental argument isn't even really about Facebook specifically. It's about algorithmic content ranking being intrinsically harmful to society. The thing that's specific to Facebook is that they actually have a ton of research quantifying this -- which she has released -- and yet they are still all-in on it.
It doesn't seem like there's a lot of complexity to understand here about how Facebook works as a company. Their commitment to optimizing metrics over all else is well-known already. The interesting thing is the research showing how harmful that is.
> In June 2019, she joined Facebook. There, she handled democracy and misinformation issues, as well as working on counterespionage as part of the civic misinformation team, according to her personal website.
So she's not just a random "PM" in Facebook; she was intimately involved in what she's talking about. Please stop trying to spread disinformation.
To me at least, something feels very off about this whole situation. Out of the blue some larger-than-life person comes out of the woodwork and is lauded with attention while the big news outlets make this massive push against Facebook, all the while congress is holding hearings and a massive outage happens at Facebook right after the New York Times published an article titled "Facebook Is Weaker Than We Knew."
Edit: She is remarkably calm, well-spoken, knowledgeable, and articulate for someone testifying before the Senate for the very first time - all while being broadcast around the globe, live on television. Perhaps she's simply a natural, but I sense she received some coaching and preparation beforehand.
Combine all of this with politicians chomping at the bit to fight online 'misinformation' and I become very skeptical.
It certainly could all just be a perfect storm and Mark Z. has some terrible luck. But again, my intuition is telling me there is something coordinated going on here.
>Everyone I know who works in FAANG, FB specifically, and PMs even more specifically, really has little idea of what's going on in the bigger picture.
First, I disagree with your assertion, based on my experience in, well, working anywhere. Two years as a PM in the field on which she is blowing the whistle is not a lack of credibility (others are saying her PM experience at FB dealt with the kinds of issues that would make her pretty informed).
Second, if what you say is true, then what kind of Hell is Facebook? This behemoth of a company, this pillar of society is somehow so large that only a handful of privileged overseers can possibly understand its mechanics?
While as a PM (or IC) she'd be primarily working on a scoped area of work, the reality is also that she likely didn't come to learn all this on her own. At nearly any large tech company it's pretty easy to get in touch with someone on another team and learn more about their work, etc.
The fact that she's been able to get all this context and significant supporting documentation points to her not having gone about this totally solo (despite her claims).
There's also something to note about things being intentionally ambiguous-- that's by design to prevent most ICs from putting together broader context. But it's not clear that it would prevent a highly motivated employee from amassing broader context. Like I said before, with even a bit more context other ICs can be looped in to fill in the gaps.
Right, but my point is that even if she ran the entire civic engagement group it's naive to think she couldn't find out more about what's going on in other orgs if properly motivated. She'd have to go out of her way to do it and do a fair amount of digging but it's far from impossible.
Maybe she read the internal docs and meeting notes? Most employees wouldn't notice a proverbial elephant farm in another department as long as it didn't involve them directly, even if it was advertised in emails and on billboards...
She has proven evil happened; we're now asking who directed it. The first thing we need to know is who she reported to, and who the most senior person in her team's meetings was.
From there just keep going up, it will stop at some point (I have no doubt there will be zuck martyrs, just wondering at which level)
The important thing here is that she understands the B2C revenue model, which is to test, test some more, and let the data optimize your funnel. That's the story, and that method has lots of unintended chickens coming home to roost.
So, what, we should just wait until Zuckerberg decides to get his act together one day and admit he's been a wee bit evil?
The whole point of whistleblowing is so the average worker can tattle on the unethical decisions made by leadership. The C-suite isn't going to volunteer for the guillotine.
The obvious options:
- she went in with an agenda from the beginning intending to get this stuff
- she went in, discovered that the sociopathic pattern she'd been warned about and had a nagging doubt about was worse than she thought, got religion, and got the stuff
As an aside,
Facebook hires smart. Both of these and her success are consistent with a smart and motivated person. Isn't that what we're supposed to be selecting for? Isn't drive and high functioning what Facebook is filtering for?
More: isn't setting aside moral misgivings and agonizing, in pursuit of achievement of your mission, EXACTLY what Facebook is trying to filter for...?
Always a shame when the sword cuts the wielder!
But back to the point: does it matter which of these is true?
Not to me, or to democracy; the bottom line is that it takes an action like this to force the endless malfeasance, amorality, and actual destructive behavior into public consciousness. Not least when fighting a machine that seeks to stifle criticism and control narrative: this is indeed exactly what she is bringing receipts on.
The supposition seems to be that she is a plant, sent on a mission to bring down Facebook.
Let's say that's the case.
I have no issue with that, as Facebook belongs down.
I don't care who paid for this skullduggery—especially if it's the US taxpayer.
The GOP has done everything it can to curtail the ability of the state to challenge the power of accumulated capital.
I would applaud game-leveling, asymmetric-warfare-style methods to do the state's work.
Indeed, this would be a remarkable and rare return on taxpayer money, should it bring about the dismantling of their profoundly caustic monopoly.
Let's assume Facebook does invest into policing their platform to the extent necessary for it to not result in political consequences ("misinformation and disinformation"). There's still the rest of the internet. There's still all the open-source tools available that can be used for good or ill. The problem is much larger than Facebook.
I read a line in the NYT article that makes me judge all software engineers at FB:
> “She didn’t know how basic stacks worked,” wrote one Facebook engineer, referring to a term used by the engineering team to describe how data is structured in computer programming. He said all of her testimony should be disqualified.
For anyone like me who actually likes keeping up with friends / signing up to groups and events on FB, check this FB alternative out: https://mewe.com/myworld
The subscription based revenue model by itself makes it hugely different from FB and unlikely to evolve similarly. And while there are of course network effects etc. , this one seems surprisingly large already, so it might just have a chance...
Heaven forbid parents be held responsible for their kids' mental health. We saw this with rock 'n roll, video games, and marijuana. Your kid saw someone possibly wearing a more expensive outfit on social media? Scarred for life, right? Imposing stricter age verification to clamp down on trafficking and the like is perfectly reasonable, but this social media demonization is the latest moral panic, and I hope it dies down before Congress does something stupid.
Am I the only one who found the Facebook Files reporting very overwrought and stretched? There were some issues and red flags in there, but mostly it seemed... not that bad?
The link claims that thousands of documents were shared. Wonder if we will get a list of what these were at some point - perhaps just title if not the full docs.
She said that she began thinking about leaving messages for Facebook’s internal security team for when they inevitably reviewed her search activity. On May 17, shortly before 7 p.m., she logged on for the last time and typed her final message into Workplace’s search bar to try to explain her motives.
“I don’t hate Facebook,” she wrote. “I love Facebook. I want to save it.”
From what I remember, right around the time FB was picking up, MySpace was facing a lot of scrutiny especially around predators preying on children, which was one of the main, if not the only, reason for MySpace's downfall.
The time is ripe for a competitor to enter this space.
No. Beyond the USA, Facebook and its app ecosystem are used by people all over the world as a way to do business. This is especially true in developing countries: people sell goods via Facebook, advertise, and communicate with their customers via Facebook... MySpace never had as much reach and was never central to anybody's life. For many people all over the planet, Facebook and WhatsApp are the only apps they use. A lot of people on HN, because they are Westerners, completely fail to understand that and only see how Facebook "fails to moderate the speech they don't agree with".
> MySpace was facing a lot of scrutiny especially around predators preying on children, which was one of the main, if not the only, reason for MySpace's downfall.
No, Myspace failed mostly for racist reasons, ironically, when young white educated people left Myspace for Facebook when the latter was deemed, and I quote, "less ghetto".
As a technologist, to me this is technology being disruptive without sufficient control, in this case disruptive to human communication. And FB also doesn't have a comprehensive set of human-ethics checks sufficiently in the loop.
Why did FB do this? In one of the 60 Minutes videos, she identified a disconnect between the FB Civic Engagement team's recommendations and FB acting on those recommendations.
She says people at FB aren't trying to be evil, however:
> "... also detailed how she says Facebook quickly disbanded its civic integrity team — responsible for protecting the democratic process and tackling misinformation — after the 2020 U.S. election."
What ethics does FB corporation stand for, factually? [Spare the cynical "profit" ethic - other than that.] Is it "connecting people for .. " some reason?
I mean, disbanding the civic integrity team after a huge civic event seems reasonable? It could be that the team was more of a task force that comes together when large events occur. We don't really have much insight into how they spun up a war room for the 2020 election.
A related situation might be how Lyft/Uber have massive war rooms and special-ops teams to handle Halloween and New Year's (massive heavy-traffic events on these platforms). A special-ops team and several marketplace teams get pulled in, but they get "disbanded" and go work on more standard things afterward.
Unless that civic event had significant unresolved elements, such as the sitting president failing to actually concede the election and instead contesting it.
And it's not like Facebook isn't a global company that deals with a more or less constant deluge of important elections in many countries.
Not to defend FB, but could we please dispense with the notion that she's an actual "whistleblower" at this point? She retained an ultra expensive law firm (same lawyer as in Ukraine-gate), a PR firm, got 60 minutes interview, got verified on twitter within 2 days, got a congressional hearing within a week, and is currently organizing a PR tour in Europe. It's pretty clear this is being funded and directed externally, in a coordinated fashion, in an attempt to ensure Zuck's full compliance ahead of the midterms, without which they are going to lose. Greenwald wrote about this on his Substack earlier today: https://greenwald.substack.com/p/democrats-and-media-do-not-...
ShareBlue shills are out in full force: first downvote within half a second, a personal record.
I'm not outraged or surprised by the revelations. Everyone knows there is toxicity on Facebook and other social media products.
Social media is sometimes toxic because people are sometimes toxic. Moreover, people are drawn to toxicity because it is grotesquely fascinating. This is also the case for movies, video games, and other media; much of it is anti-social and misleading.
Condemning Facebook for the toxicity of some of its users is like condemning the manufacturers of mirrors because you don't like what you see in them. If you are appalled by society's propensity for producing and consuming toxicity, then consider directing your attention to shortcomings of our education and healthcare systems rather than a company that is simply providing a useful service to its users and value to its stockholders.
EDIT: I removed a misquote from this comment after another user pointed out my mistake.
There do appear to be a few legitimate concerns worth investigating, but it’s starting to feel like the media has sensed that Facebook is the villain du jour and they’re throwing everything at the wall to see what sticks. These stories seem to dance around the subject of what exactly Facebook did, but instead focus on the existence of a whistleblower. If there was a story here, I feel like they’re working hard on overplaying their hand at the risk of losing the audience once the initial frenzy wears off. That’s fine for media companies who can move on to the next bogeyman, but it’s not going to help the underlying cause.
Did you actually read the article or are you just trying to create controversy? That quote comes from a Facebook spokesperson trying to push back against the negative attention.
> Facebook issued a lengthy statement from director of policy communications Lena Pietsch titled "Missing Facts from Tonight's 60 Minutes Segment."
> She pointed to Facebook's investment to monitor for harmful content; disputed the way Facebook's own research on teenagers' mental health has been reported; and rejected the claim that the social network has furthered political polarization.
How can you say "If there was a story here, I feel like they’re working hard on overplaying their hand at the risk of losing the audience once the initial frenzy wears off." when you have clearly demonstrated that you didn't bother to read the story in the first place and just like to cherrypick quotes out of context to prove a point?
EDIT: And now the OP has edited their post but still stands by their claim that this is a nothingburger, without evidence this time.
For context, the OP originally quoted "She pointed to Facebook's investment to monitor for harmful content" out of context, as evidence of something good that Facebook did that this article is trying to trump up into something bad.
Because at the time it was the top comment, making speculative claims and taking quotes WILDLY out of context. Accusing the author of trumping up an issue when the OP is quoting the rebuttal from the company is not just misleading but harmful misinformation, and it should rightfully be called out.
I have to say it's extremely bad form to retroactively completely reedit your post so that your original statements (which were shown to be wrong) don't show anymore.
I removed the misquote and thanked another user for pointing it out, but the rest of the comment still stands. Deleting a comment isn’t an option after people reply, so I don’t know what you want me to do. I apologized, removed the mistake, and thanked someone for pointing it out.
> Facebook issued a lengthy statement from director of policy communications Lena Pietsch titled "Missing Facts from Tonight's 60 Minutes Segment."
> She pointed to Facebook's investment to monitor for harmful content; disputed the way Facebook's own research on teenagers' mental health has been reported; and rejected the claim that the social network has furthered political polarization.
You are quoting something Facebook used as a defense
Frances Haugen stands to earn hundreds of millions to billions of dollars from the SEC whistleblower award. That seems relevant to the conversation when she's accusing FB of prioritizing profit over the public good. What are her priorities?
In terms of teens' mental health personally I feel that school affected me more negatively than social media. I would be surprised if the majority of those who responded to the surveys didn't feel the same. It seems the likely result (if any) of all this will be to further entrench FB by making onerous regulations that only they can afford to comply with.
Imagine unironically whistleblowing and stoking outrage that the censorship isn't enough.
It is bad that the company knew that Instagram is harmful to teenagers' mental health and still marketed to them anyway. But "Facebook doesn't do enough to stop the spread of misinformation" is the most ridiculous narrative on something like this I've ever heard, and honestly I'm not surprised such a narrative got a 60 minutes special. The problems I have with Facebook are the real problems, like how it is designed to get people addicted and steal their lives from them, how Instagram is designed to instill envy so as to maximize use. These are real problems, bad problems. "Facebook isn't doing enough to prevent vaccine hesitancy" is not on anyone's radar that seriously cares about the effect Facebook is having on our societies, the only people hammering on this narrative are misdirecting at best, working a propagandistic angle most likely.
For me it ranges. Some people are opposed to the covid vaccine mandate, some people don't want it for themselves, some people (I think only one or two) are anti vax types, some have it but don't think others should have to and some think you're evil if you disagree that everyone should have to. Precisely what information people internalize on Facebook isn't the common thread there, the common thread is that it consumes their social life and has a near monopoly on their information availability and therefore worldview. Facebook's problems go beyond what information is available on it.
The government doesn't belong in the bedrooms of the nation, but somehow should decide which meme graphics two adults can share online. Lets Go Brandon.
Meta: Be warned that anyone who is anonymous and commenting on the validity of the whistleblower may be speaking in the interests of Facebook and spreading disinformation.
This story is on the current frontpage with 3 separate threads, two even linking to the same outlet (NPR). When you go into the threads, criticism of this call for greater government regulation of speech is met with personal attacks and other fallacious arguments. Might be just humans being overly excited humans, but this sure doesn't seem very organic.
My $0.02, but it seems like the moderation team (dang in particular) has been less active over the last 24 hours.
Usually there are reminders in large threads to click the "more" button to see all comments, but there was no such comment on the huge "Facebook-owned sites were down" post yesterday. They usually also merge duplicates and add top level comments with links to the other conversations taking place on the same topic.
The moderation team usually does a lot to rein in problem behavior, but there seem to have been a lot of questionable comments making it through lately.
I have also noticed a lot of comments by users with "throwaway" in their usernames - seems like a shift in who is using HN and how they are using it.
She is not arguing for censorship at all. She's actually arguing that content-based censorship doesn't work because the AI algorithms are so inaccurate.
She is arguing against algorithmic content ranking, and in favor of chronological feeds, as well as other measures that do not attempt to judge the content itself.
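The distinction she is drawing is concrete: a chronological feed needs no judgment about content at all, while a ranked feed requires a scoring function, and whatever that function rewards rises to the top. A minimal sketch of the two approaches (hypothetical field names, not anyone's real implementation):

    # Two ways to order the same set of posts.
    from datetime import datetime

    def chronological_feed(posts):
        # Newest first; no model and no judgment about the content itself.
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)

    def ranked_feed(posts, predict_engagement):
        # Requires a scoring function; whatever it rewards (including divisive
        # content) floats to the top.
        return sorted(posts, key=predict_engagement, reverse=True)

    posts = [
        {"id": 1, "created_at": datetime(2021, 10, 4, 9)},
        {"id": 2, "created_at": datetime(2021, 10, 5, 8)},
    ]
    print([p["id"] for p in chronological_feed(posts)])  # [2, 1]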