Many are confusing "censorship" with "content moderation". The difference is very important.
Censorship is when you are prevented from publishing on your own platform. Content moderation is when a platform owner chooses what can be published on theirs.
Twitter decided they did not want their platform used to spread what they considered to be Russian disinformation and propaganda, but they in no way prevented the NY Post or anyone else from publishing the story.
In the same way, when Hacker News and other well-run platforms remove or hide abusive and troll-like comments they are not infringing on anyone's 1st Amendment rights. A site does not lose the right to moderate content just because they become successful.
I am always impressed when people make these kinds of absolute statements about the primacy of private property rights.
Now, let's consider:
* Alice should be able to choose without restriction to whom she is going to rent
* Bob should be able to choose without restriction who's allowed to play golf at his club
* Carlos should be able to choose without restriction to whom his bank is going to lend money
etc.
I am not saying I agree with those statements. I am saying that when you say "Twitter is a private company. They can choose without restriction who's allowed to use their platform," you are not saying anything substantively different from those statements.
In a universe where the ability to put your speech in front of other people depends on being able to post it online where they might see it, there are trade-offs involved that seem to be conveniently shoved aside when it concerns news you do not like.
I prefer the cacophonous chaos of everyone being able to tell me about stuff they deem important. One step further, I prefer to live in a society that can learn to live with that without falling apart.
> I am saying when you say "Twitter is a private company. They can choose without restriction who's allowed to use their platform,"
That's a straw man. I never said anything like what you are pretending I said. They obviously could not ban users on the basis of race or religion or a number of other criteria.
But that's not really what's under discussion. The question is should Twitter (or other platforms) lose their right to moderate content when they become successful?
Your analogy is broken. Clever to try to compare Twitter to discrimination though.
In all of the situations, the principals have broad discretion to make choices about their property. Alice can turn away people with credit problems. Bob can require a specific handicap to play. Carlos can choose to not loan money to high risk professions.
They cannot say “no black people at my apartment complex”, “no golf if you are Jewish” or “no loan if you’re gay”.
But isn't this sort of sneaky racism exactly what people claim is such a problem today? There's no overt claim of inferiority or explicit denial of opportunity, yet there is still "systemic racism".
The analogy is that Twitter can't censor stories that benefit republicans, but if the "reasons" for censoring always seem to work against republicans and are applied inconsistently, then perhaps things aren't as they seem.
It can be, and the line may move. Sometimes “systematic racism” or similar arguments are thrown about without context, or with a local context that doesn’t make sense to others.
The Republicans have a serious problem, and it isn’t Twitter. Blaming media outlets for not paying attention to things is a well-worn if dubious path. It’s hard to give this story credence given the provenance of the information and its obvious problems.
It is an analogy, and it's already established we aren't discussing law. The question is the principles, causes, and reasons at play and how they relate to other principles, causes, and reasons.
I think it would be helpful to forget about the "Republicans" and think about other people who might hold other opinions which may not be shared by the owners of one of the dominant platforms for announcing your views.
No one is saying that the New York Times has to publish the story. However, if someone wants to publish a story and announce it to their followers on a platform, citizens need to think long and hard before agreeing to "whatever the shareholders want goes."
The fact that they do not sell a product to the recipients of information at a positive price makes it harder for competitive alternatives to gain any traction at all. Once a platform like this achieves a certain reach, it is unlikely to be effectively challenged, so we need to think hard about how much power we want it to have in shaping the conversation: both conversation we like and conversation we do not like. It is not sufficient to think in terms of your affinity to the dominant forces. You have to think about what kind of society you want to be in when you are in the minority.
One person’s chaos is another’s vitality. Every human on earth, by living in the natural world, has chosen chaos/vitality at some level. We also “moderate” that entropy in lots of convenient and comforting ways.
Cybernature is a new experience. As a species, we are working out what level of moderation we would like.
Explorers tend to be rough-and-ready sorts, and so veer toward the vitality side of things. They know how to make the chaos work for them, for good and/or evil. Since there isn’t the downside of “instadeath” as in the physical world, more risk-taking, and thus more chaos, may be justified.
Settlers tend to like fences, highways, channel markings, flight corridors...structure. There are usually eventually more settlers in any area, so their view prevails in the long run.
(You can be an explorer in one area and a settler in another...it’s not some permanent psychological feature. You can demand political content freedom from de facto pseudocarrier monopolies while being happy with a locked-down OS for self-driving cars.)
In computing, the barrier to entry for exploration is so low and it is so new (in the long view) that the percentage of settlers is pretty low, hence the cultural bias toward freedom (including content) rather than control in these forums.
[Personally, I trust the American people to curate their own content. I expect the educational system to educate the citizenry to be skilled enough in critical thinking to be able to curate the content presented.
But I expect the marketplace of ideas to police itself against systematic biases and inaccuracies, either with effective editing of its own content and/or in actively and effectively supporting a “loyal opposition” to its views.
Twitter seems to be doing neither, because it’s much closer to being a de facto common carrier monopoly than a media channel. Common carrier monopolies were accepted and successful for a hundred years in the US, but only with heavy Federal regulation. The hip, cool alternative: break them up.
Great truths from Gen X, the generation to which you can’t easily mass-market.]
Twitter and Facebook are getting closer to: the post office not delivering your mail, or the power company turning off your electricity despite you paying your bill, or the phone company cutting you off after you say the wrong thing...
Content moderation is a form of censorship. Censorship isn't always bad. You can look up the history of the term if you want (hint: it shares the same origin as the term "census").
A closely related phenomenon to the pathological applications of censorship is the abuse and redefinition of language for political purposes like what you're doing.
Hunter Biden left his laptop at the repair shop and never paid for the repairs. After waiting the legally required 90 days, the repair shop became the rightful owner of the laptop and its contents.
Once they owned those photos, they had every right to publish them or do whatever they wanted.
The least the press could ask VP Biden would be, "will you pledge that your son will not be involved in any sensitive discussions or allowed access to any sensitive material?"
Even if there were no corrupt actions at all, Hunter Biden represents every kind of HUMINT leverage imaginable.
So if I sell my laptop that still has some photos on it, the buyer would own the copyright over those photos because they own the laptop?
If I deleted the photos from the laptop, but they used a recovery tool to restore them from the disk, would they still own the copyright then?
What if I have the same photo on two different USB drives, and I sell those to two different people, who owns the copyright then?
That doesn’t seem right. Copyright is not generally attached to a physical medium. You don’t lose copyright by losing ownership of the physical medium that holds a copy of the photos/source code/etc.
Oh yes, you're referring to the pictures of Hunter with a crack pipe in his mouth and things like that. You're correct. I took that to be just proof that they had the laptop. Everyone knows that Hunter has a crack addiction problem.
I think I totally forgot about that because of the censored sex tapes and nudes being published by Gnews which are way more salacious.
Content moderation decisions can adhere to this principle or not. Removal of viagra bot spam from a blog comment section is a clear example where content moderation adheres to the principles of free speech.
Removing content on a platform like Twitter simply because the Silicon Valley elitists/leftists that run Twitter don't like it is not in keeping with the principle of free speech (for the record I interviewed and got an offer from Twitter but declined). A good razor to use is to see if the rules are being applied in an uneven way. The principle of freedom of speech is an application of the idea of epistemic humility.
The point you were trying to make is "Content moderation falls under the first amendment" which I can't disagree with.
It's important to understand the difference between a moral principle and a legal instantiation of it (the first amendment). I hope that helps clear things up for you.
How does removing “Viagra bot spam” adhere to the moral principle of free speech? Why does the New York Post get special moral privileges that Bob’s Viagra Store does not?
This is the frustrating thing about this debate: one side tries to position itself as “anti-censorship”, when really that position only extends to cover speech they care about.
To accept the legitimacy of the principle of freedom of speech is to make a value judgement. If you want some pure criteria you can use to make content moderation decisions, then the purest criteria is no content moderation at all.
You can remove obvious spam (e.g. viagra advertisements posted by bots) and still adhere to principles of free speech. Advertisements posted by bots are not written by human beings acting in good faith in the pursuit of truth. And I mean that very broadly. Even trolling can be done in good faith.
> Advertisements posted by bots are not written by human beings acting in good faith in the pursuit of truth.
So if Twitter didn't believe that the New York Post article was written by people acting in "good faith in the pursuit of truth", it's okay to censor?
The intentions of the speaker and the value of their speech are subjective. It's fine to argue that we should be okay with censoring some thing you don't like. But at least acknowledge that you're not making an appeal to "the principle of freedom of speech" — you're just arguing for others to adopt your own boundaries of what's acceptable.
> you're just arguing for others to adopt your own boundaries of what's acceptable
By virtue of operating under the principle of respecting free speech you are making a value judgement and asserting a set of values. There is no panacea or pure criteria you can use besides no content moderation at all. I give the example of spam posted by bots as what I think is an obvious and non-controversial example of content that can be removed that doesn't violate the principle of free speech. Another example might be someone who simply replies "f* you" to every comment on a thread.
I don't know how to further explain this or qualify this. I think you just fundamentally don't understand or haven't actually reflected on what I'm saying.
You are correct, I don’t understand what you’re saying. I agree that there is no panacea or set of pure criteria other than not moderating at all. But there is also no singular principle of free speech beyond that, either — just different sets of tradeoffs that try to cultivate discourse that the moderator thinks is valuable. My issue is that people tend to present their own preferred set of tradeoffs as the One True Set that embodies the principle of free speech.
You mentioned “good faith” before, so let’s say that’s our operating principle: all parties must be speaking in good faith. Now consider that Twitter suppresses the New York Post because they believe they’re publishing in bad faith. Twitter is still adhering to our set of free speech tradeoffs. So why is this comment section full of people saying they’re not upholding the spirit of free speech?
It’s because they want Twitter to make a different set of tradeoffs. That’s fine, and I’m happy to have that discussion. But not when it masquerades as a discussion about whether Twitter suppressing this article is somehow incompatible with free speech as a concept.
It really comes down to whether or not you believe in objective morality. If you do then there is a sensical notion of free speech (or any other principle) even if no one person has a complete picture or understanding of what it is right now (although I would argue that we have a much better understanding of free speech than we do 6000 years ago). It is something we can strive for and recognize since it is an objective thing.
To reiterate my example, I would say a reasonable person living in 2020 would say that removing bot spam from a comment section is a content moderation decision that is in keeping with the idea of free speech.
Even if you believe in objective morality, I think my point about how the discussion should go stands.
Let's say that your definition of free speech is the objectively correct one. You then have to convince people to adopt that framework. You can't appeal to the objective definition; that's a circular argument. So you have to do it on the merits of the tradeoffs, like "bot spam is noise that detracts from a conversation".
To your example specifically, my own opinion is that it depends. I'm fine with removing bot spam from a comment section. But move down to more "infrastructural" layers, and I become less okay with it. For example, I don't think ISPs should try to block spam; they should be entirely agnostic to what content passes through their pipes.
"Content moderation" is about the speech someone running a website is willing to participate in. Requiring them to publish everything their users post would be what's called "coerced speech", which happens to be the single biggest no-no of the legal concept of freedom of speech.
Forcing someone to say something against their will is considered far worse than preventing someone from saying something they want to say. That's how "warrant canaries" work: the government can stop you from disclosing the court order you received. But they cannot force you to continue saying that you never received such an order.
I think you're stretching the idea of compelled speech to the point where it loses its essential meaning. Compelled speech is when someone forces you to say something, not when you are prevented from disallowing someone from saying something under their own name on your website/property.
When I comment on HN I do not comment as HN itself.
What you're talking about seems like an application of the idea of private property rights (i.e. kicking someone out of your house/office for saying something you don't like).
So this would make censorship a form of free speech.
We've reached a contradiction, so there must be an error in the chain of reasoning. My money is on "content moderation is a form of free speech" being wrong.
No, it’s only that people are using differing definitions of “censorship” and talking past each other.
Content moderation is, in fact, a form of free speech. Twitter (for example) is allowed to choose what they want to host on their website, and what they want not to host on their website. This is free speech – it’s their site, one can’t make them host what one writes. If choosing what to remove from their website is deemed “censorship”, then there’s no contradiction.
Content moderation isn't free speech, it's freedom of association, or in constitutional terms, freedom of assembly.
Consider that Ben Franklin wouldn't publish libelous or slanderous material submitted to his newspaper - clearly a form of "content moderation." He believed in the right of the people submitting such things to express them and would suggest they try elsewhere, but also believed in his right not to publish those views on his "platform."
I don't think anyone would accuse Ben Franklin of not believing in free speech, so clearly a reasonable person can square the apparent circle. Freedom of speech isn't the only right relevant to speech, and freedom of speech doesn't guarantee a right to speak on all platforms.
I'm not sure. What is the exact contradiction that you see?
Viewing moderation as another form of speech seems reasonable to me, or at least as reasonable as viewing a campaign contribution as a form of speech. Perhaps the actual issue is that "free speech" cannot be a guaranteed right shared by all, but is inherently a rivalrous good.
My money is on censorship being a form of free speech. Not a concept that I've heard before, or that (at a glance) makes any kind of sense to me. Can you walk me through that equality?
That is a really loose definition of censorship. Most dictionaries say it is the "prohibition or suppression" of content, which I don't think content moderation is. It requires trying to stop the spread of the content itself, not just the content on a particular medium.
It's really not. Not legally or in the common use of the word.
If I kick an abusive troll off my message board that is moderation. If a judge orders someone not to use a computer for two years, that is censorship.
To conflate the two is to make the term "censorship" almost meaningless. Platforms all over the web filter spam, pornography, abusive behavior, off-topic posts, etc all the time. It's not censorship.
Similarly, if Twitter was legally forbidden from marking tweets with their fact-checking tags, that would be censorship.
It's a matter of scale. As the article says, the current Democratic Party alliance for suppressing damaging stories is so broad that even though you can publish your story, the chances of it reaching its intended audience at scale are slim.
Also - if Twitter gets to decide what gets published on their platform, they are an editorial organization - something they've vehemently denied being in the past.
No, prior restraint is a form of censorship where you are prevented from publishing on your own platform. "Content moderation" is a euphemism for censorship you agree with. "Content moderation" is to platform censorship as "prevention of material support for terrorism" and "maintenance of the public order" are to prior restraint.
Twitter did not decide that in good faith. First, because they are used as a public platform, like it or not, and as such they are not expected to moderate anything. People follow or block what they want. And second, because they are biased in their censorship and are not forthcoming about it.
> as such they are not expected to moderate anything
When even the president's tweets have been squelched or tagged, I highly doubt that anyone reasonably familiar with Twitter over the last couple of years would have that expectation.
I would say you're proposing a false dichotomy. There are 4 options or more.
In the case of Twitter, in order to maintain their "good samaritan" protections, the extent of their "content moderation" is quite specific:
any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.
The NYPost article was damaging to a political opponent, to be sure. It was not obscene, lewd, or filthy - Twitter clearly allows porn, so those are immediately ruled out. It wasn't violent by any means.
Was it harassing or otherwise objectionable? Well I think there's some good debate possible but that debate is cut off immediately because the "content moderation" must be done in good faith. This very clearly is violated.
So Twitter did not moderate content as per the laws they must obey. Therefore we now have two options: Twitter loses platform protections and becomes directly liable for any and all content they publish, meaning anything any anonymous person posts, OR they stop their censorship.
Doh. By the same logic China (or other oppressors) is not censoring anything because people can travel to Vietnam (or Japan for that matter) and they can tell anyone what they want.
You can call it "censorship" or "content moderation." I don't care, the distinction is meaningless for this conversation. The only meaningful argument is what Twitter (for example) CAN do (legally), and what they SHOULD do (morally).
Clearly, Twitter is legally in the right to suppress this information, because it's their platform. They can set the content rules as they like, as arbitrarily and capriciously as they like.
More of a judgment call, but I would argue that Twitter is morally in the right as well. This is obviously a case of carefully engineered disinformation, using America's value of freedom of speech against us. If they don't suppress it, Twitter is actively aiding bad actors, with very serious consequences. Twitter does not want to be in the business of content moderation, but in this case the information in question is so obviously falsified, it is spread so obviously in bad faith, and its consequences are so obviously dangerous to the nation, that they are justified in taking a moral stand against it.
Good question. The answer is yes. Aside from the factual basis (or lack thereof) of the information in question, there's the question of how it's been used. Giuliani, obviously the bad actor of this story, has been sitting on this laptop for a year. If he wanted to have a serious discussion about Biden's alleged impropriety, he should have released it early. At this point in the election cycle, it's a blatant attempt to grab a news cycle and force voters to a bad decision before the dust has settled.
I support a total blackout of political stories in the run-up to an election.
[Edit: rephrase to clarify that my position would not change.]
Trump's own government warned Trump that Giuliani was being targeted. [1] Giuliani met with Derkach in Ukraine explicitly to get dirt on Biden - incredibly, in the middle of the impeachment surrounding Ukraine. Derkach is a known Russian agent (recently placed on a sanctions list - the timing must be noted as well). [2]
It's been reported that the Biden emails were already being pitched around the time Giuliani was in Ukraine - which throws into question the timeline of this computer repair story. [3]
Burisma emails were also hacked, and it was widely reported that Russia was going to use them for an 'October surprise.' Hacking emails/iCloud etc is a proven method employed by Russia/GRU many many times for similar political interference over the last 4 years. [4]
Russia is also known to have mixed false materials into legitimate hacked materials in the 2017 French election leaks (FR-17) - which is a reported reason why FB/TW/the legitimate press have refused to spread this. [5]
It should also be noted that the Trump campaign shopped the story around, and even the Journal (and evidently Fox News too) wouldn't report on it. [6]
It's been widely reported that the FBI is currently investigating. [7]
Though personally I call F U on Trump's DNI going on Fox News while the FBI sends a 'won't break longstanding precedent to comment' letter to Congress. [7]
While not the same as a reciprocal 'Comey disclosure', at least someone is leaking enough to the press that we know the above. Ironically, let's hope for more in the way of Comey -> Richman -> NYTimes.
An addition: while not fact, Putin going to the press to 'reject' this story feels very much in line with trolling and his underlying effort to make one question what is real and mess with us. [8]
I don't know about Russian involvement, but it was clearly a right-wing attempt to meddle in the election by dropping a last-minute bombshell on their opponent, like in 2016.
If something is going to be suppressed as Russian propaganda, there better be evidence that it is Russian propaganda. Otherwise the suppression itself is propaganda in the other direction.
Twitter's objection was that the New York Post article contained hacked or illegally obtained material (personal intimate photos, etc.). Publishing those materials is a legal grey area.
No, if the content is true and in the public interest, there is really no question about the legality of publishing it in the US: it is 100% constitutionally protected.
Twitter’s initial justification for removing the “hacked” material is completely at odds with their past decisions to leave up hacked/illegally obtained materials like the Snowden docs, Trump tax returns, Panama papers, etc. Of course they have since changed their stance.
That's exactly why it's a legal grey area. We don't know if the content is true or forged. Publishing stolen content is illegal unless you can prove it's in the public interest, and even if some of it is, it's doubtful the personal photos are.
Even if the content of the leaks was fake, which we have no evidence to indicate and none of the key actors have really claimed (as opposed to the story about how they were obtained), it would still be legal to publish it because the public officials involved mean it has a clear public interest.
There’s the possibility of civil damages if it amounts to defamation, but that’s quite a standard to reach. Such damages would fall on the publisher of the story, the NY Post, but not a platform like Twitter (per section 230 safe harbor immunity). There is no legal gray area here.
This is the same reason why those who publicized the now widely rebuked Steele Dossier are not facing legal consequences for it (including Twitter), nor should they.
Of course your original comment said nothing about the accuracy of the reporting and instead only made the claim that there was a legal gray area around publishing hacked materials like these, which is simply not true.
> the public officials involved mean it has a clear public interest.
It's clear only to you. You cannot normally legally publish hacked or stolen photos of celebrities. The personal, intimate, or family photos of Hunter Biden almost certainly do not meet the bar for public interest.
Twitter is not liable here. But as a private company they're allowed to have a policy against linking to hacked or stolen material on their platform. It's the NY Post that's in a grey area.
> Of course your original comment said nothing about the accuracy of the reporting and instead only made the claim that there was a legal gray area around publishing hacked materials like these, which is simply not true.
My original comment was in response to someone saying Twitter blocked it because it was Russian propaganda, which is simply not true.
That's circular reasoning. According to Twitter's statement, they blocked directly linking to hacked materials (which includes personal, intimate photos of Hunter Biden). They haven't suppressed any other discussion on Hunter Biden or the contents of that laptop.
Maybe you don't believe Twitter. Maybe Twitter suppressed the story for political reasons and you feel that proves the hacked content is of public interest. But if Twitter was honest then their policy is just to not allow direct links to any hacked or stolen photos. It doesn't mean they're important.
How are personal photos of Hunter Biden with his family important to public interest?
I don't think anyone is sure what it is yet. A common tactic in misinformation campaigns is to mix authentic (possibly hacked) photos with forged material. The authentic material is added to make the leak seem more believable.
By fabricating a story to be released exactly on October 15 where there's not enough time to verify if it is true or not.
We are here on Hacker News - are you certain that a legally blind repairman in Delaware would be the guy you fly to all the way from California to repair a water-damaged laptop? If you don't know what such a repair looks like, I recommend searching for Louis Rossmann on YouTube (actually cool to watch anyway).
I initially thought that the repair maybe involved reinstalling Windows or removing viruses (which you could get assistance with from text-to-speech tooling), but it was water damage.
I wasn't born in the US, but I'm a US citizen. It's funny you're accusing me of being a foreign agent, when your account is only 6 months old and only talks about politics here.
As far as I know the Biden residence was sold a few years ago, and Hunter lives in LA, but I could be wrong - it's not possible to find a definitive answer. But he has a house there and his son was born in LA, so there's a high chance that's where he lives.
I also have a hard time believing that someone who makes $80k a month would care about a laptop repair, and even if he did, wouldn't he have someone on the payroll (perhaps an IT person from his company) who would come to his house to get it fixed?
Also, I still can't see how a legally blind person (I know legally blind doesn't mean completely blind) can do a water-damage repair.
There are so many other holes in that story, but even if ALL of it were true, I don't see how the actions of a 50-year-old man implicate his father. There isn't any "smoking gun" in relation to Joe Biden. So what was the purpose of this story again?
> censorship: the suppression or prohibition of any parts of books, films, news, etc. that are considered obscene, politically unacceptable, or a threat to security
There's no sane way to interpret twitter's actions as anything except censorship. The fact that they're legally entitled to commit this censorship doesn't make it something else.
To have a sensible conversation, we have to acknowledge what the alleged Biden emails are: at best, stolen information; and at worst, intentionally manipulated or fabricated information. In either case, the information is being released now for the explicit goal of altering the outcome of the election in favor of those who are releasing it.
We haven't had this issue in history before: it's never been so easy to reach so many people with such bad information. This places intermediaries, including Twitter, in an awkward situation: they can "censor" by restricting access to intentionally misleading information; or they can release it, and thereby become pawns of the bad actors who propagate it.
As has been pointed out, this is a moral issue. We should stop pretending that all censorship is bad, and that compelling intermediaries to publish literally everything is a moral good.
Who the hell cares if the information was stolen? Jeffrey Epstein's little black book was "stolen information" - do you object to using that information to make an indictment?
> intentionally misleading information
Crazy how a credible leak is "intentionally misleading" when it's politically inconvenient.