The problem with the tech hearings (zeynep.substack.com)
75 points by tosh on Oct 29, 2020 | 143 comments


The problem with tech hearings, and with most hearings on subjects related to engineering, science, or math generally, is that Congress is old and actually trending older on average (at least as of 2018; we'll see soon this cycle).

The hearing yesterday was always going to be a political show because of the current election, but think back to Zuck in 2018 over Cambridge Analytica, or earlier this year with Zuck, Bezos and Cook. These congressmen are entirely clueless about tech and apparently aren't seeking expert help in advance of these hearings.

As a big tech employee myself, I absolutely believe that a lot of these big tech companies should be broken up and I can clearly see things that we are doing that aren't good for the consumer. Congress just isn't asking the right questions and isn't asking for the right information.


This isn't about tech at all. I agree that Congress is out of touch with many tech challenges, but it doesn't apply to the issue under consideration. You don't need to be tech-savvy to understand or consider the implications of media control over public speech. Society has struggled with this issue since the dawn of mass media. It's fundamentally the same concern we faced with yellow journalism a full century ago, and back to the dawn of man in many ways.

I don't trust our politicians when it comes to complicated tech issues. I may not agree with how they handle current social media issues. But I don't think tech ignorance is limiting their understanding of this issue. It's a social issue and a media issue. It's not a tech issue.


They don't need to understand the tech to recognize that there's an issue, but they do need to understand the tech if they want any hope of regulating social media and fixing the issue they have identified.

They mostly seem to give the impression that Dorsey/Zuckerberg, etc, could just tell their companies to do a thing and then it magically happens.

They at least need to understand the tech well enough to recognize that fair, effective content moderation for billions of people is a more difficult and more complicated problem than any of them have attempted to solve.

I think social media companies could do a better job of being transparent and working to reduce political bias, but I do appreciate the challenge that they are facing.


Yes it's primarily a social and media issue, but the novelty is the impact that technology has on these things.

In short, the velocity of everything has increased by many orders of magnitude. Anyone can publish anything quickly and share it across aggregators instantaneously. The distribution of that data from the major platforms is completely opaque, unlike newspapers or TV, which had known distribution and self-evident editorial decisions that applied to the entire audience at once. In the new world, there can be infinite content and infinite eyeballs all seeing different slices of it. It's all done by algorithm, and no one except data-savvy internal staff has any idea how many people are seeing what and when.

I agree this can all be explained without much technical depth, but there are two problems with Congress. First, they are out of touch and still think in terms of the disappearing era of broadcast network news, with its massive monolithic reach and the corresponding journalistic standards and ethics conferred by an unassailable moat of limited media options. Second, you can't legislate this if you can't reason about the second-order effects of the economics of how tech companies are run (i.e. editorial in any old-school sense of the word simply does not fit any social media company's unit economics) as well as the information economy (i.e. people prefer watching things that confirm their biases, and there are unlimited media options, with many young people having no idea what a television schedule is).


Yes and no? Is there not an argument to be made, particularly here where the focus is Section 230 and mostly just FB/Twitter, that it's such an issue because Congress allowed these tech companies unchecked growth and consolidation? Outside of more typical news (CNN, Fox, MSNBC, etc), what you see on the internet has come down to the whims of Zuckerberg and Jack Dorsey. Even Instagram is still just Zuckerberg. There's no room for competitors anymore and I think a large reason for that is Congress's continued inaction.


Many members of Congress are heavily invested in tech (though, that's somewhat a consequence of the bifurcation of the S&P 500 into 'big tech' and everything else, with respect to market capitalization). It is somewhat in their interests as owners to have ineffective hearings.

They are fully capable of getting experts involved before hearings to ensure they ask the right questions-- even if the Congressperson isn't all that technical themselves. And they know this. They aren't unintelligent people.


This isn't about the actual underlying issues. It is theater. It is campaign trail talking points: "I'm tough on Big Tech". Once they leave the hearing they are taken out to dinner by lobbyists and accept donations to their campaigns. Politics isn't about red vs blue, as America has been misled to believe. It is a game where the powerful exploit the weak.


I'm going to swim upstream and say that age, in and of itself, isn't a problem, but complacency and self-satisfaction are. You can be 80 and learn something new if you really want, assuming no mental problems, and you can be 20 and think you know everything there is to know and be resistant to learning.

The subsidiary problem, which can be remedied as long as the people want to learn and think they aren't too good to learn new things, is familiarity: It is hard to know what regulations will mean in a practical sense if you leave all your social media accounts to interns and suddenly have to bootstrap your way up from email all the way to Facebook, TikTok, Twitter, and Instagram.

Being old can be a handicap when learning about technical fields, but the old have no monopoly on smug, self-satisfied idiocy.


Absolutely, I was referring to age more as it relates to the shape of the world and technology as they grew up rather than purely a hit against mental capacity (though for some members of Congress it's definitely both).

For example, Orrin Hatch asking Zuck in 2018 how they make money makes you assume that he doesn't use Facebook. Pretty sure anyone born in the last 30 years could immediately tell you that it's through selling ads.

The younger generation has grown up alongside big tech and has a better sense of these companies' control and influence over their day-to-day lives, something the older generation doesn't always understand.


I think age matters not because of technical abilities, but instead because the age group that is active on FB is different from the age group representing them in those hearings.


FB has A LOT of old users.


Let's not be ageist. Congressional leaders can be made aware of and informed on the regulatory and sociological impact of tech...if it was a priority for them.

We DO have a problem of leaders not being well informed on many topics they make laws about. But this is also true for finance, healthcare, foreign policy, and yeah tech.


Sidebar: is there an age limit for ageism?

When people complain about how old congress is, it's not like saying a 50yo can't code. It's more complaining about 75, 80 year olds who live off of name recognition rather than anything they actually do anymore, and some of them are clearly losing it. They could retire and live a great life, nobody's trying to take bread out of their mouth.


Unlike those other topics we still have a culture where they wear their technological illiteracy as a badge of honor.


A lot of the background work is handled by the staff, not by the elected members themselves.

And the staff are typically in their 20s or 30s, and use tech like anyone in their age group.

The elected members may not be great questioners, but the recent antitrust committee report had staff fingerprints all over it.


What questions should be asked in hearings like these, spfink, to illuminate the reasons why big tech companies should be broken up, in a way that might resonate with voters?


Senators Ron Wyden, Maria Cantwell, Orrin Hatch and Mark Warner are extremely tech-savvy.


Parenthetically, Orrin Hatch is no longer a senator since he retired 2 years ago.

I can't comment on whether he was tech-savvy or not.


The inability of congressmen to understand even the most trivial technology considerations is irrelevant. These are businesses, so logically, follow the money. In almost all cases the areas of concern are dictated by the advertising business.

That being said, if Congress had any teeth at all they could instantly solve this technology problem by simply taking back the legislative protections that allow online advertising to thrive. If any of these companies had to operate without these special protections, and thus compete with print media under equivalent protections, they would immediately wither away.


There are no special protections, because you don't need one to avoid being sued for someone else's wrongdoing! If my car is vandalized I don't get to sue Bill Gates just because he is convenient and known!


Your example is absurd because it doesn’t exist.

Section 230 and the DMCA are special protections that provide blanket immunity from such lawsuits. The protections intentionally exist to shield service providers, but they indirectly allow online advertising where it could not exist otherwise; therefore they are special protections.


I think the article is good and does a good job outlining how the legislators have basically abandoned their regulatory position for showboating.

I disagree that we have lost anything substantive here though, or that new regulation around this might even be necessary; When in American history would all of the major book publishers refusing to print someone’s words mean the publishers should be made legally liable?


> When in American history would all of the major book publishers refusing to print someone’s words mean the publishers should be made legally liable?

I'd say that tech companies a. have more power than book publishers, b. are more concentrated than book publishers historically have been, c. are covered by section 230 safe harbor, unlike book publishers

Moreover, why are we constrained by American history given that the challenges we face clearly are not?


Meta: how do you square this position with your usage of HN? Use social media only as much as necessary to bring about its demise? Regret its loss but feel it's socially necessary? Believe YCombinator will actually accept liability for everything posted here and it'll be fine?


Accept that I am going to be complicit and have to engage in things that are structurally problematic.

The same way I buy gasoline, knowing that I am indirectly complicit in resource dispossession just by participating in that market.

> actually accept liability

My end goal is not in making HN liable for comments here, but I do think platform regulation is needed and if 230 has to be used as the "stick" to incentivize compliance, that might be necessary.


What is the shape of that regulation?


That’s a great criticism; I look to history to try and understand the consequences of principles in practice. But of course changing the details can change everything.

The question then is, does the increased power and concentration move the internet companies to the category of something like a utility that needs public oversight, w.r.t. moderating content under section 230? To me that seems strange, other people’s attention does not seem like a utility. There may be other things large tech companies do with their power that mean they need to be regulated like utilities, but moderating user generated content so that users don’t leave or so that ads can be better sold over it doesn’t seem like it fits the bill. It’s not clear to me what is broken on section 230 specifically; I can’t separate a legitimate social ill from the politically motivated attacks.


> When in American history would all of the major book publishers refusing to print someone’s words mean the publishers should be made legally liable?

Major book publishers are legally liable, which is fine because they decide what they publish, e.g. they refuse to print some people’s words.

The tech industry was given immunity from that liability, with the idea they acted more like a common carrier than a publisher.


> with the idea they acted more like a common carrier than a publisher.

This is pure invention.

Information services' closest publishing analogue is letters to the editor. In print, the publisher has a far lower inbound volume of letters, and has to satisfy a less stringent SLA (letters are published daily or weekly). Publishers are able to moderate submissions effectively. Thus they can be held liable for user submissions they publish, even though they didn't write them.

Information services cannot do this. If they adopt the moderation model of a traditional publisher, their usefulness is lost because:

a) you'll need way more manpower to moderate (or the service cannot handle a high volume of submissions)

b) you'll lose the low-latency of an information service (i.e. nobody wants to wait 10 hours for their tweet to be posted).

Given these facts, treating information services as publishers by making them liable for what their users posted would have killed an entire industry in the cradle. Which is why Congress wrote the safe harbor into law. It has nothing to do with them being a "common carrier". You can read the law and it explicitly allows them to perform as much moderation as they wish.


You're railing against claims I didn't make.

Section 230 explicitly allows the tech industry to perform moderation. It was needed because ISPs were being sued for user-generated content: if a service had not moderated content, it was found not at fault (Cubby, Inc. v. CompuServe Inc.), but if it had moderated its user content, it was found to have editorial control and thus to be a publisher and legally liable (Stratton Oakmont, Inc. v. Prodigy Services Co.). This is directly relevant to whimsicalism's analogy.

A Wikipedia summary: "The court held that although CompuServe did host defamatory content on its forums, CompuServe was merely a distributor, rather than a publisher, of the content. As a distributor, CompuServe could only be held liable for defamation if it knew, or had reason to know, of the defamatory nature of the content. As CompuServe had made no effort to review the large volume of content on its forums, it could not be held liable for the defamatory content."

Section 230 was Congress clarifying which of those ways internet services should be treated. My phrase "with the idea they acted more like" is descriptive of what ISPs are / how they are classified, not prescriptive or Congress demanding how they act. It does sound like I should have said the idea was they're "more like a distributor than a publisher", instead of "more like a common carrier than a publisher".


Ah gotcha. We're on the same page.


Thank you for the clarification, I'd note this is a bad analogy but the edit window seems to have passed.


These Silicon Valley companies like to play both sides of the fence for sure. They don't want to be held liable for the things people post on the distribution infrastructure they provide. Fine - I get that and it makes a lot of sense. However, even having that protection, they now want to control what is shown on their infrastructure. The whole reason they have this protection is so they CAN let anyone post without fear of litigation.

I see it as a one-or-the-other kind of thing. Choose one of the 2 options: Censor absolutely nothing thats not patently unlawful OR censor things to keep yourself out of legal trouble because you ARE liable for the words distributed by your infrastructure.

Choose one or have a choice made for you I say.

More likely, and what I personally believe happened: Hillary lost the 2016 election and the Democrats went nuts about Russian interference via social media. It HAD TO BE THE RUSSIANS, as it's simply impossible that our system of elections could produce a result where she didn't win. The social media elite realized "HOLY SH*T OUR PLATFORM CAN SWAY ELECTIONS??!!! COOL!!!". On that discovery they decided it was their moral obligation to sway our elections THE RIGHT WAY THIS TIME.

Personally, I don't care who, I don't care why, I don't care if it's "a very good reason". All forms of censorship have no place in a free society. Not by the government. Not by private enterprise. Sure as hell not by the unelected likes of Jack Dorsey and Mark Zuckerberg.


> The whole reason they have this protection is so they CAN let anyone post without fear of litigation.

No, the reason they have it is so that they can moderate without fear of litigation; before 230, unmoderated content generally did not expose companies to liability unless they knew of unlawful content (distributor model), but any attempt at moderation made them liable for everything on the site, whether or not they had specific knowledge (publisher model); 230 was passed specifically to negate liability as a publisher for user-submitted content even if the platform engages in moderation.


Could site owners freely decide who's allowed to post to their site without 230? Seems to me the ability to ban unwanted users would enable a decent degree of moderation without direct control over what's posted. It wouldn't have to be all-or-nothing; you could always give an offending poster the opportunity to edit or remove a post to avoid a ban. Would that be a viable way to moderate a community?


Fair point and well said. I would call it a stretch to lock the account of a US newspaper until said newspaper removed a story it "printed". I know anyone can post a link from someplace supporting nearly any position, but can we agree that Business Insider is at least not the National Enquirer?

https://www.businessinsider.com/jack-dorsey-ny-post-remains-...

Moderation is a long way from outright ban-hammering a US newspaper over its posting of a news story. What Twitter specifically did here more or less makes them an unofficial editor at the NY Post. That's not acceptable.

Ask yourself this simple question - if the roles were reversed and Twitter was run by a Republican and they locked the Washington Post's account because it printed an unfavorable article about Donald Trump - what would you call that action?


Two media companies having a dispute and one stopping carrying the other for a while happens all the time in the TV industry. I don't see much of a connection to 230 at all on this issue. Once we wade into this particular nonsense we have to bring up Twitter's own first amendment issues - you're gonna force them to put certain things on their own platform? - and get into a whole other can of worms.

The real action to take here is antitrust type stuff to limit the size of any single media outlet or platform anyway.


It’s certainly a call to action, but all too often each reply in a thread like this is just coming at it from the legal side when the parent was coming at it from the moral side (or vice versa). This is one of those situations - the comment you’re replying to is only explaining how it works currently.


The thing about the moral side is that we already have a bunch of jurisprudence around the 1st amendment.

Yes, companies aren't bound by it legally, but it'd be a great moral guide to follow if that's what they're after.


Why should someone running a website be able to moderate without fear of litigation when someone publishing a print magazine is not able to moderate without fear of litigation?

Did 230 say anything about that or did the law have no stated reason for that difference?


Print magazines aren't attempting to moderate 2-10 novels' worth of text per second either. Isn't it much more commonplace for print magazines to exercise editorial control over every piece of content, conferring on themselves in some sense (at least as far as these laws are concerned) a degree of accountability?
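As a rough sanity check of that scale claim (every figure below is an assumption for illustration, not data from this thread), a large platform ingesting on the order of thousands of posts per second really does see novel-length volumes of new text every second:

```python
# Back-of-envelope check of the "novels per second" claim.
# All inputs are rough assumed figures, not measured data.
posts_per_second = 6_000    # assumed ingest rate for one large platform
words_per_post = 15         # assumed average post length in words
words_per_novel = 90_000    # assumed typical novel length in words

novels_per_second = posts_per_second * words_per_post / words_per_novel
print(novels_per_second)  # 1.0 novel-equivalents of new text per second
```

Even with conservative inputs, a single platform lands at the low end of that 2-10 range; a human editorial review of every submission is plainly not an option at that volume.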


It is not just much more commonplace but obligatory for print magazines to exercise that control. A print magazine can't say: send me slander, send me libel, send me copyright violations, send me terrorist threats; if they agree with my personal political slant, I will print them all, no questions asked.

Why can't a print magazine do that, but a website can? Did 230 say anything about why those are treated differently?


> Why can't a print magazine do that, but a website can? Did 230 say anything about why those are treated differently?

The law itself doesn't, because laws generally don't. The discussions around the law offered a number of reasons. One basic deontological idea, as I recall, was that a website moderating user content was functionally more like choosing which third-party publications to offer than actually publishing. But probably the more significant argument was consequentialist: if companies were liable for everything the moment they tried to moderate at all, the only viable large-scale sites would be completely unmoderated, with no attempt to preemptively identify illegal and offensive content. And that's why it was included as part of the Communications Decency Act.


It's because the user on a website is understood to be representing themselves, not the organization that created the website. The intent of section 230 was to codify this distinction in representation into the law.

On the other hand, a print magazine hires employees to produce a bespoke product sold to the public, so whatever they publish in the magazine is understood to be a representation of the magazine company. In fact, the owner of a website would indeed be liable for posting infringing content if it were understood that the site owner was representing themselves (e.g. if I posted stolen photos from your laptop onto my self-hosted blog).


Is a social media site really comparable to a newspaper? A newspaper has an editor who decides what goes into it; there's no "user generated content" outside their control.


> I see it as a one-or-the-other kind of thing. Choose one of the 2 options: Censor absolutely nothing thats not patently unlawful OR censor things to keep yourself out of legal trouble because you ARE liable for the words distributed by your infrastructure.

> Choose one or have a choice made for you I say.

This is silliness.

There's a ton of legitimate reason to have moderation standards, community guidelines, etc, for user generated content beyond "patently unlawful."

To say that any forum that wants anything stricter as a moderation standard must be liable for any malicious actor's behavior is to doom so many online communities.

You're so focused on your extreme "any censorship is bad" "principle" that you've lost sight of the nuance of the real world. A church, for instance: should they be able to have a discussion forum with standards around profanity without becoming liable for anything anyone in the world might post? It's not illegal to swear at everyone...


If people need to be 'controlled', then why can't AT&T use voice recognition to do exactly what Twitter does, and if someone says something offensive in a phone call or text like "There are two sexes" or "All lives matter" then the person's phone can just be disconnected right?

This is what Twitter does. Permanent ban for even the slightest offence.

No, the solution is to make all censorship illegal on large platforms, because otherwise the unsolvable question of "whose standards" it is that determine "offensive" arises. Right now Twitter employees consider themselves this high court and they will ban you based on their own political leanings, which only 50% of the country agree with.


If someone feels that HN's moderation policy is such a social ill as to warrant YCombinator's destruction, do I still have to abide by it when replying to them?

Put another way, why are you discussing this here and not on 4chan?


"If someone feels that HN's moderation policy is such a social ill as to warrant YCombinator's destruction, do I still have to abide by it when replying to them?"

^^ Your words - not mine.

When HN takes action to remove unproductive or incorrect comments on posts, it's called "Moderation".

When a media organization "moderates" a US newspaper by locking its account over a political story, it's called "Censorship".


HN moderation is an example of a tech company controlling what is shown on its infrastructure, and also depends on Section 230 protection.

“Unproductive and incorrect” is not patently unlawful, so censoring it would open YC to liability for everything posted here, under your proposal.


Explicit partisan bias is protected speech; it's protected speech in newspapers, radio, tv, and also websites. I think it's interesting that people only want to rescind those protections for websites, and only a specific handful of popular websites...


> All forms of censorship have no place in a free society. Not by the government. Not by private enterprise.

People pay good money with good reason to have things curated for them. The New York Times decides what to publish and your local TV station decides what to air. That they choose to not print your article or letter to the editor or air your content is not out-of-bounds censorship. Why must Twitter or Facebook or YouTube be held to a looser standard? Or should the NYT print it all as well?


"Anything not explicitly illegal MUST be left up" is an immediately self-destructive moderation policy; your site fills with spam, goatse, snuff images, porn up to the borderline of legality, as well as the ridiculous lies about public figures that everyone wants to protect.


I suspect the real problem is that we have no property rights online. If channels, accounts, online storefronts, followings and content were recognized as belonging to users, it would substantially change the dynamics around deplatforming, as well as anti-competitive activity by the majors.


I think this is why we need more federated social media, based on protocols, just like I can own my email account (by getting a domain, for example) and move it across providers and interoperate with people on other providers.

I would be able to choose the provider that I trust the most (or self-host) and even move to another provider, while still being able to participate in the global conversation.

This way we'd also have different fronts (apps, websites, clients, etc.) to this network, each of them editorializing it however they please.

I'd own my account, my posts and I'd be able to choose which front I want to use to interact with the network.

Users would be responsible for what they post and fronts would be responsible for what they show, if they choose to filter and/or promote content.
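Part of the plumbing for this already exists in federated networks like Mastodon: account discovery via WebFinger (RFC 7033), where the handle's domain, not any one provider's app, is the root of identity. A minimal sketch (the handle and domain here are made up for illustration):

```python
from urllib.parse import quote

def webfinger_url(handle: str) -> str:
    """Build the WebFinger discovery URL (RFC 7033) for a handle like
    'alice@example.com'. Discovery keys off the domain, so whoever
    controls the domain controls where the account resolves; changing
    hosting providers doesn't have to change the handle."""
    user, _, domain = handle.partition("@")
    resource = quote(f"acct:{user}@{domain}", safe="")
    return f"https://{domain}/.well-known/webfinger?resource={resource}"

print(webfinger_url("alice@example.com"))
# https://example.com/.well-known/webfinger?resource=acct%3Aalice%40example.com
```

Because the lookup is anchored at the domain, a user who owns their domain can repoint it at a new provider and keep the same handle and, in principle, the same follower graph.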


That's something very different...that's just saying, you can be your own provider if you want. That won't actually improve rights for most people, kind of like being your own bank is not a remedy for most people. The important thing is improved rights to what's in the account, and improved rights to damages when it is credibly harmed.


My point is that if you are free to change providers without giving up access to your existing network (same username/handle and connections), then you are free to back up your data and move it around between social media providers.

At that point you are not at the will of one centralized company. If you feel like your rights are being violated, you can change provider.

Following your parallel with banks: if I have my money in bank A and I don't like how it operates, I am free to move from bank A to bank B.


Yeah, if it's provider portability you're talking about. I misunderstood your post initially to be a suggestion that people should self-host everything. Greater property rights in our online accounts would be de jure decentralization since the ability to take it all with us -- content, contacts, &c -- would be part of that.


> If channels, accounts, online storefronts, followings and content were recognized as belonging to users, it would substantially change the dynamics around deplatforming, as well as anti-competitive activity by the majors.

Why should these belong to the users when the software hosting them, the servers running the software, and the infrastructure running the servers are all owned, operated and maintained by a third party?

That's like saying I should own my table at a restaurant simply because when I walk in and sit down, they serve food to me there. Using a service has never entitled you to an ownership share of the establishment.


It's like saying you own the "business" behind the restaurant -- brand and so forth -- even if you're renting the space.

It's kind of like how you own the assets in your bank account and have a right to the returns on them even though the building, the accounting software, the IP and internal best practices supporting it are all (usually) owned by the bank.

We need to recognize the property interest of users in what they built -- in their content, following, online store, &c -- as a separate matter from the platform's ownership of its infrastructure, best practices and organization.


Except, even if you do own the assets in your bank account (I've seen a lot of sources claiming that, at least in the US, you don't, but I have no idea how accurate that is), you still signed a contract with regard to that account, agreeing to abide by certain terms, including rates and fees.

And on every social media platform with which you have an account, you agreed to terms of service which very likely grant the platform a "worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such content in any and all media or distribution methods now known or later developed." (copied from Twitter's TOS, but every other platform is basically the same.)

Data and infrastructure are not separate matters. Just as, with a bank, your account is a liability unless it makes them money, with a social media platform your data is a liability unless it makes them money. And in exchange for a free platform (or free checking and a pen) you gave certain rights of ownership away. If you want to "recognize the property interest of users" you have to recognize it in that context.


The terms we have for bank accounts are not merely the generous offerings of the banks, but the result of a long process of interplay between consumers, banks, and regulators. The ToS we have for online service providers doesn't recognize enough of the users' interests to counterbalance the power and interests of the major platforms.


Bank accounts already have contracts and terms regulated for the actual legal obligations.

We "need" nothing else for shouts to the void as nobody gives enough of a shit to set up and use more guaranteed terms and any arguments are just as timely, sane, and useful as saying we need to make blood oaths legally binding and have notaries for them. No we don't we really don't.


The terms we "already" have for bank accounts are due to a long interplay between consumers, banks, and regulators.


But they don't belong to users. At best users are leasing the space (for free). There are basic agreements around behavior which are conditional for use, and generally reasonable.

The problem is the grey area, and people's political views and biases strongly affect what they think is or is not following the rules. That makes the rules extremely hard to enforce in a manner that keeps people satisfied.


You could say the same thing about bank accounts! People are just leasing the space for free (&c) but the law recognizes an ownership interest in the material in the account, a right to the returns, &c. Banks do have certain rights with regards to the account, as well -- that's how they make any money at all -- but it's much more granular than the situation we presently have with online accounts.


I don't know if that's a good analogy. Banks can also shut down your account at will; they have to give you your money but they don't have to continue providing any services. And banking is private but social media is public. It's reasonable that if a social network closes your account they should give you a backup, but forcing social networks to continue hosting offensive material in public because the user "owns" their account sounds untenable, especially since the user doesn't pay.


No one is arguing that they should be forced to continue to host offensive material. What we're talking about is giving people standing to take Facebook to court if their account is suspended for posting something that is true and substantive.

There are many examples of this in the recent past, but perhaps the most surprising is the Facebook accounts that posted material indicating that Kamala Harris's record on gun rights in California is not encouraging for gun owners (not difficult material to fact-check -- she activated the handgun roster) and found themselves suspended because, according to Facebook fact checkers, "Kamala Harris is pro 2nd Amendment".

More controversially, it is now being argued that Twitter's recent suspension of the New York Post was entirely baseless: that there was no reason for the social network to conclude that the news agency was reporting the products of disinformation.

The way property rights usually work, you aren't able to force people to treat you fairly in a particular situation, but you can take them to court over it and be made whole. The system certainly functions as an aggregate encouragement to treat other people fairly but only in rare situations are you going to see people forced to perform in a particular way. People would still lose their accounts but expanded rights would give them a way to pursue a remedy.


That is a nonsensical and entitled notion of property, one that asserts an entitlement to another's server.

Channels, accounts, and online storefronts don't stay online of their own accord. They require hardware, software, and electrical power.

If I decide to unplug an old PC that was acting as say a dedicated game server would I be depriving many people of property? Which would make my own ownership of the server a lie?

The only way that would be sustainable would be a contract to pay the host. That is basically existing contract law with a self-aggrandizing name attached to it. It contributes literally nothing new conceptually.


The property interest in one's channel is not actually an entitlement to be on another's server, any more than a property interest in one's car is a right to put it in a certain parking garage. The property interest kicks in when the car is parked in a garage and then damaged, lost, sold, &c.


Before we start socializing the property rights of websites, can we do housing and medical infrastructure first? Especially all the companies with exclusive rights to life saving drugs.


It's not socialization...if anything, it's greater privatization of property. Think about the situation that obtains with a bank account: the law recognizes that the account holder owns the content of the account, without them having to own the bank or even a share of the bank.


> The law recognizes that the account holder owns the content of the account

Yes, because the money being deposited is already owned by the account holder; banks would not exist if the bank became the owner of deposited money, since such an arrangement makes no sense. However, banks are within their rights to close your account at any time for any reason; they just have to make sure you get your money back. If you're suggesting that websites should be required to "return your data", that seems reasonable to me.


Well, banks are required to return it; and they are required to provide an accounting of it; and in practice, they need to give everything to you in such a way that it's easy to get set up at another bank. If Facebook gives us our contact list as a JPEG, then it's not really facilitative of transfer, whereas giving it to us as TSV or vCard would be.
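To illustrate the format point above: vCard is just structured plain text, which is what makes it portable between services. Here's a minimal sketch (the field names and data are hypothetical, not any platform's actual export API) of serializing a contact list as vCard 3.0 entries:

```python
# Minimal sketch: serialize contacts as vCard 3.0, a portable plain-text
# format any other service can re-import (unlike an opaque image dump).
def to_vcard(contacts):
    """Serialize a list of {'name', 'email'} dicts as vCard 3.0 text."""
    cards = []
    for c in contacts:
        cards.append("\r\n".join([
            "BEGIN:VCARD",
            "VERSION:3.0",
            f"FN:{c['name']}",        # formatted name (required property)
            f"EMAIL:{c['email']}",
            "END:VCARD",
        ]))
    return "\r\n".join(cards)

contacts = [{"name": "Ada Lovelace", "email": "ada@example.org"}]
print(to_vcard(contacts))
```

The same data as a JPEG screenshot would be unreadable to software; as vCard (or TSV), any mail client or competing social network can ingest it directly, which is what "facilitative of transfer" means in practice.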


I think I might have misunderstood your original argument. If what you meant by "property rights online" is that users should have property rights over their data, on this we agree. I am in favor of laws that would e.g. mandate any website that collects PII to provide tools to manage/export/delete/control the collected data. What I thought you meant was that users would inherit property rights over the website itself, and that is what I disagree with.


Get yourself an ASN, an IP prefix, and a domain name.


> even though they are, of course, perfectly able to post elsewhere on the internet

My biggest concern is that the "they're private companies, they can do whatever they want" argument applies to ISPs, too - they're private companies as well. We've already seen Epik being "deplatformed" by PayPal (also a private company) for allowing controversial content to pass through its infrastructure. I'm not sure I agree that Twitter should be forced to reinstate the New York Post, but I'm having trouble disagreeing that they should carry more liability for what they do allow if they refuse to.


The hypocrisy on both sides of this debate shows how disgustingly Machiavellian American Politics has become. The ends always justify the means now; no consistency of ideology is required.

A party that hates regulating business suddenly sees the light when powerful businesses start removing them from the public square.

A party that pretends to be for free speech and free journalism suddenly has a change of heart when it benefits them going into an election.

A party that purports to stand up to big business suddenly supports the largest megacorps in the history of man because their interests are temporarily aligned.

A party that thought Citizens United was terrible because it gave the rights of citizens to corporations suddenly cares very deeply about Twitter/Facebook/Google/Apple's freedom of expression when used to eliminate critical voices on their platforms.

A party gets rid of the filibuster for judge nominations and then gets mad that they can't filibuster the other party's judge nominations -- or a party deciding it's suddenly ok to seat judges during an election.

Both parties are about power. Neither party has any remaining principles.


Very good point! I'd like to add that potentially the reason why these institutions (parties) act that way is because we, as the people who form them, also don't have or adhere to any strict principles.

Edit: Grammar.


Both parties are composed of huge numbers of people with very different opinions. If you want ideological consistency, you will have to destroy the two-party system first. I am not saying that would be a bad thing.

Also, I don't think ideological consistency the way you define it would even be a good thing. Standing up to corporations does not mean the corporation is wrong in every case. One also needs to severely mischaracterize the objections to Citizens United in order to make them incompatible with Twitter's right to censor.


Twas ever thus. People believe whatever is most advantageous for them to believe.


I guess it just comes down to a matter of degree. An ISP is way harder to move away from than Twitter. Switching away from Twitter was both free and an enormous benefit to my mental health; switching from Comcast to Verizon was less so. It's a harder argument to make when it comes to the paypal case, but I think that again we see that Twitter and Facebook aren't really utilities at all. If Twitter bans you, there are plenty of places to go even as a business or public figure.


Sorry, that's just wrong. If Twitter bans you, there's nowhere like Twitter you can go. Twitter has become the communications platform for public persons to reach their audience. If you think it's on par with other platforms, you're just uneducated.


About 80% of the US adult population doesn't use Twitter. I agree that there's no direct alternative to Twitter, and that does matter, but the idea that you have no choice but to be on Twitter just seems empirically untrue.


That 80% number is meaningless. Back when only 20% of people had internet or email or cell phones, would you have said it's ok for the internet, email, or phone company to be able to cut service if someone uttered the words "all lives matter" and offended the employees of the monopoly? It's silly. You lack perspective.

Every legislator, politician, celebrity, and public figure is now using Twitter as their primary way to reach an audience for written communications...except the banned ones.


The attention of other people on twitter is not a utility, and I think comparisons to utilities like the phone network cloud the debate.


[flagged]


I'm not trying to tone police here, but I do think the hn guidelines are helpful in keeping conversations useful.

> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.

https://news.ycombinator.com/newsguidelines.html


Close to every single journalist uses Twitter though. Most companies require them to. So if you want to reach journalists, you use Twitter. Twitter banning you thus greatly limits your ability to talk to the press.


> Close to every single journalist uses Twitter though

This isn't true, that's just your impression from within the twitter bubble. Even if it were true, you can easily contact journalists through many avenues, being easy to reach is part of the job description. Getting banned from twitter is a totally unimportant internet triviality.


If you think Twitter isn't the most important Journalistic tool nowadays you're living in a cave. It is.

There's literally no faster way to get accurate news than by following sources you trust.


> If you think Twitter isn't the most important Journalistic tool nowadays you're living in a cave. It is.

Twitter is the most important journalistic tool? Not e-mail? Not end-to-end encrypted chat services? The entertainment website made up of mostly celebrity gossip, memes and culture war propaganda? I think you're the one living in a cave my friend, a cave built, owned and operated by twitter. Just because it's popular with many journalists doesn't mean it's important. There's no doubt there are some amazing journalists who use twitter at some points during their day, but the overwhelming sum of what goes on twitter can hardly be considered "journalism".

> There's literally no way to get more accurate news faster, by following sources you trust.

That's your opinion, not a fact. There are many people who would say using twitter as your primary news source is a terrible idea. News on the internet is a billion dollar business, twitter is just one of many innumerable sources of real-time news on the internet.


Point #1) Sure, you can say email is as important as Twitter. Good catch, although you're taking my words too literally. But none of the other technology artifacts rank even close. And BTW, if Google started censoring emails they'd be in BIG trouble, just like Twitter is. Thanks for helping prove my point.

Point #2) Twitter itself is not a news "source". News Networks however ARE. News is reliable on Twitter because you follow only who you trust, and you make the assumption that your feed isn't being monkeyed with by Twitter. You're out of touch if you don't realize you can get more accurate news faster on Twitter than through News Networks. Trust me on that. Plus what you get from the networks is all propaganda anyway, and generally not worth my time.


> But none of the other technology artifacts rank even close

Google end-to-end encryption journalism. e2e encrypted services keep journalists and their sources alive, it far outranks the petty trivialities of twitter.

> News is reliable on Twitter because you follow only who you trust

Every social media website has a "follow" feature, nothing is special about twitter that makes the publishers more trustworthy than any other social network.

> You're out of touch if you don't realize you can get more accurate news faster on Twitter than thru News Networks. Trust me on that.

I don't need to trust you, we can agree to disagree on your opinion that twitter is a good place to get news.


Most of that goes into the mode of "straw manning" my words, or switching topics, rather than addressing my points head on. Have a great night.


Sorry you feel that way, but I quoted you verbatim and gave direct responses to what you wrote, not sure what else you want. You also have a great night.


Honestly I just don't see that. For one, if you're a public figure that Twitter bans you can just maintain a website and your followers will spread the word. If you aren't a public figure, there are plenty of alternatives that serve just fine.

More important however is the issue that Twitter can only block access to Twitter. That it is difficult to switch ISPs only matters because with few alternatives an ISP blocking a site, or you canceling your contract, would lead to that site being inaccessible. The general policy 'companies shouldn't use their position to alter access to other companies' is more defensible than 'popular social media has to pretend to be a utility for a given value of popular and social media'.


It sounds like you just have a fundamental disagreement that Twitter has become the primary backbone for public communications between powerful people and their audience (for which there is no viable replacement because of Network Effect), as well as for real-time messaging between individuals and groups. Until you acknowledge that fact, you won't 'see' a problem.


Could you give a concrete example of a scenario where someone is left without any method of public communication without Twitter? Sure it's easier, and other sites might not give the same free random updrafts of popularity, but if people want to hear your message you just don't need Twitter.


If phone companies were allowed to cut service to people based on the political leanings of phone calls, then would you be saying: "It's ok, people can just find another phone company or use emails for communication, when their phone is cut"?

You don't think Twitter is analogous to phone service. I do. That's the disconnect here.


It's also a lot harder to move away when they actively suppress any discussion of or links to alternatives, and even collude to do so. Or when they boot toxic users but not the toxic yet profitable ones (such as the president...).


> My biggest concern is that the "they're private companies, they can do whatever they want" argument applies to ISP's, too - they're private companies as well. We've already seen Epik being "deplatformed" by PayPal (also a private company) for allowing controversial content to pass through their infrastructure.

This isn't new or unique to paypal. The credit card companies and banks that underlie pretty much every fancy modern method of money transfer have and do "deplatform" people. And they've done it for ages. Porn companies can't use conventional credit card processors, and there's a long history of banks and credit cards refusing to do business with certain companies (https://en.wikipedia.org/wiki/Deplatforming#Financial_servic...).

Whatever your overall opinion of deplatforming, the idea isn't anything new or unique to twitter. ISPs were singled out (they aren't anymore, net neutrality was rolled back, now your argument only applies to phone companies).


> they should carry more liability for what they do allow if they refuse to.

The result would be opening up any and all moderated forums to a tsunami of lawsuits. Twitter would survive, they have armies of lawyers. HN would be shut down.


So maybe the law should be rewritten to target the entities that are large enough that they don't fear repercussions from their actions even though it could be proved in a court of law that they resulted in harm?


The issue with your ISP example is that all of them depend on public resources to function both directly and indirectly: They lease public land and right of way on utility poles, often with government-sanctioned monopolies.

The same is true for cellular providers: They lease spectrum that is owned by the public.


You are being intellectually honest to admit that we cannot have one set of rules for ISPs and another set of rules for Twitter.

I think both of them should have the discretion to do whatever they want, while the government should constantly work to ensure there is always more competition. It is the government which has made it impossible to create credible alternatives to Google, Facebook, or Twitter. First, the US government will try to kill any foreign competitor in the name of national security or some other straw man. Domestically, the regulation is far too complex for anyone to succeed with reasonable capital.


I disagree- I can trivially move between social networks and in fact have done so recently. As a renter I have exactly one hardwired ISP available to me; AT&T cellular based service is not comparable where I live. It’s the same way we have different rules for your electric bill and Amazon.


If some news network moves from Twitter (from being censored, or banned) to a Mastodon, for example, their 'reach' would drop by a factor of millions. Six orders of magnitude. That makes them basically invisible.

Do you really want nose-ring kids in Silicon Valley who're still mad at their parents having the power to shut down Presidents, Heads of State, and Media organizations?

Come on man.


At some point, people have to come to terms with the downsides if they want to organize around non-public entities that have the power to censor and ban wholesale when there are other alternatives; their 'reach' today may drop, but if people abandon platforms that lock them out or censor them in favor of those that don't, their 'reach' will only grow in time.

But hey, easy for me to say since I've pretty much abandoned twitter/facebook/etc nearly a decade ago and avoid things like snap/insta/etc for things like a mastodon instance.


And when the phone was invented Alexander Graham Bell could have started listening in on phone calls and banning service to individuals who had differing opinions to his own right? Sounds like you'd be cool with that. Ya know. "non public entities" and all.

And your reply about that would've been: "Hey people can still send information using radio and telegraph, so they should just come to terms with that and stop trying to rely on phone systems that belong to Bell?"


If radio involved typing another url into my browser or installing another app, and the telephone didn't require running cables across large tracts of public land with the cooperation of government, yeah, maybe that would be how it could work.

In your social media example, the only case where it doesn't involve easy alternatives is the size of a public audience; so it would be more like a television host being dropped by the network because they don't like their views. If you are trying to communicate with friends and family, or even with large sections of the public, there are absolutely alternatives. What you want is more akin to compelling a television network to air someone because they can't reach as many strangers without that television network's large audience.


TV/Radio broadcast has absolutely nothing to do with any of this. Those are "one way", and involve 10s of broadcasters rather than literally everyone in the world being able to talk to everyone else simultaneously in real-time. You are very very very confused about the issues.


It is like an anchor because the main ill you have claimed is that someone will lose reach to a large audience- and TV anchors show that we have no right to reach the audience enjoyed by a platform in at least that medium. Why do we need legal protection to ensure we can reach the audience enjoyed by Twitter? Why is reaching out to journalists via phone and email insufficient? Can the public not be informed by someone's website? Is it a question of degree and size?


Why do we need email when we already had the US postal service right? haha. Yeah, I guess it's all a matter of degree and judgement isn't it. good lord you're funny. Thanks.


I'm not ok with it, so I actively chose not to be on those platforms, like everyone else could.

If people want to be used, that's their choice.


I agree with your actual sentiment there. I've actually built a platform myself that has 10x the features of Twitter, in some regards (although a different set of similar features)

I'm a huge fan of decentralized social media, web3.0, IPFS, ActivityPub, Mastodon, Pleroma, and the Fediverse, and all that is the FUTURE to replace Twitter. However for today Twitter needs to stop censoring.


> However for today Twitter needs to stop censoring.

Yeah, well that's not a battle I want to fight. Id rather twitter/fb keep pushing people off their platforms and kill themselves via death by 1000 cuts.


Well once again I agree with your actual sentiment!


I don’t follow your example. Heads of state and media weren’t dependent on “nose-ring kids” working on social media before and they certainly aren’t now just because Trump has shown a proclivity to communicate via Tweet. It’d be ridiculous to ossify twitter as a government communication platform through regulation, instead of continuing to invest in existing government-owned communication channels.

Re: the change in reach that happens, I don’t see how this is different from a television network canceling a host.


It's just a matter of objective fact that the world does now have a kind of 'public square' and it happens to be Twitter.

Through a series of events no one planned, we just evolved into a situation where a public figure's main way of reaching their audience directly is over Twitter.

The correct analogy here is the phone (not TV networks). When the telephone system was originally a monopoly (or even today) imagine if the phone company had started disconnecting lines or censoring calls? Alexander Graham Bell would've been hanged from the nearest tree.

But if you want to use a TV Network analogy it's not like canceling an Anchor, it's like canceling a 'viewer'. It would be like if CNN had the power to stop any individual from either consuming CNN content or creating CNN content. Total Godlike control over the individual.


In this case it is like canceling an anchor, because the people are complaining that their speech is being moderated under section 230. Whether a network like facebook can ban someone is a separate issue of tech company power, this regulation protects them to moderate content while not being legally responsible for the content they don't moderate.


I think Section 230 is a distraction from the real issue.

Here's the crux:

Q) Can AT&T censor texts or cut service if an AT&T employee gets triggered by the content of speech?

A) Not just no, but hell no.

The same applies to Twitter for precisely all the same reasons.


Can Twitter remove a tweet because advertisers don't want to appear next to it? Can Twitter remove a tweet because it suspects it is an action from a bot? Is it censorship to rank a post lower on someone else's feed? Section 230 is absolutely the crux here because it's about what we expect from internet platforms. Do we expect them to be moderated or not? If we expect them to be unmoderated entirely they would look nothing like the current platforms.


Sure there could be a checkbox somewhere to hide foul language, or even hide things that the progressive Twitter kids deem offensive (like "All lives matter" for example),

...so people who want to live in an online safe-space can run with "Safe Space" mode on.

But AT&T can't cut service, nor cancel texts after sent, etc, and for all the same reasons you'd give for phone companies not censoring I'd claim apply to Twitter.


If "safe space mode" is on by default, and you can't see a pundit without turning it on, I don't see how that is different from demoting or hiding a tweet behind a disclaimer. Or, for that matter, how it is really different from having to go to another platform where safe space mode doesn't exist, to even know it exists.

As an aside, the constant bashing of "twitter kids" and allusions to political speech like "all lives matter" being censored (which is not happening, as far as I know) seems counterproductive.


The problem with twitter is the censorship, shadow-banning, feed-manipulation, and permanent banning. Despite your confused opinions, there is no 'setting' to disable any of that.


I think you might have missed my point- you said that there could be a checkbox for people to hide offensive content, and I asked what the difference between manipulating someone’s feed so their posts are only seen by people looking for it directly, and having a content filter on by default. They would be functionally identical in how they limit the audience, which if I understand correctly is the main ill you are accusing twitter/etc of.


Yeah if only Twitter had a "Stop Shadow Banning", a "Stop Censoring", and a "Stop Banning Individuals" button, that would be great. lol.


We all have a sphere of influence, and we all have a duty to use that sphere of influence for good.

Free speech isn't free. It's a responsibility, and we should take care to fulfil that responsibility with caution and diligence.


New as of today and relevant to the below discussion:

https://news.ycombinator.com/item?id=24946809


Waiting for incompetent and self-aggrandizing "governments" to save us is a recipe for failure. Corruption is not a problem "legislation" can solve. Instead, we should all take responsibility and act together to make society a better place. Grab the bull by the horns. The buck stops here, with us. We should all stand strong as individuals, together. Sign our own executive orders. Hold our own hearings and make decisions for ourselves. Don't let the Zuckerbergs and the Dorseys trample over your privacy and your sanity. Take charge of your life. Whip it good!


Waiting on incompetent and self-aggrandizing “ourselves” to save us is a recipe for disaster. Corruption is not a problem “we” can solve.


When a problem comes along, some people run to their "government" to solve it for them. That's called Learned Helplessness.[1] Problems don't get solved because of "rules," they get solved when individuals get involved in community action.

To pick two recent examples, it's useless to wait for people in suits to "conserve" the gray wolf or enact fuel economy "standards." I vote with my wallet to pressure companies constantly to produce more fuel-efficient cars, and I promote wolves and wolf appreciation in my local community.

https://en.wikipedia.org/wiki/Learned_helplessness


> The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

This is orwellian. I can already tell it's going to set up the argument that actual censorship is necessary to counter the "censorship" of harassment, disinformation, trolling, distraction and piecemeal leaks of hacked materials, such as Trump's tax returns, the Panama papers, the Snowden files ...

It reminds me of the idea that "words are violence" or even that "silence is violence", justifying actual violence - in this case actual censorship - as a defensive measure.

Censorship is a very clear word. If someone wants to say something, and someone else wants to listen to him, and you interfere, that's censorship. If someone wants to say something, and someone else says something else that distracts from what the first guy says, that's not censorship.

A common line to justify deplatforming is that you have no right to an audience, but it really is appropriate here: The mainstream media has no right to an audience. If people prefer to listen to Alex Jones, then maybe the mainstream media should ask themselves how they managed to fuck up people's trust in them, rather than whining about the censorship of Alex Jones distracting from them.


This is only an issue because of social media consolidation and concentration. We used to have laws controlling media ownership and telecommunications interoperability (and social media are a hybrid between traditional media and telecoms).


I fear that the (likely) Democrat government will be completely unchecked. Too many in the media are completely bought in to a Biden presidency, and will never admit his faults.

That is not a recipe for good government regardless of political affiliation.


[flagged]


> safe-space participation trophy kids...judgment over all us unwashed masses...[s]urely even the Silicon Valley socialists...

Personal opinion: when you pepper your comment with dismissive angry pejoratives, it makes me think you're not at all interested in discussion, so I typically ignore comments like this.


The demeaning language about the kids doing the censorship is intentional, to shine a light on the absolute total absurdity that THEY are the ones who currently have the power to censor the President of the United States. It's mind-boggling that society evolved into this scenario.


> The reason for the demeaning language...is done intentionally to shine a light

I think this illustrates a difference of opinion. Some people take demeaning or emotionally charged language as a signal to pay attention. For me, it's a signal to take the rest of the speech with a big grain of salt.


They invented safe-spaces for people like you, too. Find one and feel comforted. Have a nice day.


>> Yes, it’s true: being de-platformed by these three big players can effectively knee-cap a player from the public sphere. That’s true especially when the platforms act in concert

What the big tech companies are doing seems to have crossed into criminal territory.


The tech giants have definitely benefited from US politics devolving completely into Culture War. With the politicians completely uninterested in genuinely learning, deliberating, and acting in the shared interest of the country, the corporatists can simply give the politicians the red meat they want (essentially, sitting timidly in a chair while some football coach who had enough name recognition to win his primary yells at him with the Terrifying Power of the State) and then continue business as usual.


> some football coach who had enough name recognition to win his primary

Wait, who is this? I know representative Jim Jordan was a wrestling coach and current senate candidate Tommy Tuberville was a football coach at several places.


The broader point is that the politicians’ power is dependent entirely on the success or failure of their public theatrics, so public theatrics is what they seek and what satisfies them.



