I remember back before YouTube had ads... it was a brilliant website.
I used to be squarely on the side of property rights on this issue. I used to think that "it's their website, they can do whatever they want with it"...and that's a solid argument 99% of the time.
I'm not sure anymore though.
It seems like at a certain level, a certain amount of web traffic, it becomes a tool that grants the owner a kind of power that threatens democracy itself. If someone invented a space laser that could vaporize everyone at a set of GPS coordinates... I can't imagine people would argue "well, it's their property".
I'm glad to see Facebook has taken a pro-free speech stance. I'm a bit curious what made Zuckerberg move in that direction. I read where he stated that he didn't want FB to favor one candidate or a particular narrative over another. The author of the article basically argued that he had some kind of responsibility to remove certain viewpoints or he was "complicit" in their promotion.
That's a shockingly ignorant viewpoint imo.
I also saw that Facebook is moving more in that direction. I think there are a few things driving that. The purpose of Section 230 was to give platforms that publish personal content some ability to moderate on a "good Samaritan" basis. And people generally understand it when they go after terrorists, calls for violence, adult content, etc. When they start choosing which politicians you can hear, that seems a bit different, and it has fed the notion that they are acting as publishers exercising their own editorial license. A business is free to exercise its own right to free speech at any point, but then it is acting as a publisher with editorial responsibility. The protections afforded to these large tech companies under Section 230, though, were supposed to exist for the purpose of allowing them to host other people's speech, to allow for more input.
The courts are in a difficult situation because Section 230 has enabled a lot of good things. Without liability protections, many of the sites that give people a voice could not exist, because they would be liable for every user's opinion. So in that sense the courts do not want to strip away liability protections, because that would ultimately remove the platforms' ability to let people speak. And while the law stipulates good Samaritan removal of potentially harmful content, the courts have generally interpreted that power almost without limit, apart from the obligations in the statute itself, such as the requirement to remove child exploitation content.
We can see why when we look at smaller, special-purpose forums (such as Christian Forums). There may be good reason to flag some things as off topic, or as contrary to the purpose of the site.
So it would actually be hard to change Section 230 and still allow for the internet as we know it. And the courts have upheld, in a ton of cases, platforms' ability to edit or remove content, remove users, etc. Yet there has still been proposed legislation, and hearings, on revising Section 230.
Lately, however, the focus has shifted to breaking up big tech under antitrust law. And the EU has levied fines of considerable size (though not large relative to Google's scale).
If they break up the larger tech platforms, that might solve the problem without the difficulty of trying to tweak Section 230.
Antitrust law has traditionally dealt with anti-competitive conduct, and while some politicians have argued that viewpoint bias is itself a competition issue, that is not so clear legally. But it could motivate regulators to take action if they have other evidence of anti-competitive behavior.
Also, I think the recent troubles various companies have had with China trying to limit what they say, by threatening to cut off access to its market, have caused a backlash against crackdowns on speech. It is one thing for Americans to discuss free speech in our own context, where we have some examples of viewpoint discrimination. But when we look at the Great Firewall, the social credit system, search and social media tailored to the state's wishes, and re-education camps for minority groups, we are reminded where that eventually leads.
If I recall correctly, Facebook is banned in China anyway; too much possibility for people to see non-state-approved info. So they are not risking much by embracing the free speech backlash.
If they cannot get access to that large new market, are seeing growth stall in the domestic market, and face backlash over privacy violations that has driven people off the platform, then they are forced to change their tune if they want to keep monetizing people's data in their existing markets.
Ultimately, I think it is public backlash that will have an impact. There is not enough political will to change the laws, and I don't think it would be an easy fix anyway. If folks like Trump, Gabbard, Cruz, etc., who have raised these issues, want to fix things, they should identify alternatives and urge people to move to them. There are some out there. But part of the problem is that platforms more open to controversial speech have been smeared as extremist, because users banned from other platforms congregate there.
At the same time, content creators are in no hurry to leave the large platforms, because they need the reach that big tech's market dominance brings. That is where the people are, and if they move they have a smaller pool to draw from. So they face on an individual level what companies wanting to do business in China face: do you accept the limitations, restrictions, and algorithms, or do you settle for a smaller overall market where you do not have to self-censor?