Is this another woe is me and my conservative allies thread?
Amazon recently terminated service for Parler on the basis of their inability to remove incitement, calls to violence, etc. which have been leading to problems in American political life.
Should they do the same for Twitter? Twitter continues to display incitement, calls for violence, etc. And Twitter likewise uses AWS.
Twitter Selects AWS as Strategic Provider to Serve Timelines | Amazon.com, Inc. - Press Room
Or should they not have removed Parler? What is your stance?
I have come across a few examples of such postings. See what you can find.
I did not see a way to report these, but that may be because I don't have an account at Twitter. If someone does, please feel free to report them.
In context "He" appears to be Jack Dorsey.
Yes, Amazon, Google, and Apple acted properly to shut the water off for Parler.
Context matters. Apple, Google, and Amazon acted within a certain context. The context colored their decisions and motives. The context separates Parler from Twitter.
Parler’s meteoric rise in popularity is traceable to an influx of right wingers, a byproduct of their exodus from what they perceived as the censorship tyranny of Twitter, Facebook, etcetera. An article in the WSJ may have been a catalyst for some of Parler’s high numbers. The WSJ ran a story that the Trump Administration was looking for alternatives to Facebook and Twitter because of their censorship. The WSJ suggested Parler; two days later the app was number one in the news category for iPhone, and within 7 days it gained 500,000 new members, going from 1 million to 1.5 million according to Matze.
Parler attracted a lot of Trump supporters, along with some noteworthy names: Jim Jordan, Ted Cruz, Devin Nunes, and Rand Paul, to name a few. Some of them actively recruited more right wingers to Parler. Jim Jordan tweeted to his 1.4 million followers to transition to Parler. Devin Nunes urged his 1.1 million followers (a slacker compared to Jim’s numbers) to shift to Parler.
Parler, perhaps, was almost entirely a right wing fiesta. In fact, Matze offered a $20,000 bounty for a progressive or liberal with 50,000 followers on Twitter or Facebook to open a Parler account. Ostensibly, Matze aspired for Parler to be more than a right wing echo chamber. If true, then he’s had nothing but disappointment with Parler.
Parler attracted this right wing group, many of them with a strong sense of self-victimization and extreme loyalty to Trump, by billing itself as a conservative paradise of laissez-faire moderation. Matze sold Parler as "a community town square, an open town square, with no censorship. If you can say it on the street of New York, you can say it on Parler."
They are a platform. They spelled out that they are not liable; the poster is the one with liability for their statements. Of course, a wild, wild west approach to moderating comes with risks. Enter Parler’s indemnity clauses: why moderate when you can indemnify yourself for when a lack of moderation blows up in your face?
Parler’s near nonexistent moderating and aversion to deleting or removing much of anything facilitated all sorts of right wing calls for violence and a violent revolution.
Trump’s lies and distortions fomented the anger and exacerbated the calls for violence. All of this transpired on a largely, if not exclusively, right wing platform: a perfect recipe for violence to be advocated, planned, etcetera, by many right wing supporters.
Parler differed from Twitter and Facebook. It was disproportionately, if not exclusively, a right winger platform where, because of Parler’s lax moderation, extremist views and calls for violence met no resistance, no censorship, no moderation, and were allowed to spread like wildfire and infect many, in ways not very likely to happen on Facebook and Twitter.
Then the tragic events of 1/6 transpired. It didn’t take long for people to recognize Parler played a role, most likely a larger role than Twitter and Facebook for the reasons noted above. With calls for more violence in D.C. trending on Parler after 1/6, such calls, which before 1/6 could pass as political firebranding on Parler, could no longer be tolerated at all.
Simply, Twitter and Facebook were not and are not the party room in which right wingers gather in the millions and incite violence before a large, receptive crowd, which is what played out on 1/6. AWS treating Parler differently from Twitter is because Parler is uniquely different from Twitter. Parler, because of the characteristics above, is situated in a way that can better facilitate more violence by right wingers than Twitter or Facebook can. Indeed, Parler members were not only involved in the protests at the Capitol building; some entered the Capitol. Hence, Amazon acted rationally in treating Parler the way it did.
One more thing. The tweets in your post which precede 1/6 aren’t persuasive, as they did not occur in the context outlined above. AWS acted within that context, which includes the events of 1/6 and the calls for more violence after 1/6. The risk of those threats of violence manifesting again, in part because of Parler, is what sets Parler apart from Twitter and Facebook, justifying any disparate treatment.
The tweets of violence posted above after 1/6 do not carry the same weight of risk of occurring, or of being legitimate threats, as the threats on Parler did, because of the events of 1/6.
The U.S. Supreme Court is the party you'd be looking for there, which has "silenced" some speech -- determined and fixed the limits on American free speech.
Reminds me of the old days when Christianity also silenced their opponents, but in different ways..
They have a volunteer staff, similar to CF, though CF's standards are far different.
They offered to ramp up an extra task force to deal with the recent calls for violence. This was not accepted by Amazon.
Amazon recently terminated service for Parler on the basis of their inability to remove incitement, calls to violence, etc. which have been leading to problems in American political life.
Should they do the same for Twitter? Twitter continues to display incitement, calls for violence, etc. And Twitter likewise uses AWS.
I think they should do as they like within the law and allow the free market to decide their fate.
Why, what do you think they should be forced to do, and by whom?
My understanding is that Twitter has some software moderation: an algorithm that evaluates posts and determines whether they violate the Terms of Service (ToS). The human moderators are there to catch the things the software moderator misses, as well as to correct things the software moderator may have incorrectly flagged. This software moderator makes it easier for the humans to find the posts that slip through, though it is still a difficult job where things will be missed.
My understanding is that Amazon was saying that Parler needed to create a software moderator, since human moderators would not be quick enough, not to mention that too much "bad" content (content against the ToS) would slip through human moderation. Parler refused to do this, so Amazon followed through on cancelling their contract.
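To make the two-stage setup described above concrete, here is a minimal sketch in Python. The keyword list and the scoring rule are purely illustrative assumptions, not Twitter's or Parler's actual algorithm; real systems use machine-learned classifiers. The point is only the structure: an automated filter screens every post, and flagged posts queue up for human review.

```python
from dataclasses import dataclass, field

# Hypothetical list of ToS-violating terms (illustrative only).
FLAGGED_TERMS = {"incitement", "attack", "violence"}

def auto_flag(post: str) -> bool:
    """Stage 1: software moderator. Flags a post if it contains any
    term from the (hypothetical) banned-term list."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

@dataclass
class ModerationQueue:
    """Stage 2: posts flagged by the software wait here for a human."""
    pending: list = field(default_factory=list)

    def submit(self, post: str) -> str:
        if auto_flag(post):
            self.pending.append(post)  # held for human review
            return "held"
        return "published"             # passes straight through

queue = ModerationQueue()
print(queue.submit("Nice weather today"))          # published immediately
print(queue.submit("We should attack the rally"))  # held for review
```

The asymmetry Amazon pointed to follows from this design: without the automatic first stage, every post would need a human look, and the review queue could never keep up with the posting rate.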
Since when is the present day the old days?
Reminds me of the old days when Christianity also silenced their opponents, but in different ways..
Is it moral for us to force a business -- that is, other American running their own business -- to operate their business according to our personal preferences, past the letter of the law?
Of course, that answer is 'no'.
Here's the business side of their own (right to choose) choice:
"Amazon is a for-profit company.
To host Parler over the next few months or year they would make some modest amount of money .... Not a lot of money.
But, hosting Parler at the same time Parler is hosting people planning to break the law in a serious way and do violence might expose Amazon to being sued in a huge lawsuit ....
$10,000 profit isn't much for risking what could even be $10 million or $100 million of liability from Parler's members actions over time ...."
Of course, this same kind of argument can apply regarding Twitter (a vastly more profitable customer for them to serve), and I bet Amazon Web Services is going to think about that as well (that's my guess).
Probably they will simply require Twitter to make an effort. (just my guess)
But, Twitter has evidently been making some effort. Just not enough I bet we agree.
But, hosting Parler at the same time Parler is hosting people planning to break the law in a serious way and do violence might expose Amazon to being sued in a huge lawsuit ....
$10,000 profit isn't much for risking what could even be $10 million or $100 million of liability from Parler's members actions over time ...."
That risk is why Section 230 exists in the first place. It shields platform providers from liability for statements made by third party users. They are not the speaker; they only host the speech.
47 U.S. Code § 230 - Protection for private blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This has been interpreted very broadly.
In Force vs. Facebook it was ruled that Facebook was not liable for Hamas elements coordinating terrorist activities on Facebook.
In Delfino vs. Agilent Technologies Inc. it was found that Agilent, as a provider of email service, was not responsible for threats of harm from one employee to another employee.
Not only is the protection very broad, but it also makes for rather straightforward handling in many cases.
This article describes why Section 230 is important to maintaining the modern internet with its various content providers. It discusses the benefits of Section 230 when examining case history.
How Section 230 Enhances the First Amendment by Eric Goldman :: SSRN
A few highlights:
Section 230(c)(1)’s immunity does not vary with the Internet service’s scienter. If a plaintiff alleges that the defendant “knew” about tortious or criminal content, the defendant can still qualify for Section 230’s immunity.
While the First Amendment sometimes mandates procedural as well as substantive rules, Section 230 offers more procedural protections, and greater legal certainty, for defendants. These procedural benefits create speech-enhancing outcomes even in situations where the substantive scope of Section 230 and the First Amendment would be identical.
A prima facie Section 230(c)(1) defense typically has three elements: (1) the defendant is a provider or user of an interactive computer service, (2) the claim relates to information provided by another information content provider, and (3) the claim treats the defendant as the publisher or speaker of the information. Often, judges can resolve all three elements based solely on the allegations in the plaintiff’s complaint. Thus, courts can, and frequently do, grant motions to dismiss based on a Section 230(c)(1) defense. In jurisdictions with anti-SLAPP laws (which provide a litigation “fast lane” to dismiss lawsuits seeking to suppress socially beneficial speech), courts can grant anti-SLAPP motions to strike based on Section 230 without allowing discovery in the case.
In other words, platform providers can be rather certain that cases regarding liability for content users provide will be dismissed. And the cases usually don't get very far for this reason, meaning lower costs. That was why the provision was made, so that companies would not have to review every bit of speech on the platform and take it down at the first hint of complaint for liability reasons. This promotes more speech.
Legislation passed in 2018 did alter one aspect of that liability in the case of knowingly promoting sex trafficking ads, material, etc.
Section 230 immunity is not unlimited. The statute specifically excepts federal criminal liability (§230(e)(1)),
So, that's why I think AWS would become liable -- at least on a criminal level, but I don't assume that's all (!) -- and sec 230 would not defend them, because it became common public knowledge what was going on, and also AWS definitely knew.
Like a gray area, when they know what is happening.

The case law has largely not upheld that. Take a look at this document from the Justice Department discussing the need for possible reform to address the broad interpretation given by the courts. This is from the Trump Justice Department, from June 2020.
The combination of significant technological changes since 1996 and the expansive interpretation that courts have given Section 230, however, has left online platforms both immune for a wide array of illicit activity on their services and free to moderate content with little transparency or accountability.
One of the suggested reforms:
Case-Specific Carve-Outs for Actual Knowledge or Court Judgments. Third, the Department supports reforms to make clear that Section 230 immunity does not apply in a specific case where a platform had actual knowledge or notice that the third party content at issue violated federal criminal law or where the platform was provided with a court judgment that content is unlawful in any respect.
They want to revise the law to make it so that immunity does not apply in cases where there was prior knowledge. But the reason they want that reform is that the current practice is to not hold them liable.