01-31-2021, 12:34 AM
If I understand it correctly, a group of people in the Reddit community "r/wallstreetbets" encouraged each other to buy GameStop stock, and it seems the intention of many of them was to trip up the short selling of the stock by large funds. For the sake of argument - that is, for the sake of the point I want to make - let's assume that their actions are a form of collusion that amounts to illegal manipulation of the market. [Even if it is, I don't think most of the participants had any idea it might be illegal, and intention often plays a big role in the law.] If it is illegal, it is fortunate for Reddit as an internet platform that it has Section 230 protection from being found to be a partner in the illegal collusion.
There is a lot of talk now about it being time to modify Section 230 to make platforms more liable for the content found on their platforms. I don't think it is wise to completely do away with Section 230, but I would like to see platforms - some of them making mucho-billions of dollars - required to do moderation where some compelling social need supersedes their protection from liability for objectionable content.
I don't want to snuff out platforms like Reddit, though, that rely on volunteer moderators. I wouldn't want to make them liable for what was said by the community of users on r/wallstreetbets. But there are some social needs - like not allowing child sex traffickers - where they have to muster the resources to filter out socially damaging things. I'll admit, though, that it's a huge problem to reach a consensus on what counts as "sufficiently socially damaging" for many things (fortunately, not for child sex trafficking).
I'm thinking maybe we could rework Section 230 so that if an internet platform wants Section 230 protection, it has to show that it has systems in place to filter out content that is deemed too dangerous - with "too dangerous" spelled out with some precision in the legislation amending Section 230. Platforms would be free to operate without Section 230 protections, but that would leave them more at the mercy of debilitating lawsuits over content.
What, if any, modifications do you think should be made to Section 230 protections?