Consensus
Sat 28 Sep 2024 5:02PM

the "how can we proactively address racism"/"what can we learn from Cohost" discussion

Alyaza Birze

as we move toward launch and into the beginnings of the Website League, it's time to start confronting probably our hardest question(s):

  1. how can we proactively address racism?; and

  2. what can we learn from Cohost's failures on this front and others?

there are probably other questions that will flow from these two, but in the interest of not being overwhelming let's start with those two.

My feedback

my feelings on this, and the specific failures Cohost had, are elaborated at length in my postscript to Cohost So White, but to summarize:

  1. Cohost had an incredibly white founding userbase, and almost everybody that userbase invited was also white. this made it nearly impossible to invite, attract, and retain minority users

  2. Cohost's founding community had aggressively parasocial and protective elements that often reacted negatively to criticism of the site, and victim-blamed or dismissed users for "creating their own problems" and "using the site wrong"

  3. bad actors were not dealt with and the moderation was, in staff's own words, cowardly at times, leading to an accumulation of bad outcomes. this might be an individual failing or a systemic one; but either way it's a failing.

  4. Cohost fell victim to this dynamic: "We also need to be hyper aware of minorities which are relatively rare due to the law of large numbers. [...] let’s say only 1 in 100 people are trans - we now have an equal amount of trans people and jerks on the internet. If even 1 of these trans folks don’t want to engage with a jerk, a single bad actor can quickly drive all transgender people off a platform like this one."

  5. Cohost's anti-harassment measures were not useful and failed from first principles. the issue was more abstracted and hidden for regular users, but still apparent to anyone targeted by a harassment campaign or just the center of attention generally.

  6. in the absence of technological measures, many of the insinuated benefits of Cohost—the things that enabled its lack of harassment—were purely cultural. this culture broke down frequently, especially with users who did not at least superficially resemble the fabled Founding Cohoster in demography

  7. the sum of this was to make Cohost's community feel hollow, fake, and performative in its anti-racism.

External feedback

i've also solicited feedback from other relevant users (and those in close contact with them) on what they feel the problems are/were. here are some of the responses as of now:


  • you can’t have different moderation standards for your friends

  • knowing whether a report hasn’t been seen yet, or has been seen and decided not to be an issue, can be helpful. with multiple instances, it could encourage someone to stay on or leave their instance, which is actually a choice in this system vs cohost. the instance i use doesn’t do this for individual reports but does do it for instance bans, so it might not be necessary if there’s enough trust in the moderation (i have this trust from simply not seeing shit that i’m aware is happening, because they already banned it), but i would wonder whether, coming from cohost, that trust might be fragile?

  • you gotta moderate besides just reports… again, i assume this is much easier with a smaller instance where you can plausibly just browse and check things out!


  • Leaving wild-ass comments unchallenged and chastising people who pointed them out for tone (thinking about the “neotenuous” commenter in particular)

  • Dismissing criticism about racial (or really any) issues as either unhelpful whining or active sabotage

  • Using the prospect of disunity being a threat to the project as a whole as a way to shut down criticism


I don’t think the anti-discovery / anti-harassment measures on cohost really had much to do with the site’s culture, compared to the fact that registration was heavily restricted to friends and family for a long time.

I think in many ways it was a drawback because it didn’t actually stop targeted harassment and made it harder for any sort of support network to form around most of it, because the visibility was kept low. I described it to the effect of an anti-solidarity measure.

It’s not actually possible to engineer a system that prevents abuse of itself, which is a major reason why human supervision (i.e., moderation) is necessary and needs to be proactive. I think cohost believed that having these measures would allow for less time and attention spent actively moderating content. This would also match up with the presumption that it was more important to moderate for tone / to prevent fighting than to actually manage the long-term outcomes those disputes are tied to (which, go figure, tended to hurt racially marginalized users more).


If mods have rules of engagement, then those rules should be equally applied across the board. [...] Whether it's by reports or a lot of people complaining, how mods engage with a potential issue should be applied equally across the board.


walking mirage Sat 28 Sep 2024 5:13PM

We will need to come up with a concise and simple set of rules for moderators on top of the instance operator CoC, and a mechanism by which people can escalate things if a moderator's not behaving as they ought to. it seems like escalation directly to the stewards as a whole would be useful, but as sirocyl's mentioned, report bombing can also be a problem.

we will also need a conflict resolution system. we have a lot of social infrastructure to build along with the technological infra.

should we have some kind of "pair programming" type thing? randomly selected stewards checking in with an instance to make sure things are going ok, or similar things?


atonal440 Sat 28 Sep 2024 5:45PM

@walking mirage I like the "pair moderation" idea a lot. We should be trying to create a moderation culture that's less defensive, less personally invested in each decision, and willing to revisit things. Moderators should always be asking each other "so what were you thinking when you decided this? Is that still what you think?"


sirocyl Sun 29 Sep 2024 10:34AM

@walking mirage I briefly mentioned it in the Discord, but a social infrastructure working group may be a good thing to stand up.

And, as part of this, it may also be prudent to empanel something akin to a "reporters' ombudsman" for moderators and other Stewards to check ourselves against.


vis Sat 28 Sep 2024 5:36PM

Re: anti-discovery features + moderating beyond reports: admin-fe (akkoma admin panel) has a way for admins to view all of an instance's posts. (statuses > select the local instance > check "show only local statuses") This should be used proactively by akkoma moderators.
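(For concreteness, a rough sketch of what a proactive sweep could look like outside the admin panel, assuming Akkoma still exposes the Pleroma-style admin statuses endpoint it inherits. The endpoint path, parameters, and response fields below are assumptions to verify against your node's version, not a confirmed recipe.)

# Hypothetical sketch: fetch recent local statuses for proactive moderator review.
import requests

INSTANCE = "https://example.node"  # hypothetical node URL
ADMIN_TOKEN = "..."  # admin-scoped OAuth token; keep it out of source in practice

# Assumed Pleroma-style admin endpoint for recent statuses; Akkoma is a Pleroma
# fork, so this is plausible but should be checked against your version's docs.
resp = requests.get(
    f"{INSTANCE}/api/v1/pleroma/admin/statuses",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    params={"local_only": "true", "page_size": 50},
    timeout=30,
)
resp.raise_for_status()

# Print a one-line summary per status so a moderator can skim recent local activity.
for status in resp.json():
    author = status.get("account", {}).get("acct", "?")
    text = status.get("content", "")[:120]  # content is HTML in Mastodon-style APIs
    print(f"{author}: {text}")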

re: 3. I think this is partially due to the weird hangup white folks (i am a white folks) have about combating racism proactively. There is a misguided belief that white people can't speak on racism and should therefore defer all discussions of racism to people of color. Which, y'know, puts a lot of responsibility on people of color to defend themselves, and none on white people to defend POC from racism. That doesn't work when you have the responsibility to moderate a website.

(note: i feel the same way about cis folks dealing with transphobia, and that's where this opinion comes from; i am trans. but that's off-topic)

As for how to combat that:

  1. I think it should be made very clear to moderators and potential moderators that it is their responsibility to enforce the website league's rules despite their personal hangups or awkwardnesses around that enforcement. But that's a soft rule; a material solution might be:

  2. The ability for moderators to check in with each other about moderation challenges and feel secure that they are doing the right thing while they're learning how to do it. This can be a zulip or discord chat; I don't like the idea of using loomio or a forum, because moderation decisions do have to be made pretty quickly.

  3. Diversity in moderation, which goes back to diversity in users.

re: 1. This is part of why I feel a little bit hesitant about opening too slowly. I want to avoid moderation load, but I also am noticing a lot of instances that are set up specifically for friends, which are going to replicate the same problems that cohost had. This is part of why I intentionally set up a public node: I want strangers on my website, and I don't mind dealing with the problems that having strangers on my website entails.


Mori Sat 28 Sep 2024 5:39PM

I feel strongly about moderation transparency. It helps keep moderators accountable and increases faith that moderators are making wise decisions.

I don't know to what degree moderation decisions need to be public, but at a bare minimum, people reporting things need to know if their report has been seen and what has been done about it. If moderation action was not taken, some explanation of why no action was taken might also be prudent.

As far as keeping moderator bias in check and making sure we're applying rules equally, I think we need:
1. Clear rules for moderators/examples of the types of behavior we're unwilling to tolerate.
2. Guidelines for when and how to de-escalate situations.
3. Some form of inter-instance moderator cooperation/accountability.


atonal440 Sat 28 Sep 2024 6:11PM

Not reproducing the whiteness of Cohost from a seed group of cohost users will require a ton of user growth from people who never used cohost. I think the best thing we can do to help that along is to provide a neutral platform, not in the stupid "I don't see race" way exactly, but a place where people feel their contributions to the site will be read in generous good faith.


ocean Sat 28 Sep 2024 6:24PM

Can we add these to a goals living document? I feel like that would be a nice place to start aggregating these goals.


isomorphism Sat 28 Sep 2024 7:10PM

Loomio has support for something like that, yeah? Maybe on the wiki. We've been using google docs, but making something here or on the wiki seems preferable.

what I've got so far from replies here is that the following things could be, say, threads in the governance working group:

  • writing a concise and simple set of rules for moderators on top of the instance operator CoC. includes:

    • Clear rules for moderators/examples of the types of behavior we're unwilling to tolerate.

    • de-escalation guidelines

    • language on applying rules fairly and not letting awkwardness/closeness/conflict avoidance/other typical white tactics get in the way of preventing racism

    • language on

  • establishing our conflict resolution system. would this include users having a way to say “hey i don’t think this was moderated well” to the stewards or other appropriate group?

  • establishing systems for moderators to check in with each other / establishing inter-instance moderator cooperation/accountability. Pair moderation and a group that checks in with moderators are suggestions here.

Past that, I’m thinking about, like... what language users see as they sign up, interact with us, and log in. “No racism” is in every CoC and ToS i’ve seen, and we all know how that goes. The CoC as we’ve currently got it is good, but it is not the only place I’d want anti-racist language to show up as someone signs up. Like, in our “how to join a node” and “how to use the league” guides, or other ingress flows directing people to instances, can it be there? Can we have broadcast put out a message that node operators boost, for people doing friends-and-family instances that aren’t coming in through the more public process?

not saying that having people see stuff up front is like, the end all be all, but it hopefully does set standards that we can be held to going forward.


Katja Sat 28 Sep 2024 7:32PM

I'm going to give my thoughts on somewhat of a point-by-point basis.

  • Founding user base: in addition to being more proactive about not disproportionately inviting white people, we absolutely need to not encourage white friends of ours who have more-troublesome-than-average hang-ups about race to get involved. If any of us have white friends who would really enjoy the Website League but who also tend to shut down or act out inappropriately, even privately, when race comes up, we should not be inviting them to define the League's culture at all.

  • Protectiveness: we can't just assume that the lack of financial peril surrounding our effort as compared to ASSC's will prevent this kind of dynamic from arising. (It might be worth actively encouraging League users to give us comments and criticisms freely and early, and also openly accepting feedback from outside the League — taking a "thanks for the comments; we'll seriously consider this to whatever extent appropriate" approach to "the haters" is going to be a lot better than even implicitly inviting users to be defensive of us.)

  • Moderator cowardice: I will once again reiterate my position that we should be taking a "(delete/suspend/ban) first; investigate fully as soon as practical" approach to conduct that appears to be bigoted or oppressive, especially when the parties who might have been engaging in prohibited conduct are friends or Big Name Users™. This also ties into the law of large numbers thing; especially at any time when the League is small enough that there might be one community of people in a certain minority demographic on it, it's better we get a reputation for being assholes to People Who Are Sincerely Asking Questions Not In Bad Faith or whatever than letting others' misconduct chase off those communities.

  • Technological anti-harassment measures: I think I'll leave this for people who are more involved in the tech side of things, but absolutely, I agree with your observation here.

  • Moderation action transparency: we absolutely need this, yeah. At the minimum, to individual reporters, and we need to have clear standards on when transparency rises to the level of being public posts.

  • Moderation beyond reports: depending on what the technical situation is for operators and staff on various nodes, it might be good to get staff to conduct searches for terms which are likely to appear in troublesome discussions (a rough sketch of what that could look like follows this list).

  • Visibility and how that relates to harassment: application of the rules related to harassment and what not needs to be done in a way which doesn't have a chilling effect on highlighting active or recent harassing, bigoted, or oppressive conduct. We can't borrow, say, late '10s Mastodon's dogmatic rejection of Screenshot Dunks™ or whatever. Seeing as we're using a technological setup that's intended to discourage uncontrolled social firestorms, we need the enforcement side of things to not get in the way of efforts to shut down bigots and harassers.

  • Also, yeah, we should have a separate document setting out expectations for node staff re: how to apply the League Code and how to moderate generally.
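(On the "moderation beyond reports" point above: a minimal, hypothetical sketch of what a term sweep over already-fetched statuses might look like. The watchlist, field names, and HTML stripping are placeholder assumptions; any real watchlist would need community input and human review so that, e.g., reclaimed or in-community language isn't acted on automatically.)

import html
import re

# Hypothetical watchlist; a real one needs community input and regular review,
# and matches should prompt human review, never automatic action.
WATCHLIST = ["example slur", "example dogwhistle"]

def strip_html(markup: str) -> str:
    """Crudely reduce status HTML to plain text for matching."""
    return html.unescape(re.sub(r"<[^>]+>", " ", markup))

def flag_statuses(statuses: list[dict]) -> list[dict]:
    """Return statuses whose text contains any watchlist term (case-insensitive)."""
    flagged = []
    for status in statuses:
        text = strip_html(status.get("content", "")).lower()
        if any(term in text for term in WATCHLIST):
            flagged.append(status)
    return flagged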


vis Sat 28 Sep 2024 11:24PM

@Katja

I will once again reiterate my position that we should be taking a "(delete/suspend/ban) first; investigate fully as soon as practical" approach to conduct that appears to be bigoted or oppressive, especially when the parties who might have been engaging in prohibited conduct are friends or Big Name Users™.

seconded. and, i really don't think there's anything wrong with timing people out/temp suspending them for getting in really heated arguments, even if no one broke CoC. often what that situation needs is for people to back up and cool off, anyway
