The Letterbook team has published some research into moderation tooling on the Fediverse and its strengths and weaknesses. It contains some great analysis and recommendations; check it out!

  • OpenStars@piefed.social · 6 hours ago

    I really like the approach PieFed takes: providing greater transparency in the decision-making process and letting users make their own determinations, rather than offering only a binary yes/no between removing and retaining content.

    As one example, it lets people block users, communities, or even whole instances in a true sense, unlike Lemmy’s version, which calls itself an “instance block” but then acts as a mere “community mute”.
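
    Roughly, the distinction comes down to what gets filtered, something like the sketch below. The data model and function names here are hypothetical, just to illustrate the behaviour described above, not PieFed’s or Lemmy’s actual code.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Item:
        author_instance: str     # instance hosting the author's account
        community_instance: str  # instance hosting the community the item was posted to

    def visible_with_community_mute(item: Item, blocked: set[str]) -> bool:
        # "Community mute" behaviour: only content from communities hosted on a
        # blocked instance is hidden; users from that instance still appear
        # everywhere else.
        return item.community_instance not in blocked

    def visible_with_true_instance_block(item: Item, blocked: set[str]) -> bool:
        # A true instance block: content is hidden if either the author's account
        # or the community lives on a blocked instance.
        return (item.author_instance not in blocked
                and item.community_instance not in blocked)

    # A comment written by someone on blocked.example in a community hosted
    # elsewhere is still visible under the mute, but not under the block.
    item = Item(author_instance="blocked.example", community_instance="piefed.social")
    print(visible_with_community_mute(item, {"blocked.example"}))       # True
    print(visible_with_true_instance_block(item, {"blocked.example"}))  # False
    ```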

    It also lets you turn off notifications for individual items, which gives people a chance to cool off rather than jumping straight to the ban-hammer (I am trying to be funny there; an individual cannot really “ban” anyone, so I mean block:-).

    And a big one: auto-collapsing and even auto-hiding comments based on the number of downvotes they receive. Anyone who wants to click to expand a collapsed comment is free to do so, or to opt out entirely; I have disabled the auto-hide feature for myself by setting the threshold to 10000. I hope the UI tools will get better in that regard, e.g. right now if you reply to someone and then receive a notification, it won’t auto-expand the auto-collapsed entries, so you really have to hunt around for what the notification was pointing at (though that is not as bad as the more deeply nested “Continue thread” case, where the notification tries to take you to something that isn’t even on the same page!). But implementation quirks aside (it is a very recently added feature, ofc), the theory is wonderful!:-)
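
    In other words, the mechanism amounts to comparing a comment’s downvote count against per-user thresholds, something like this sketch (the default numbers and names are made up for illustration, not PieFed’s real settings):

    ```python
    def comment_display_state(downvotes: int,
                              collapse_threshold: int = 5,
                              hide_threshold: int = 20) -> str:
        """Return how a comment should be rendered for this user."""
        if downvotes >= hide_threshold:
            return "hidden"      # not shown unless the user explicitly expands it
        if downvotes >= collapse_threshold:
            return "collapsed"   # shown folded, one click to expand
        return "visible"

    print(comment_display_state(3))                         # visible
    print(comment_display_state(12))                        # collapsed
    print(comment_display_state(50))                        # hidden
    print(comment_display_state(50, hide_threshold=10000))  # collapsed; hiding effectively disabled
    ```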

    And potentially even bigger: placing a label next to a person based on their “reputation” score, or even next to a whole instance (e.g. Beehaw), so that someone does not walk into a conversation with them unaware. They still have every right and ability to, it’s just… hey, they were warned, you know?:-P
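
    Conceptually that labelling is just a mapping from score to badge; here is a rough sketch, with made-up buckets and cutoffs rather than PieFed’s actual reputation scoring:

    ```python
    def reputation_label(score: float) -> str | None:
        # Illustrative buckets only; the real system's thresholds are unknown here.
        if score <= -10:
            return "low reputation"
        if score >= 100:
            return "well established"
        return None  # no label for the middle of the range

    def render_author(name: str, score: float) -> str:
        label = reputation_label(score)
        return f"{name} [{label}]" if label else name

    print(render_author("alice@beehaw.org", 250))      # alice@beehaw.org [well established]
    print(render_author("newcomer@example.com", -42))  # newcomer@example.com [low reputation]
    ```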

    A democratization of moderation, yeah, it sure sounds nice:-) Of course it still needs some “hard” limits, such as spam and NSFL content that simply needs to be REMOVED as quickly as possible, but providing automated tools that allow for an entire spectrum of ways to engage or not engage with content is… just wow. So exciting, and futuristic, especially compared to the likes of the hard-nosed Reddit mods (and I say this as a former one, for two small communities: there really were only the two choices, and it sure would have been nice to have had more options to choose from between “allow” and “remove”!:-).