Bluesky’s fight against harassment: Moderation reports skyrocket 17x in 2024

Following its explosive growth in 2024, Bluesky recorded a 17x rise in moderation reports, a heavy workload for its Trust & Safety team. The majority of reports concerned account harassment, trolling, and intolerance, which remain pressing issues for the young social network and have sparked intense protests over individual moderation decisions.

Bluesky added an unprecedented 23 million users in 2024, driven largely by significant policy changes at competing platforms such as Twitter/X. The surge in activity, however, placed a far heavier burden on the platform’s moderation team, which now deals with graphic content around the clock. To cope, Bluesky expanded its moderation staff to roughly 100 people and began offering them psychological counseling.

Moderation reports totaled 6.48 million last year, a stark rise from the 358,000 reports filed in 2023. To improve the experience for users, Bluesky plans to surface moderation reports directly in its app this year. Similar to X’s approach, this will let users track actions and updates more conveniently and will simplify the appeals process.

In light of the enormous influx of reports, Bluesky began automating the handling of certain report categories, such as spam, to keep pace with the volume. As a result, “high-certainty” account reports are now processed within seconds, down from the previous 40-minute turnaround. Human moderators remain in the loop to catch false positives and handle appeals.
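To make the pattern described above concrete, here is a minimal sketch of an automated report-triage pipeline in Python. This is not Bluesky’s actual implementation: the `score_report` classifier, the `AUTO_ACTION_THRESHOLD` value, and the queue names are all invented for illustration. The only idea taken from the article is the split itself: auto-action only “high-certainty” reports, and route everything else, plus appeals, to human moderators.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Report:
    report_id: int
    category: str            # e.g. "spam", "harassment"
    target_account: str
    confidence: float = 0.0  # filled in by the classifier

# Hypothetical confidence threshold: only reports the classifier is
# nearly certain about are auto-actioned; everything else waits for a human.
AUTO_ACTION_THRESHOLD = 0.98

human_review_queue: deque[Report] = deque()
appeals_queue: deque[Report] = deque()


def score_report(report: Report) -> float:
    """Stand-in for a real spam/abuse classifier (invented for this sketch).

    A production system would use ML models or heuristic signals; here we
    simply pretend that spam is easy to classify with high certainty.
    """
    return 0.99 if report.category == "spam" else 0.60


def triage(report: Report) -> str:
    report.confidence = score_report(report)
    if report.confidence >= AUTO_ACTION_THRESHOLD:
        # "High-certainty" path: handled in seconds, no human wait time.
        return f"auto-actioned account {report.target_account}"
    # Uncertain reports go to a moderator (the slower, human-reviewed path).
    human_review_queue.append(report)
    return "queued for human review"


def appeal(report: Report) -> str:
    # Appeals, including false positives from auto-actioning, always go
    # to human moderators rather than back through automation.
    appeals_queue.append(report)
    return "appeal queued for human moderator"


if __name__ == "__main__":
    print(triage(Report(1, "spam", "bot-account.example")))
    print(triage(Report(2, "harassment", "someone.example")))
    print(appeal(Report(1, "spam", "bot-account.example")))
```

The design trade-off this illustrates is the one implied by the article: a high threshold keeps automated takedowns fast for obvious cases like spam, while the human queues absorb anything ambiguous, which is why false positives and appeals still reach people rather than machines.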

A total of 1.19 million active Bluesky users, or 4.57% of its community, filed at least one report in 2024. The majority of these reports, 3.5 million, targeted individual posts. By predominantly reporting antisocial behavior such as trolling and harassment, Bluesky’s users signaled a clear preference for a more respectful social network, an implicit rebuke of the pervasive toxic culture on X.

Bluesky also shared an update on its labeling service as part of its fight against misuse. The service applied more than 55,000 ‘sexual figure’ labels and over 22,000 ‘rude’ labels to posts and accounts. In addition, 93,076 users appealed roughly 205,000 moderation decisions in 2024.

Moderators took down 66,308 accounts, while automated systems removed another 35,842. Bluesky complied with 146 of 238 requests from law enforcement, governments, and legal firms. On child safety, the platform submitted 1,154 confirmed CSAM reports to the National Center for Missing & Exploited Children (NCMEC).

Original source: Read the full article on TechCrunch