New Zealand and social media

My heart goes out to the families who lost loved ones at the mosques in Christchurch. After the Parkland shooting, I drafted a long blog post about gun violence but never ended up posting it. Even after revisiting it a few times over the following month, the words or the timing always felt wrong.

Over the last couple of years we’ve seen a growing backlash against social media. I won’t look for the video of this tragedy from New Zealand, and I hope I never accidentally see it. It is heartbreaking enough with words alone. Every story I read last week kept coming back to the same frustration: Facebook, Twitter, and YouTube are not doing enough to prevent their platforms from amplifying misinformation and hateful messages.

Margaret Sullivan of the Washington Post writes about the problems with social media leading up to and after a tragedy like this mass shooting:

To the extent that the companies do control content, they depend on low-paid moderators or on faulty algorithms. Meanwhile, they put tremendous resources and ingenuity — including the increasing use of artificial intelligence — into their efforts to maximize clicks and advertising revenue.

Charlie Warzel of the New York Times covers this too:

It seems that the Christchurch shooter — who by his digital footprint appears to be native to the internet — understands both the platform dynamics that allow misinformation and divisive content to spread but also the way to sow discord.

Nick Heer links to an article in The Atlantic where Taylor Lorenz documents how, after she followed a far-right account, Instagram started recommending conspiracy accounts to follow, which filled her feed with photos from Christchurch:

Given the velocity of the recommendation algorithm, the power of hashtagging, and the nature of the posts, it’s easy to see how Instagram can serve as an entry point into the internet’s darkest corners. Instagram “memes pages and humor is a really effective way to introduce people to extremist content,” says Becca Lewis, a doctoral student at Stanford and a research affiliate at the Data and Society Research Institute.

Duncan Davidson asks: “What are we going to do about this?”

The last few years, the worst side of humanity has been winning in a big way, and while there’s nothing new about white supremacy, fascism, violence, or hate, we’re seeing how those old human reflexes have adapted to the tools that we’ve built in and for our online world.

I can’t help but think about Micro.blog’s role on the web whenever major social media issues are discussed. It’s almost all I think about. We feel powerless against world events because they’re on a scale much bigger than we are, but it helps to focus on the small things we can do to make a difference.

Micro.blog doesn’t make it particularly easy to discover new users, and posts don’t spread virally. While some might view this as a weakness, and it does mean we grow more slowly than other social networks, this is by design. No retweets, no trending hashtags, no unlimited global search, and no algorithmically recommended users.

We are a very small team and we’re not going to get everything right, but I’m convinced that this design is the best for Micro.blog. We’ve seen Facebook’s “move fast and break things” already. It’s time for platforms to slow down, actively curate, and limit features that will spread hate.
