by Tracy Chou
March 3, 2022
Earlier this week, President Biden issued a call to address the harms inflicted on children by social media: accountability for platforms, increased privacy protections, limits on data collection, and bans on targeted advertising for children were all offered as concrete policy objectives moving forward.
But putting the onus on platforms alone to fix these problems won’t be sufficient to address the broader issues social media can cause for individuals of all ages, not to mention for society. Regulation can also open up platforms so that others can help solve these problems too. Fundamentally, we must give users themselves more direct power over their digital experience, independent of what the platforms choose to build.
Today, what you see online is a product of design for the average. With hundreds of millions of global users, the algorithms that command our attention must optimize for the many, not for you.
Consider two simple examples: tagging users and content recommendations. From LinkedIn to Instagram, platforms have made the design decision to allow anyone to tag you in a post — and to let you know when it happens via a notification. This is great for the platforms: it drives up engagement! For average users, it’s convenient to be alerted to a tag; they probably want to know about it. But for people who experience regular harassment, this design decision has opened up a vector for abuse, as malicious people can spam them with tags in harassing and violent posts.
Content recommendations offer another example. Take recommended Tweets in your timeline: though an average user might appreciate discovering new content created by those outside their follow lists, for many others, the recommendation algorithms disrupt what would otherwise be a very carefully curated feed, and expose them to content they absolutely don’t want to see. That build-up of irrelevant noise drowns out the content they actually care about, and makes it harder to get the value they want from that platform. And of course, the problems only compound when we consider the negative externalities of “related content” recommendations on platforms like YouTube, where misinformation or radicalization can rapidly proliferate without oversight.
But because of their scale, it doesn’t make sense for the platforms to cater to the long tail of users and their diverse range of preferences. So today, we’re forced to make do with what they offer, however annoying or damaging it may be.
There’s a better alternative, and it comes in the form of what Daphne Keller of Stanford calls “middleware”: tools that sit between platforms and users, giving you the ability to create the digital experience that best serves you, however that looks. If you can decide what matters to you (what you want to consume, when, and how), you can craft an experience of the internet that helps you meet your goals, see truly relevant information that meets the standards you set, and avoid the overwhelm that comes with drinking from the undifferentiated digital firehose every day.
Given the constantly changing landscape of platforms and digital surface areas that we need to interface with, this can’t just happen for each platform independently. Instead, we need a new layer on top of each user’s broader digital experience, to help them filter the noise, connect with what they care about, and discover the opportunities that the internet has always promised — but not always delivered.
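To make the idea a little more concrete, here is a minimal sketch of what such a user-owned filter layer could look like. It is only an illustration under assumptions: the names (Post, FilterRule, Middleware) and the example rules are hypothetical and don’t correspond to any real platform’s API.

```python
# A minimal sketch of the "middleware" idea: a user-owned filter layer that
# sits between platform feeds and the person reading them. All names here
# (Post, FilterRule, Middleware) are hypothetical, not any platform's real API.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Post:
    platform: str      # e.g. "twitter", "youtube"
    author: str
    text: str
    recommended: bool  # injected by the platform, not from accounts the user follows


# A rule is simply a predicate the *user* defines: keep the post or drop it.
FilterRule = Callable[[Post], bool]


class Middleware:
    """Applies the same user-defined rules to feeds from any platform."""

    def __init__(self, rules: List[FilterRule]):
        self.rules = rules

    def filter(self, feed: Iterable[Post]) -> List[Post]:
        return [p for p in feed if all(rule(p) for rule in self.rules)]


# Example rules: hide algorithmic recommendations, mute a harassing account.
my_rules: List[FilterRule] = [
    lambda p: not p.recommended,
    lambda p: p.author not in {"known_harasser"},
]

mw = Middleware(my_rules)
clean_feed = mw.filter([
    Post("twitter", "friend", "hello!", recommended=False),
    Post("twitter", "known_harasser", "...", recommended=False),
    Post("twitter", "brand", "promoted junk", recommended=True),
])
print([p.text for p in clean_feed])  # -> ['hello!']
```

The point of the sketch is that the rules live with the user and travel across platforms, rather than being whatever a single platform decides to ship.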
Consider a few small examples of what’s possible when users, not platforms, get to decide how their attention gets directed (and don’t have to worry about self-expression or professional obligation resulting in immediate harassment).
The relationships we build in digital communities of interest are powerful and profound, and help so many — particularly those with marginalized identities — feel less alone in the world. Yet today, engaging openly and earnestly in these communities online also often invites acute harassment from others.
Too many people feel the need to self-select out of the conversation to preserve their mental health, paradoxically cutting themselves off from the very community support they need. With better support for individuals to create their own boundaries, they could more easily protect their mental wellness and continue to enjoy the rich connections that their communities provide.
The deluge of content online today makes it increasingly difficult to identify misleading or blatantly false information that may show up in our feeds. But what if you could choose your own criteria for what types of content you want to see? Maybe you prefer to receive updates on the COVID pandemic from expert scientists, or to preemptively filter out articles that reference scientific studies that haven’t been peer reviewed. You could even choose only to see reporting on politics that has been fact checked by an independent assessor you trust.
By allowing users to pre-filter based on concrete criteria that match their personal standards, we can help individuals avoid inadvertent exposure to the types of content they would never choose to engage with in the first place. And letting the user proactively make these calls avoids some of the challenges that arise when platforms make all the judgments about which publications to boost or hide.
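As a rough sketch of what those personal standards could look like, the example below expresses them as data the user controls. The metadata fields (peer_reviewed, fact_checker) are assumptions for illustration; in practice they would come from labels and assessors the user chooses to trust, not from the platforms themselves.

```python
# A hedged sketch of user-defined pre-filtering criteria. The fields
# peer_reviewed and fact_checker are hypothetical labels a user might trust,
# not anything platforms expose today.
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class Article:
    title: str
    topic: str
    peer_reviewed: Optional[bool] = None   # None = unknown
    fact_checker: Optional[str] = None     # e.g. an independent assessor


# The user's personal standards, expressed per topic as simple predicates.
my_standards: Dict[str, Callable[[Article], bool]] = {
    "science": lambda a: a.peer_reviewed is True,
    "politics": lambda a: a.fact_checker in {"assessor_i_trust"},
}


def passes_my_standards(article: Article) -> bool:
    check = my_standards.get(article.topic)
    return True if check is None else check(article)


feed = [
    Article("New COVID variant study", "science", peer_reviewed=True),
    Article("Preprint hot take", "science", peer_reviewed=False),
    Article("Election explainer", "politics", fact_checker="assessor_i_trust"),
    Article("Unverified rumor", "politics"),
]
print([a.title for a in feed if passes_my_standards(a)])
# -> ['New COVID variant study', 'Election explainer']
```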
We’ve seen firsthand over the past few years how the courageous actions of corporate whistleblowers in the tech industry, speaking out on social media, have led to policy changes at both the company and the legislative level. But not all workers are in roles or industries that allow them to withstand the inevitable harassment that follows speaking out. Journalists, too, have shed essential light on powerful players in industry, government, and culture, only to face devastating online attacks (and newsrooms often have not yet developed the infrastructure to support them effectively).
More broadly, today we run the risk of losing out on incredible insight, activism, and perspectives because people with marginalized identities are disproportionately likely to be targeted with harassment and abuse, and they know it. Countless people self-censor because of the bad-faith responses, snarky comments, and abuse they fear they may receive if they choose to exercise their voice online. If it becomes easier for everyone to filter out these harmful attacks, more critical voices will join the conversation, and all our experiences of the internet will be richer for it. Even discourse among people of differing opinions becomes more possible when we can focus on civil engagement instead of fending off abuse at the same time.
It’s difficult for elected officials to respond to their constituents (or monitor their concerns) on social media today, because they’re overrun by harassment and death threats. More powerful tools to sift through the noise to find the earnest inquiries of their communities will allow them to see good faith feedback, address concerns, and stay more engaged than is currently possible for many.
When faced with the overwhelm and toxicity that so often accompany our current social media experience, it’s easy to consider just logging off. But disconnecting altogether means losing all the opportunities, connections, and creativity that social media has spawned. It means giving up some of the most important pieces of our personal and professional lives today. It’s not “just” online; it’s real life, and it matters. We deserve the tools to make it better on our own terms.
Although platforms have sometimes opened up proactively in the past, the last year has seen a disheartening move in the opposite direction. It’s clear that social media companies will need different incentives in order to support the future users deserve.
Want to help? Some suggestions:
And if you’re really passionate about this opportunity, come and join us at Block Party! It’s time to create the digital experience everyone deserves.
This blog post was updated on October 15, 2023.