by Block Party
April 19, 2023
Did we have the wrong takeaways from Cambridge Analytica?
On the surface, many of them seem right: Using nefariously acquired data to swing elections is bad. Consumers should have rights to control how their data is used online. Platforms should take a more active role in safeguarding their users’ privacy. And the results — global legislation to help secure those rights; increased calls for platform transparency and responsibility — are directionally promising.
But in the fallout from the scandal, someone decided that allowing third-party applications any access to user data is too dangerous. And with that, in trying to bluntly solve the problem of bad faith data harvesting, large social media companies ended up killing the best solution to a huge set of social media safety issues.
There’s a way to unlock the potential of a third-party app ecosystem focused on user control and safety, while still mitigating the potential risks of allowing access to the necessary user data to enable such tools in practice. The answer lies in a little-known concept first introduced in the landmark California Consumer Privacy Act: the “authorized agent”.
In this piece, we’ll discuss:
- why an authorized agent can serve users in ways the platforms can’t,
- what API access an authorized agent actually needs,
- how that access differs from the access researchers need, and
- what safeguards exist against bad actors and bad faith behavior by the platforms.
Let’s start with a fundamental question faced by all big consumer platforms: who has the most context on what a specific user wants? After all, the largest platforms need to figure out how to serve relevant—and safe—content to billions of users. And none of those billions of users have the same desires or boundaries. This complicates platforms’ ability to provide good algorithmic recommendations. (It’s also a reason platforms sometimes claim they cannot do too much proactive content moderation: they lack the context to effectively evaluate harm in ambiguous social circumstances.)
One way to handle content that the law and the platform Terms of Service allow, but that doesn’t match a user’s own preferences, is to give the user more direct control over what they do and don’t see. But in practice, most people don’t want to have to see bad content in order to decide they don’t want to see it. They just want to set some preferences and not see the unwanted content in the first place. Given the wide range of preferences you might run into globally, it’s not surprising that platforms haven’t built tools to directly enable this kind of fine-tuning. Those tools require culture- and region-specific context the platforms aren’t in the business of knowing.
This gap, between what the platform can provide in terms of customization, and what the user actually wants to experience, is where an authorized agent can help.
At a high level, an “authorized agent” is an intermediary (such as a person, or more likely, an app) that can take actions on behalf of a specific consumer, at their explicit direction. Having an authorized agent is not so different from having an agent in other parts of your life: your talent agent negotiates deals for you, your real estate agent helps you sell your house, your lawyer signs contracts on your behalf.
Critically, unlike the platforms, the authorized agent has meaningful context on your desires as a user, because it can only act according to your specific requests.
There is a lot you might want to delegate in your digital life. You might, for instance, want an authorized agent to review all your social media mentions and hide the abusive ones so you don’t feel the mental health toll of experiencing online harassment directly. Or you might want to remove content from your timeline that doesn’t meet your personal standards, even if it’s acceptable according to the law or the platform Terms of Service.
In the state of California you already have the right to an authorized agent to manage some aspects of how your data is used on social media platforms. It’s written into the California Consumer Privacy Act. But in practice, there are meaningful barriers to acting as an authorized agent.
Large social media platforms need to give authorized agents access to take action on behalf of users. To keep up with the speed and scale of activities that technology enables, that access needs to be programmatic. Or, in other words, authorized agents need access to APIs. The alternative — a user providing login credentials to their account directly, or literally handing over their phone — is all or nothing, and far harder to audit or meaningfully safeguard.
Sure, Meta and other large platforms offer some APIs today. They’ve maintained or even expanded those that focus on sharing content to their platforms, creating new content, and better managing advertising. But notably missing is the type of access that would allow a consumer or their agent to assert meaningful control.
What is an API?
APIs, or application programming interfaces, are how different apps talk to each other. With an API, one app can request or send data or instructions to another. So if you’re building a tool to automatically mute any accounts that have no profile picture and @ mention a user, you’d use the API to:
- fetch the posts that @ mention the user,
- look up each mentioning account’s profile to check whether it has a profile picture, and
- send a mute instruction for the accounts that don’t.
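To make those steps concrete, here is a minimal sketch of that mute tool in Python. Everything platform-specific in it (the base URL, endpoint paths, and response fields) is a hypothetical stand-in for whatever an authorized-agent API would actually expose, not a real platform SDK.

```python
# Sketch of the mute tool described above, against a hypothetical
# authorized-agent API. Endpoints and field names are illustrative assumptions.
import requests

API_BASE = "https://api.example-platform.com/v1"  # hypothetical base URL


def mute_faceless_mentions(user_id: str, token: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}

    # 1. Fetch recent posts that @ mention the user -- only content the user
    #    could already see in their own mentions tab.
    mentions = requests.get(
        f"{API_BASE}/users/{user_id}/mentions", headers=headers, timeout=10
    ).json()["posts"]

    for post in mentions:
        author_id = post["author_id"]

        # 2. Look up the mentioning account's public profile.
        profile = requests.get(
            f"{API_BASE}/users/{author_id}", headers=headers, timeout=10
        ).json()

        # 3. If the account has no profile picture, mute it on the user's behalf.
        if not profile.get("profile_image_url"):
            requests.post(
                f"{API_BASE}/users/{user_id}/mutes",
                json={"target_id": author_id},
                headers=headers,
                timeout=10,
            )
```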
APIs are general purpose tools, and they get used in a wide range of ways. So when we talk about API access, it’s critical to go a step deeper and describe which access we mean. Too much access, and abuse becomes trivial; too little, and an authorized agent can’t adequately protect the user it serves. So what do you actually need in order to act as an authorized agent?
All the info the consumer can access (that they allow you to see)
This includes anything that might show up in a user’s everyday usage of the application. It’s things like the username, description, and profile picture of accounts the user follows, or the content of public posts that mention a user. If the user could click on a profile or their timeline and see the information, their authorized agent should have access to it, too.
That also means that if a user doesn’t have access, their authorized agent should not either.
Let’s say someone you follow has quote-tweeted a viral Tweet to dunk on it — but you can’t see the original, just an error message stating that the original Tweeter limits who is able to see their content. Your authorized agent should not be able to bypass that error on your behalf. An authorized agent should only get access to information because the consumer has access to it.
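To illustrate that contract, here is a brief sketch assuming a hypothetical HTTP API: because the platform enforces visibility server-side, a request made with the agent’s user-scoped credentials fails exactly the way the user’s own view would.

```python
# Sketch: visibility is enforced server-side, so the agent gets the same
# restriction the user sees in-app. Endpoint and token are hypothetical.
import requests

resp = requests.get(
    "https://api.example-platform.com/v1/posts/12345",  # the quoted original
    headers={"Authorization": "Bearer <user-scoped agent token>"},
    timeout=10,
)

if resp.status_code == 403:
    # Same outcome as the user's own client: the original poster limits who
    # can view their content, and the agent gets no special bypass.
    print("Post not visible to this user; the agent sees the same restriction.")
```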
It’s important to note that this access could also include the contents of a DM, or the profile information of a person with a locked account who has accepted a follow request from our user. As the team at the Initiative for Digital Public Infrastructure put it recently,
“Some may argue that the people who appear in the user’s feed did not consent to their content being sent to a third-party client. We believe this situation is addressed by the idea of contextual privacy: by allowing someone to subscribe to your content, you implicitly trust them to handle your content with care. If they don’t, you can block them. When you allow someone to follow you on Instagram or Twitter, you can’t prevent them from screenshotting your posts. By allowing them to follow you, you’re signaling that you trust them with your content or don’t care what they do with your content. If they screenshot and share posts in a way that breaches your trust, you can and should block them.”
The caveat on all of this is that the consumer should choose affirmatively to give access to this information. Perhaps they don’t want their authorized agent to access their DMs, or information about accounts that have locked profiles, for example. They should be able to opt out of sharing that information, even if it limits what the authorized agent can do on their behalf.
Ability to take actions on behalf of the user (that they allow you to take)
Similarly, if a consumer can take an action, they should be able to delegate that action to their authorized agent. That could mean things like blocking, muting, updating privacy settings, filtering DM requests, untagging, deleting old posts, and more. Once again, authorized agents should be required to obtain specific permission to take these different actions on the behalf of the consumer.
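One way to picture this is a per-agent grant in which every data category and every action is an explicit, revocable opt-in. The sketch below is illustrative only; the scope names and structure are assumptions, not any platform’s actual permission model.

```python
# Sketch of a consumer's permission grant to an authorized agent.
# Scope names and structure are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class AgentGrant:
    agent_name: str
    # Data the agent may read, limited to what the user can already see.
    read_scopes: set[str] = field(default_factory=set)
    # Actions the agent may take on the user's behalf.
    action_scopes: set[str] = field(default_factory=set)

    def can_read(self, scope: str) -> bool:
        return scope in self.read_scopes

    def can_do(self, action: str) -> bool:
        return action in self.action_scopes


# The user grants mention access and muting, but withholds DMs and blocking.
grant = AgentGrant(
    agent_name="Block Party",
    read_scopes={"mentions", "public_profiles"},
    action_scopes={"mute"},
)

assert grant.can_do("mute")
assert not grant.can_read("direct_messages")  # DMs stay off-limits until opted in
```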
Note that this model of access doesn’t enable general purpose scraping of platform content, or require access to anything proprietary. While of course the platform could choose to make available proprietary information, such as their internal estimation of whether an account is engaging in coordinated inauthentic behavior, this needn’t be a requirement. Authorized agents don’t need to understand how the platform algorithm made its decisions about what content to show the user, for example. They don’t need access to the entire social graph. They just need what a user can actually see, and what actions they can personally take.
What does this mean for researcher access?
This model of API access is very different from the type researchers need in order to study platform behavior. Research access is important for transparency and accountability reasons, not in order to enable users to have more control over their experience. Where researchers require the ability to programmatically access large, anonymized datasets that offer a representative sample of activity and users across the whole network, authorized agents just need the footprint around a specific consumer.
What about bad actors or bad faith actions by the platforms?
A sufficiently motivated bad actor could use the access afforded to authorized agents to do all sorts of bad things — so what safeguards does this model provide? The short answer is that the narrowness of the authorization, combined with the opportunity for greater transparency and enforcement mechanisms at both the platform and regulatory level, provides more powerful guardrails than existed for previous incarnations of a third-party ecosystem.
If you’d like to dig into the specifics, check out our deep dive on this topic.
One might ask another question: what if the platforms use this as a way to ban totally reasonable safety applications?
The best defense against bad faith usage of the rules put in place to protect consumers is more prescriptive regulation around bannable offenses. The short list of legitimate reasons to deny applications might include issues like privacy abuse or national security concerns. Transparency, in turn, can make sure that this guidance is respected: platforms should publish their track record of banning authorized agents, including the timeline of each ban and the reasons it was applied.
Although it will require some investment from the platforms up front, ultimately introducing authorized agents is a solution that helps consumers and platforms alike. By providing a foundation for the ecosystem that is rooted in consumer choice, the system will recalibrate to better accommodate the specific and unique needs of each individual. Platforms will be able to devote their safety resources to handling obviously abhorrent Terms of Service violations instead of litigating ambiguous content moderation edge cases or trying to stay neutral amidst complex political battles over freedom of speech. And everyone will get to define their own boundaries for themselves.
The authorized agent approach isn’t just a nice-to-have, or a someday solution. In places like California, there is already a legislative mandate, and consumer pressure is mounting too.
Interested in supporting our effort to build this essential infrastructure for the internet? Policymakers would like to hear from you. We’re eager to connect with anyone who is passionate about this topic, but your perspective will be especially helpful to share with policymakers if you:
Get in touch: [email protected]