by Block Party
April 19, 2023
In our introduction to authorized agents, we argued that this model not only expands consumer control and choice in the social media ecosystem, but also offers stronger safeguards against bad actors. So how does that work in practice? The answer lies both in the unique restrictions on authorized agent behavior and in some critical updates to the enforcement mechanisms used to police the ecosystem.
Fundamentally, authorized agents are only allowed to take action at the explicit direction of users. That means they cannot aggregate data collected on behalf of multiple users unless those users have specifically told them to do so. This provides important guardrails while leaving room for authorized agents to offer useful alternatives to platform defaults. For example, an authorized agent might recommend content or accounts to follow using a different set of criteria than the platform does, built on information that a group of consumers has explicitly allowed the authorized agent to use for that specific purpose.
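To make that guardrail concrete, here is a minimal sketch in Python of how an authorized agent might gate cross-user aggregation on unanimous, purpose-specific opt-in. The names here (ConsentGrant, can_aggregate, the "alt_recommendations" purpose) are hypothetical illustrations, not an existing API:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentGrant:
    """One user's explicit, purpose-specific opt-ins to an authorized agent."""
    user_id: str
    purposes: set[str] = field(default_factory=set)

def can_aggregate(grants: dict[str, ConsentGrant], user_ids: list[str], purpose: str) -> bool:
    """Cross-user aggregation is allowed only if every affected user
    has explicitly opted in to this specific purpose."""
    return all(
        uid in grants and purpose in grants[uid].purposes
        for uid in user_ids
    )

# Alice opted in to alternative recommendations; Bob did not.
grants = {
    "alice": ConsentGrant("alice", {"alt_recommendations"}),
    "bob": ConsentGrant("bob"),
}
assert can_aggregate(grants, ["alice"], "alt_recommendations")
assert not can_aggregate(grants, ["alice", "bob"], "alt_recommendations")
```

The key design point is that consent is scoped to a named purpose, not granted wholesale: adding a user's data to a new use requires a new, explicit grant.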
It also means that authorized agents cannot sell consumer data without explicit permission. When combined with the first limitation, this precludes the attack vector used by Cambridge Analytica. (Under the guise of offering cute surveys, they requested substantially more information than necessary from the Facebook API, covering the entire social graph of their users; used that data to build psychographic profiles of millions of consumers without their knowledge; and then generated highly customized political advertisements based on those profiles.)
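A platform-side defense against that kind of over-collection is to tie each authorized agent's API scopes to its declared purpose, rejecting anything broader. The sketch below is illustrative only; the purpose names, scope strings, and validate_scopes function are hypothetical rather than any real platform's API:

```python
# Hypothetical allowlist: which API scopes each declared purpose may use.
PURPOSE_SCOPES = {
    "surveys": {"read:own_profile"},
    "alt_recommendations": {"read:own_profile", "read:own_follows"},
}

def validate_scopes(purpose: str, requested: set[str]) -> None:
    """Reject any API scopes beyond the allowlist for the agent's declared purpose."""
    allowed = PURPOSE_SCOPES.get(purpose, set())
    excess = requested - allowed
    if excess:
        # An over-broad, graph-wide pull (e.g. a "read:friends_profiles"
        # scope) would be refused here rather than quietly granted.
        raise PermissionError(f"scopes {sorted(excess)} exceed declared purpose {purpose!r}")

validate_scopes("surveys", {"read:own_profile"})         # permitted
# validate_scopes("surveys", {"read:friends_profiles"})  # raises PermissionError
```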
This should go without saying, but to be clear: use of an authorized agent also does not allow a consumer to break the platform's Terms of Service or any laws. A consumer should not, for example, be able to direct an authorized agent to programmatically spam other consumers. Any attempts at such activity should be stopped by the platform, per its existing rules.
In the case of Cambridge Analytica, auditing and enforcement of the rules for third-party apps were minimal, which is how the scheme was able to continue for so long without detection. Clearly, in order to meaningfully curtail this type of behavior, we'll need more robust options for consumers and Attorneys General.
First, platforms need to explicitly disallow these types of behavior in their Terms of Service, and make violation of these requirements grounds for an authorized agent to be banned from a platform. To make this safeguard meaningful, however, platforms need to implement stricter screening processes for authorized agents and adequately resource the teams tasked with overseeing the ecosystem. Fortunately, the scope of this issue (reviewing developer applications) is substantially smaller than, say, content moderation for an entire platform, and it's a problem the industry has proven capable of handling. Many companies, Apple and Salesforce included, have developed robust third-party ecosystems with mechanisms for filtering out bad actors. There's a playbook to follow here.
Finally, enforcement by platforms must be augmented by meaningful legal recourse for consumers, backed up in the US by state Attorneys General empowered to investigate and hold accountable both third parties acting in bad faith and platforms that fail to detect or remove them. In California, where the right to an authorized agent is already established, authorized agents cannot by law use collected data other than to fulfill consumer requests, for verification purposes, or for fraud prevention. Data usage that a consumer has not explicitly requested is a violation of California privacy law, and consumers should be able to sue when it happens. State Attorneys General could likewise investigate and ultimately prosecute bad-faith actors who violate their agreements with consumers in this manner.
There are myriad examples within the tech industry of vibrant ecosystems that not only enable consumer choice, but also deepen the value of the platforms themselves. Social media companies have embraced this model for their advertiser customers; the specter of historical privacy missteps should not preclude this option for their consumer users. And with the right partnership between regulators and platforms, safety and choice need no longer pose an unacceptable tradeoff.
Want to join the fight to bring authorized agents to the social media ecosystem? Drop us a line at [email protected]. Policymakers would like to hear from you.