This piece originally appeared in Tech Policy Press.
As the case for social media “middleware” continues to gain support among scholars, policymakers, and developers, as a way to increase “user control over what information is received” and thus improve the quality of online discourse while protecting freedom of expression, our understanding of the related concerns, and of how to overcome them, has also advanced. A recent article in Tech Policy Press by the Initiative for Digital Public Infrastructure’s Chand Rajendra-Nicolucci and Ethan Zuckerman builds on Cornell scholar Helen Nissenbaum’s argument that privacy is not the simple binary of personal ownership that many presume, but depends on context. That insight helps cut through the concern, raised by the Stanford Cyber Policy Center’s Daphne Keller in 2021, that such agents could sacrifice privacy. “Contextual integrity” is the idea that privacy is a nuanced matter of social norms that govern, in specific contexts, just what should be shared with whom, and for what uses.
To complement and expand on Rajendra-Nicolucci and Zuckerman’s article, I draw attention to further insights Keller offered later that year, and to a solution architecture I proposed in response. Those comments and suggestions add a layer of architectural structure to managing privacy issues in context. The core idea is that social media data and metadata are best managed through an architecture that is highly protective of contextual privacy because it breaks the problem into multiple levels.
Most discussion of middleware considers only a single level of services. An open market for “attention agent” middleware must offer wide diversity, and so must be open to lightweight service providers; that makes it potentially hard to ensure that privacy is protected. But adding a second, more tightly controlled data intermediary layer between the platform service and the attention agent service can enforce tighter control of privacy. A body of work on data intermediaries, cooperatives, and fiduciaries supports such a strategy. Tightly controlled data intermediaries can support lightweight, less tightly controlled attention intermediaries by limiting how data is released, or by requiring that algorithms be sent “to” the data rather than the other way around. Such data intermediaries could also help limit abuses of personal data by the platforms themselves.
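To make the two-layer pattern concrete, the following is a minimal sketch, not any party’s actual implementation, of what “sending the algorithm to the data” could look like. All class and function names here (`DataIntermediary`, `prefers_short_posts`, and so on) are hypothetical: the attention agent supplies only a scoring function, and the data intermediary applies it internally, returning ranked item IDs rather than raw user data.

```python
# Hypothetical sketch: a tightly controlled data intermediary runs an
# attention agent's ranking algorithm "at" the data, so the sensitive
# content itself never leaves the intermediary layer.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Item:
    item_id: str
    text: str     # sensitive content; stays inside the intermediary
    author: str

class DataIntermediary:
    """Holds user data on behalf of the platform; releases only results."""

    def __init__(self, items: list[Item]):
        self._items = items  # private: not exposed to attention agents

    def rank(self, score: Callable[[Item], float], limit: int = 10) -> list[str]:
        # The attention agent's algorithm runs here, inside the
        # intermediary; only the ordered item IDs are returned.
        ranked = sorted(self._items, key=score, reverse=True)
        return [it.item_id for it in ranked[:limit]]

# A lightweight attention agent's algorithm, sent "to" the data:
def prefers_short_posts(item: Item) -> float:
    return -len(item.text)

intermediary = DataIntermediary([
    Item("a", "A very long post about many things indeed", "alice"),
    Item("b", "Short post", "bob"),
])
print(intermediary.rank(prefers_short_posts))  # -> ['b', 'a']
```

The design choice this illustrates is the separation of duties: the intermediary can audit or sandbox the submitted function and rate-limit results, so the attention agent market can stay open and lightweight without each agent ever holding users’ raw data.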