The Social Stack for New Social Contracts

By John Henry Clippinger and David Bollier

The emergence of a new “social layer” of services, software and social norms on the Internet – the Social Stack – has enormous, far-reaching promise for the design of new kinds of social and economic institutions.

Just as the Industrial Revolution redefined our relationship to Nature and to one another, so too the New Sciences and open network platforms are enabling diverse peoples from around the world to invent new digital/physical hybrid expressions of themselves.  The new social networks, communities, movements, enterprises and institutions now arising are fundamentally changing how the human species is shaping the world.  They are also giving rise to new forms of bottom-up human agency, social coordination and innovation.  In its largest sense, this shift to open platforms is producing new forms of flexible, self-defining and self-organizing societies.  We have entered an era in which the coordination costs for collaboration are nearing zero and unforeseen opportunities for collective action at enormous scales are now feasible.

Seen through the lens of evolutionary biology, the emerging Social Stack is a significant evolutionary threshold. It is introducing new “fitness functions” for the evolution of human norms, behaviors and institutions.  This in turn is opening up new “solution spaces” for solving intractable problems at all levels of society.  Design innovation has reached a point where it is now entirely practical to design and build powerful new sorts of global social ecosystems.

These new digital environments are capable of organizing social trust and collaboration in highly constructive ways.  They can unlock knowledge and opportunity in previously unimaginable ways.  Yet because these very technologies come freighted with worrisome surveillance and data-mining powers, they also require vigilance against an alternative dark, dystopian future.

ID^3 believes that an integrated Social Stack could have historic implications for governance, economics, institutional structures, social organization and much else.  It would enable people to develop new sets of norms as networks or communities on the Internet, providing them with powerful, practical new tools to manage community production and exchange.  This could radically (if indirectly) affect all levels of society.

The Social Stack:  A New Approach to Governance and Institutional Design

What exactly is the Social Stack?  It consists of five layers of social technologies, each of which deals with distinct challenges in securing and sharing personal private data in controlled digital contexts.  The five layers deal respectively with Core Identity; Identity Management and Authentication; Trust Frameworks; Core Services; and Applications. 

The purpose of the Social Stack is to help establish distributed systems to manage personal identity on open platforms.  Together, the five layers can enable trustworthy forms of collaboration, exchange and governance of resources.  These technologies are already in active development and are fitfully coalescing into a more integrated, open software platform.  It is a process that ID^3 is actively facilitating in a number of demonstration projects.

The basic goal of the Social Stack is to enable people to develop trusted online social and commercial relationships that can persist and scale.  This capacity depends upon people being able to control their own personal information.  They must also be able to efficiently authenticate other people’s identities based on self-selected criteria for mutual association, trust and risk.

If equipped with the proper tools, distributed networks and groups could allocate their resources and privileges among their participant-members as they see fit.  The Social Stack would enable sustainable, bottom-up forms of governance to take root and grow.  The system could be used to advance commerce, civic engagement, social purposes or non-market provisioning.

In this sense, the Social Stack has sweeping implications for political governance in both theoretical and practical terms.  It could transform the role of the State, by empowering citizens to devise new forms of self-actualized institutions that exhibit greater social legitimacy, efficacy and adaptability than governments.  As a technical and political matter, the Social Stack would not consist of a single, monolithic set of protocols and software systems, but rather an evolving plurality of approaches animated by users themselves.  It would also be completely decentralized and open source, and so the platform could not be “captured” by any single player or group and would always be capable of evolving and innovating.

Ever since Hobbes proposed the State as the only viable alternative to the dread state of nature, citizens have entered into a notional “social contract” with “the Leviathan” to protect their safety and basic rights.  But what if networked technologies using the Social Stack could enable individuals to negotiate a very different sort of social contract (or contracts)?  What if digital systems enabled people to band together into quasi-autonomous governance units for mutual protection and provisioning without resorting to government while reaping superior forms of services and protection?

There is good reason to believe that the Social Stack could indeed help people overcome classic collective-action problems such as poorly crafted rules, inadequate enforcement, weak sanctions, etc.  Instead of having to look to the State, it is possible to imagine alternative governance institutions that could be more effective, at least for many important commercial, civic and social needs.  People could be empowered to control their personal information and identities, and to develop bonds of social trust and reputation in stable, enduring online communities.

Equipped with these capacities, people could feasibly undertake cooperative endeavors and commerce at scales and intensities previously impossible.  Anyone could theoretically use the Social Stack to become an authoritative, secure and seamless guarantor of identity on global networks.  They could develop sophisticated, value-based ecosystems animated by their own bottom-up interests (“pull”) rather than by top-down, seller-oriented power and marketing (“push”).

The new institutional forms in networked ecosystems would generally be more efficient, flexible and responsive than their conventional equivalents.  They would open the door for greater experimentation and the evolution of new types of institutional systems and social practices that express the collective will of a group.  Like the diversity of a gene pool, the Social Stack would help catalyze innovation and practical improvements in institutional behavior.

How would the Social Stack work in technical, operational terms?  Let us now review its five layers to show how they would enable the democratization of identity management and authentication and enable an explosion in institutional innovation and effectiveness.

1.  Core Identity:  Sovereignty Over Individual Identities

The first, most important element of the Social Stack is designing the architecture for protecting individual and group identity.  The essential question is, Who or what authority has the right to assert to a third party who I am?  That is, who gets to determine the nature of my relationships to others and my eligibility for rights and duties associated with belonging to a larger collective?  Who is the guarantor of my “Core Identity”?

If individuals were to claim the right to manage their own identities, the first objection might be:  Why should anyone believe it?  Doesn’t my identity need to be backed by “an authority” in order for other people to “trust” it as “authentic”?

The problem is that the State’s power to act as authenticator is subject to abuse.  There is always a question of how far we can trust its authority.  That is why the State should be entitled to only the minimum amount of information it needs to make a specific determination. That is why National Identity Cards and other “universal identifiers” are so problematic; they can collect and share far more information than is needed for a specific, legitimate purpose (such as determining my entitlements to public schooling, legal rights and personal protections).  Significantly, it is now possible (using sophisticated algorithmic means) to make specific determinations without necessarily acquiring actual information from another party.
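The idea of making a determination without acquiring the underlying information can be sketched in code.  The following is our own minimal illustration, not ID^3's implementation: an issuer attests only to a derived claim ("over 18: yes"), and a relying party verifies that attestation without ever seeing the birthdate.  Real systems would use asymmetric signatures or zero-knowledge proofs; here an HMAC stands in for the issuer's signature.

```python
import hmac
import hashlib

# Hypothetical sketch: the issuer signs a *derived* claim rather than
# releasing the source data. (HMAC is a stand-in; a real deployment would
# use an asymmetric signature so verifiers need only a public key.)
ISSUER_KEY = b"issuer-secret-key"  # held by the attesting authority

def attest(claim: str, value: bool) -> bytes:
    """Issuer signs a minimal, derived claim -- not the raw birthdate."""
    msg = f"{claim}={value}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()

def verify(claim: str, value: bool, tag: bytes) -> bool:
    """Relying party checks the attestation; it never learns the birthdate."""
    expected = hmac.new(ISSUER_KEY, f"{claim}={value}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# The issuer computed this from the birthdate; the verifier sees only yes/no.
tag = attest("over_18", True)
assert verify("over_18", True, tag)
assert not verify("over_18", False, tag)
```

The design point is minimal disclosure: the relying party learns the answer to one specific question and nothing more.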

The implied power of governments to act as the sole identity provider for all aspects of our identities might have made sense in earlier times.  But in an era when so much of everyday life is now recorded in digital data, and such information is aggregated into vast databases that can be used for continuous surveillance, relying upon governments as a sole authenticator of identity represents a profound danger.  The concentration of “issuing authority” to a few, privileged institutions, whether public or private, inevitably leads to new concentrations of power and to Orwellian intrusions into every aspect of one’s life.

The most secure way to protect our digital identities is for people to have multiple “personas,” each of which has its own criteria and certificate for qualification.  The basic idea is to deliberately Balkanize our digital personas among diverse identity providers as a powerful way to protect the integrity of our personal information.

Using digital technologies, it is now entirely feasible to rely upon a combination of biological, behavioral and knowledge-based “signatures” and encrypted distributed algorithms to authenticate each of us as a unique biological/physical person and to blindly assign an access privilege – a process already used by the BitTorrent file-distribution software and by “crypto-currencies” such as Bitcoin.

The means to establish our biophysical identities do not require state authentication; we contend they can now be algorithmically established with extremely high levels of reliability.  The identity signature or signatures generated by algorithmic processes could be encrypted and stored in a “global commons cloud” accessible only to individuals themselves.  Think of this algorithmic authentication as a really rigorous birth certificate that could then be relied upon to certify other identities or personas that people would use for the purposes of citizenship, healthcare, social media, finance, etc.  Each such identity would have its own criteria and scope of authentication – but in each case only the individual would know that all these different identities resolve to a single biophysical entity, the “Core Identity.”
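One way to picture this relationship between a Core Identity and its personas is deterministic key derivation.  The sketch below is illustrative only (the names and scheme are ours): a single root secret, held only by the individual, derives a distinct persona identifier per context.  Without the root secret the personas are computationally unlinkable; with it, only the holder can demonstrate that they all resolve to one person.

```python
import hmac
import hashlib

# Assumed: the root secret is generated from the biological/behavioral
# "signatures" described above and never leaves the individual's control.
root_secret = b"core-identity-root-secret"

def persona_id(context: str) -> str:
    """Derive a context-specific persona identifier from the root secret."""
    return hmac.new(root_secret, context.encode(), hashlib.sha256).hexdigest()[:16]

healthcare = persona_id("healthcare")
finance = persona_id("finance")
social = persona_id("social-media")

assert len({healthcare, finance, social}) == 3   # distinct per context
assert persona_id("healthcare") == healthcare    # stable for re-authentication
```

Because derivation is one-way, an identity provider in one context (say, healthcare) cannot correlate its persona with the one used for finance, which is precisely the deliberate "Balkanization" described above.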

The point is that individuals need not surrender control over their Core Identities as part of their social contract with the State.  It is now possible for one’s root identity, the record of one’s biological existence (what some people awkwardly call the “data subject”), to remain always within the individual’s control.  This could help prevent government identity systems from becoming a too-convenient instrument for tracking other data streams about us.

2.  Open Identity Management and Authentication

When the Internet was developed by DARPA over 40 years ago, little thought was given to rigorous authentication processes.  Usernames and passwords seemed to work, and later certificates, PKI and other security measures were introduced.  But authentication was not designed in.  Hence we are stuck with weak security and the ungainly process of password controls and relentless logins.  The problem was recognized early on, and yet there has been no systemic solution.

At present, the business of authentication is reserved for only highly accredited “identity providers” such as financial service vendors, governments, credit bureaus, and the like. Moreover, the information required to complete an identity check is hard to come by; hence the process of providing high-level identity and “claims” checks is time-consuming and expensive.

But this is changing.  A slew of new encryption technologies – distributed storage and processing; cloaking that can defeat unwanted linking and “sniffing”; zero-knowledge proof techniques; perishable IDs and passwords; data mining of mobile data to develop unique identity and behavioral signatures; and more – are making security and authentication technologies much stronger, cheaper and more ubiquitous.  Using such technologies, it is possible to design identity systems that would allow virtually anyone anywhere to become an identity provider, and thereby to dynamically allocate and revoke identity tokens and privileges.  The unquestioned need for centralized institutions to act as authorizing sources and identity providers is rapidly disappearing.
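The notion of dynamically allocated and revoked identity tokens can be made concrete with a toy issuer.  This sketch is our own construction, not a Social Stack API: any party can run an issuer, tokens carry an expiry, and privileges can be withdrawn at any moment rather than persisting as static credentials.

```python
import time
import secrets

class TokenIssuer:
    """Toy identity-token issuer: anyone can run one; grants are dynamic."""

    def __init__(self):
        self._tokens = {}    # token -> (subject, privilege, expiry time)
        self._revoked = set()

    def issue(self, subject: str, privilege: str, ttl_seconds: float) -> str:
        token = secrets.token_hex(16)
        self._tokens[token] = (subject, privilege, time.time() + ttl_seconds)
        return token

    def revoke(self, token: str) -> None:
        self._revoked.add(token)

    def check(self, token: str, privilege: str) -> bool:
        if token in self._revoked or token not in self._tokens:
            return False
        _, granted, expiry = self._tokens[token]
        return granted == privilege and time.time() < expiry

issuer = TokenIssuer()
t = issuer.issue("alice", "read:records", ttl_seconds=60)
assert issuer.check(t, "read:records")   # privilege currently granted
issuer.revoke(t)                         # privilege withdrawn dynamically
assert not issuer.check(t, "read:records")
```

The contrast with static credentials is the point: a grant here is fine-grained, time-bounded, and retractable without appealing to any central authority.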

While a future of dynamically allocated and revoked rights and privileges may seem unsettling and chaotic to some, such a future need not be chaotic if the metrics for trust and risk reduction were truly effective and robust. Rather than being hampered by credentials that are static, coarse-grained, inappropriate and out-of-date, self-authenticating networks make it possible to constantly adjust credentials and metrics to provide the greater “social liquidity” of data at the lowest risk. In doing so, credentialing systems can be made less uncertain and arbitrary, and hence, more responsive and trustworthy, than traditional hierarchical credentialing institutions.  Given the failure of current institutions, this kind of social technology could play an enormous role in implementing new social contracts that are more responsive, flexible and socially respected.  If institutions are going to adapt to the complexities of the 21st century and beyond, these sorts of features will be very much needed.

3.  Trust Frameworks:  Tools for Building New Social Contracts and Digital Institutions

The two layers of the Social Stack that we have just described – Core Identity and Identity Management and Authentication – provide the technical predicate for the more complex task of enabling new forms of group coordination and community-building.  This is the Trust Framework, a social/technical system for creating new sorts of consensual social contracts among people.

The essential point of a Trust Framework is to help citizens and consumers reassert greater control over their online transactions and prevent abuses of their personal data.  Yet the Trust Framework still enables personal data to circulate and be used, and indeed, to stimulate innovations in technology and business models.  The goal is not to lock up or destroy personal data but to let it flow and generate value.  But the Trust Framework is also designed to let any data flows and value creation occur in ways that are truly accountable to the will and interests of the people generating that data.

To the conventional mindset, this appears to be nonsensical:  Let data circulate but also keep it private?!  But that is precisely what a Trust Framework is intended to do.  It is not simply software but a combination of legal agreements and mechanisms for a given social network to define, express, and enforce the rights and duties of its members.  It is in effect a method for designing, operationalizing and testing new kinds of social contracts.

While the idea of trust may seem elusive, it can in fact be defined with some rigor.  Trust is the expectation that someone or something is what he, she or it purports to be, and acts and performs accordingly. Trust can be verifiable in that all representations about identity – and claims about the attributes of a person or artifact – can be tested and attested to. This is in large part what user logins and passwords are intended to do.  The problem is that representations of identity can break down, become dated, or be compromised or hijacked altogether.  There must be some oversight processes, rules and mechanisms to ensure that expected standards of trust are actually sustained and enforced.

In the physical world there are rules, audits and oversight bodies that act to ensure that tacit and explicit contracts of trust are adhered to.  This is typically accomplished through the threat of litigation and massive liability exposures, or through oversight bodies that ensure compliance with general policies and adjudicate disputes. One problem with this approach is that such bodies can be easily compromised and become captive to special interests. Another is that they are often too slow, ill-informed and unresponsive to sustain real trust.

Reliance on government sanctions or formal legal systems to enforce a standard of conduct invites “gaming the system” to see what a company or individual can get away with.  If the upside gains are attractive enough, it can be entirely rational to cut corners, make disingenuous feints and in other ways skirt the law.  This is inevitable when compliance is enforced by an external, formal authority.

The Trust Framework seeks to remedy these deficiencies by internalizing accountability and compliance mechanisms into the process of governance itself.  That is, feedback and control mechanisms are built into the trust network in order to anticipate failures in privacy, security, performance, and to respond as necessary with corrective measures.  In this fashion, the system can mitigate risks based on evolving metrics and measurements.  Rather than seeing institutional failure as a kind of moral or statutory flaw requiring external sanctions, legislation or litigation to rectify, the Trust Framework seeks to use network systems to continuously learn from and correct failures. There is no effort to externalize and offload risks onto others, or to look to outside legal authorities such as government or the court system to enforce rules and norms.

A Trust Framework, then, contains its own internal mechanisms to anticipate and prevent breaches of privacy, security and performance.  It is designed to ensure open feedback from community members and to administer punishments through consensually devised mechanisms.  In this new scenario, trust is not established by the enactment of another law or the presence of a regulatory enforcement system.  Trust is established only if members of a community believe in the legitimacy of the system and experience its efficacy in implementing community norms.
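The internalized feedback loop described above can be sketched in miniature.  The following is our own illustrative construction, with the threshold as an assumed community-agreed parameter: members report transaction outcomes, the framework updates a compliance score, and crossing the threshold triggers corrective measures from within the community rather than from an external regulator.

```python
# Assumed parameter: a consensually devised compliance threshold below
# which the community applies corrective measures.
CORRECTIVE_THRESHOLD = 0.8

class TrustLedger:
    """Toy internalized feedback mechanism for a Trust Framework."""

    def __init__(self):
        self.outcomes = {}  # member -> list of True/False compliance reports

    def report(self, member: str, compliant: bool) -> None:
        """Record community feedback on one transaction."""
        self.outcomes.setdefault(member, []).append(compliant)

    def score(self, member: str) -> float:
        """Fraction of reported transactions that were compliant."""
        reports = self.outcomes.get(member, [])
        return sum(reports) / len(reports) if reports else 1.0

    def needs_correction(self, member: str) -> bool:
        """Whether the community's corrective measures should kick in."""
        return self.score(member) < CORRECTIVE_THRESHOLD

ledger = TrustLedger()
for ok in [True, True, False, False, False]:
    ledger.report("vendor-x", ok)

assert ledger.score("vendor-x") == 0.4
assert ledger.needs_correction("vendor-x")  # community sanction, not a lawsuit
```

However simplified, the sketch captures the structural shift: compliance is a continuously measured property of the network, not an occasional verdict from an outside authority.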

4.  Core Utility – Public Good Services

One of the challenges for the designer of an operating system is to determine what should be the core utilities and what should be the applications.  History shows that over time more and more applications are folded into or “bundled” into the proprietary operating system as utilities, thereby reinforcing the monopoly power of the operating system.

If the operating system or platform is open, the challenge then is which services should be kept open source, which should be considered proprietary and which should permit monetization. These are important strategic design considerations that can affect the success and growth of an ecosystem.  As many services become stable, ubiquitous and inexpensive, technological innovation will start to turn them into commodities.  This process needs to be supported so that older services can be treated as core utility services – and so new proprietary innovations can shift toward providing new value-added services, and not just profiting from monopoly control over yesterday’s innovations.

The commoditization of services is already happening at a breathtaking speed with the “open sourcing” of basic cloud services software.  It is also affecting software for analytics, registration, machine learning, security and even hardware.  Many of these infrastructure services have limited value in and of themselves and derive value only through their operational roles in a larger ecosystem of services.  Certainly identity and authorization services fit this description, and therefore should be treated as an open, public good.  As the costs of open authentication and security services decline, payment services, market-discovery and market-making services, and exchanges are also becoming core services.

5.  Trusted Data-Driven Applications

There is no reason that a business model could not be based on the protocols and tools of the Social Stack.  We see the bulk of rapid innovation and revenue being generated by high value data-driven apps.  Such services would be registered into the Trust Framework and pay a percentage to it; the precise fee could be set at the discretion of the particular trust framework. By allowing “permissioned and secure” data mining of people’s personal data — financial, purchasing, health, locational, recreational, travel and other information – highly predictive models could be built that benefit not only the individual, but also vendors of services and digital institutions such as schools and healthcare.

Importantly, individuals would be able to use discovery services to find and organize others like themselves to create their own social, cooperative forms of buying. This capability could be in the form of “reverse auctions” or request for proposals (RFPs) and most certainly would mark a significant part of the “big shift” envisioned by John Seely Brown and John Hagel in the transition to an “edge-based, pull economy.”
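The "reverse auction" pattern just mentioned is simple to sketch.  This is a hedged illustration of the general mechanism, not any particular discovery service, and all names are ours: a group pools a request, competing suppliers submit bids, and the lowest qualifying bid wins, so that demand "pulls" offers rather than sellers "pushing" marketing.

```python
def reverse_auction(request: str, bids: dict[str, float]) -> tuple[str, float]:
    """Return the supplier offering the lowest price for the pooled request."""
    if not bids:
        raise ValueError(f"no bids received for: {request}")
    supplier = min(bids, key=bids.get)
    return supplier, bids[supplier]

# A hypothetical cooperative buying group pools one request...
pooled_request = "100 units of home solar installation"
# ...and competing suppliers respond with bids.
bids = {"supplier-a": 9800.0, "supplier-b": 9200.0, "supplier-c": 9500.0}

winner, price = reverse_auction(pooled_request, bids)
assert winner == "supplier-b" and price == 9200.0
```

In a full system the group's request and the suppliers' bids would of course flow through the Trust Framework, so that both sides transact under mutually agreed terms for data use.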

In such an economy there really is no need for advertisers and data brokers in the traditional sense because companies could acquire, aggregate and analyze data much more efficiently through trust frameworks.  Companies could use highly targeted marketing and “permissioned offers,” and consumers could initiate and consummate their own exchanges with a variety of competing suppliers. When open APIs are integrated with “trust wrappers” – modular, client-side apps that work with Trust Frameworks – it would signal the beginning of a new sort of “Open API Economy.”  Vendors and consumers alike would be able to acquire and mash up data, and to make or screen offers according to highly specific consumer criteria. In collaboration with Scott David of the University of Washington Law School, we have developed a draft legal framework for the Social Stack. We expect that such a legal framework could act as an evolving model for other frameworks consistent with the “safe harbor” provisions advocated by the Obama Administration in its various consumer data protection and trust framework policies.

Looking to the Future

It is too early to predict how the proliferation of Trust Frameworks on the Internet will affect existing institutional forms, especially the market and state as now constituted.  But it seems likely that the Social Stack would provide a versatile infrastructure for more efficient, effective, distributed governance of all sorts of resources. To the extent that the Social Stack would help address some of the deep, structural failures in institutional design and governance, we could anticipate new disruptions – but also a powerful outpouring of new streams of engagement, creativity and social and civic reconstruction.

© ID3, 2012, licensed under a Creative Commons Attribution-ShareAlike 3.0 license.