
Why Privacy Matters

If you’re passionate about democracy, you’re passionate about privacy. We all must become active privacy advocates with some urgency or risk losing real democracy forever.

Sep 9, 2024 • 29 mins

Author: Angus Mackay

TL;DR

I picked up Neil Richards’s book Why Privacy Matters because I wanted to learn more about privacy: its past, present and future. I brought to the endeavour a bunch of preconceived notions about privacy that had been formed through exposure to popular media. None of us have time to be experts on many things, so we all fall into this trap. My favourite was that “you should only be concerned about your privacy if you’ve done something wrong or you have something to hide”. I cringe now when writing that. If you don’t have time to read the book and democracy is absolutely a non-negotiable in your life, this article is for you. Otherwise, it’s a great read that I can’t recommend highly enough.

We’re living in a time where rapid advances are being made in AI seemingly every other month, and in some cases faster. Like so many groundbreaking technologies that have come before it, AI has the potential to be humanity’s saviour and its undoing. We’re seeing devices now like Meta’s smart glasses, Humane’s lapel pin and Rewind’s pendant (before a name change) that see, hear and record everything we do in our lives. Whilst this first generation of products is likely to come and go, they undoubtedly set the tone for things to come. The AI agents behind these devices will soon be pervasive, and they’ll have the potential to nudge, cajole and dictate our most important actions. Once we cross that nexus, the idea of democracy will remain just that, and any plausible opportunity to change our fate will already be lost.

How to Think About Privacy

What Privacy Is

There are many definitions of privacy. Privacy scholar Daniel Solove puts it well when he argues that “privacy is a concept in disarray”. Solove set out to write a book defining privacy but gave up after a vast amount of research, explaining frankly:

“After delving into the question I was humbled by it. I could not reach a satisfactory answer. This struggle ultimately made me realise that privacy is a plurality of different things and that the quest for a single essence of privacy leads to a dead end. There is no overarching conception of privacy — it must be mapped like terrain, by painstakingly studying the landscape”.

The good news is that we don’t actually need a precise, universally accepted definition of privacy. Lots of things we care about lack precise definitions, but we still manage to protect them anyway, like equality and freedom of expression. So we do at least have a place to start:

Privacy is the degree to which human information is neither known nor used.

In the US, the Video Privacy Protection Act of 1988 refers to human information as “personally identifiable information” whereas Europe’s GDPR applies to “personal data” which it defines as “any information relating to an identified or identifiable natural person.” The drawback of using terms like “data”, “personal data” and “users” is it distances us from what’s really at stake in discussions on privacy – human beings.

All too often, popular and legal conversations about privacy stop the moment our human information is collected. Solove has termed this the “secrecy paradigm” — the idea that privacy is only about keeping things hidden, and that information exposed to another person ceases to be private. But information shared in confidence with one person does not thereby become public to everyone. This is why our law has imposed duties of confidentiality — in some cases for centuries — on a wide variety of actors, including doctors, lawyers, accountants, and those who agree to be bound by a contract of nondisclosure. It’s why the privacy theorist Helen Nissenbaum argues:

“when we think about privacy, it’s best to focus on whether the flow of information is appropriate in a particular context.”

Privacy is a continuum rather than a binary on/off state. Unfortunately American law does not always reflect this commonsense understanding of how human information circulates.

A Theory of Privacy as Rules

In 2002, Target Corporation discovered a host of reliable lead indicators, drawn from buying behaviour and other data it had purchased, that told it when a customer was pregnant, even if she didn’t want Target to know. The aggregated insights led to a “pregnancy prediction” score that even allowed Target to guess each woman’s due date with surprising precision. It’s a good example of how big data can be used to extract surprising insights from seemingly innocuous information, and of how “creepy” that can be. The real lesson here, though, is the power those insights confer to control human behaviour.

Target wanted to reach expecting consumers before its competitors did, and to habituate them as Target shoppers before their buying preferences readjusted to new habits — with Target as one of those habits. Consumers who became aware of the practice responded negatively, with reactions ranging from feeling “queasy” to outright anger. Target’s response was to start mixing in ads for things it knew pregnant women would never buy, so the baby ads looked random. It found that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. It’s fair to say that Target has perfected its targeting far beyond what was possible more than 20 years ago. Just imagine what will be possible in the age of artificial intelligence. This illustrates why it’s helpful to think about privacy in terms of our inevitable need for human information rules that restrain power to promote human values.
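To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how seemingly innocuous purchase signals can be aggregated into a single “pregnancy prediction” score. The product categories, weights, baseline and threshold are all invented for illustration; this is a sketch of the general technique, not Target’s actual model.

```python
import math

# Hypothetical weights: each innocuous product category contributes a little
# evidence towards the outcome. None of these values are real.
SIGNAL_WEIGHTS = {
    "unscented_lotion": 1.2,
    "calcium_supplement": 0.9,
    "zinc_supplement": 0.7,
    "cotton_balls_bulk": 0.8,
    "large_tote_bag": 0.5,
}

BASELINE = -2.5   # assumed prior: most shoppers are not expecting
THRESHOLD = 0.5   # assumed cut-off for sending targeted offers


def pregnancy_score(basket):
    """Combine weighted purchase signals into a probability-like score between 0 and 1."""
    z = BASELINE + sum(w for item, w in SIGNAL_WEIGHTS.items() if item in basket)
    return 1.0 / (1.0 + math.exp(-z))   # logistic squashing


if __name__ == "__main__":
    shopper = {"unscented_lotion", "calcium_supplement", "cotton_balls_bulk", "milk"}
    score = pregnancy_score(shopper)
    action = "send baby offers (mixed with unrelated ads)" if score > THRESHOLD else "no action"
    print(f"score={score:.2f} -> {action}")
```

The point is not the specific maths: each signal is innocuous on its own, and it is the aggregation that confers the predictive, and therefore persuasive, power described above.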

My theory of privacy as rules has four elements:

  1. “Privacy” is fundamentally about power.
  2. Struggles over “privacy” are in reality struggles over the rules that constrain the power that human information confers.
  3. “Privacy” rules of some sort are inevitable.
  4. So “privacy” should be thought of in instrumental terms to promote human values.

Privacy & Power

Human information is powering algorithms in the service of established social, economic, and political power, often in connection with applied behavioural science or other forms of social, economic, or political control. Privacy is about power because information is power, and information gives you the power to control other people.

Julie Cohen has explained how digital platforms operating within what she calls “informational capitalism” treat human beings as raw material for the extraction of data, through a process of surveillance under which the subjects become conditioned as willing providers of that data. The Canadian legal theorist Lisa Austin has been even more blunt, arguing that a lack of privacy can be used by governments and businesses to enhance their power to influence and manipulate citizens and customers, whether by changing behaviour or by manufacturing “consent” to dense sets of terms and conditions or privacy policies.

The discipline of behavioural economics was popularised by Nudge (2008), the best-selling book by economist Richard Thaler and legal scholar Cass Sunstein. Their key insight was that human decisions can be influenced by the way choices are structured, because that structure plays on our cognitive biases. They helpfully coined the term “choice architecture” to indicate that the conditions under which we make choices affect those choices, and that those conditions are subject to manipulation by others. They optimistically hoped that choice architects would encourage human well-being, welfare and happiness on average, while maintaining the ability of choosers to opt out of choices they didn’t believe advanced their individual preferences over the alternatives.

A classic example of this shift was the Facebook game FarmVille. Launched in June 2009, Zynga’s social game put its players in control of a small virtual farm. At its peak FarmVille boasted 85 million players. Zynga created feedback and activity loops that made it hard for players to escape. As one journalist described it succinctly, “If you didn’t check in every day, your crops would wither and die; some players would set alarms so they wouldn’t forget. If you needed help, you could spend real money or send requests to your Facebook friends.” It was ultimately abandoned by players who came to realize that they were not the farmers but rather the crop itself — it was their own time, attention, and money being farmed by Zynga.

Zynga was a pioneer in embedding cognitive tricks in design to create a deliberately engaging (“addictive”) product. Other designers were watching and taking notes, and FarmVille’s cognitive tricks — a subset of what are known as “dark patterns” — spread throughout the internet. Indeed, as the New York Times reported on the occasion of the game finally shutting down on New Year’s Eve 2020, “FarmVille once took over Facebook. Now everything is FarmVille.”

Privacy as Rules

In the absence of comprehensive privacy rules to protect consumers, the American approach to privacy has been “sectoral.” Under this system, some types of information, like health, financial, and movie rental data, are covered by individual laws, but no general privacy law fills the holes and provides a baseline of protection like the GDPR does in Europe. The general rule of American privacy has been called “notice and choice.” Since the dawn of the internet, as long as tech companies give us “notice and choice” of their data practices, they are judged to be in compliance with the law. In practice, the “notice” consumers get is little more than vague terms hidden in the bewildering fine print of privacy policies. Moreover, virtually no one reads these privacy policies — a fact documented by a vast academic literature.

The “choice” half of “notice and choice” is an equally dangerous fiction, particularly when the “choice” we are being presented with is essentially the “choice” of whether or not to participate in the digital world. This terrible general rule has forced consumers into what Solove called “Privacy Self-Management.” This sets the defaults of human information collection in favour of powerful companies, and then causes consumers to feel guilty and morally culpable for their loss of privacy by failing to win a game that is intentionally rigged against them. We blame ourselves because the design that is nudging us in that direction is often invisible or seemingly apolitical. Notice and choice is thus an elaborate trap, and we are all caught in it.

It gets worse. Even good rules can be co-opted in practice, and institutions can be highly resourceful in watering down or even subverting rules intended to constrain them. Detailed sociological fieldwork by Ari Waldman has revealed the ways in which corporate structures and cultures, organisational design, and professional incentives are often deployed by companies to thwart both the intentions of privacy professionals on the ground as well as the spirit of the legal rules and privacy values those professionals attempt to advance.

To routinise surveillance, executives in the information industry use the weapons of coercive bureaucracies to control privacy discourse, law, and design. This works in two ways: it inculcates anti-privacy norms and practices from above and amplifies anti-privacy norms and practices from within. Tech companies inculcate corporate-friendly definitions of privacy. They undermine privacy law by recasting the laws’ requirements to suit their interests. And they constrain what designers can do, making it difficult for privacy to make inroads in design. As this happens, corporate-friendly discourses and practices become normalized as ordinary and common sense among information industry employees. This creates a system of power that is perpetuated by armies of workers who may earnestly think they’re doing some good, but remain blind to the ways their work serves surveillant ends.

Edward Snowden has explained at length how he witnessed other intelligence community personnel who spoke up internally about surveillance abuses face professional marginalization, harassment, and even legal consequences — convincing him that the only way to address those abuses was to go to the press with documentary evidence.

Cohen goes even further, explaining how powerful institutions are able to shape not just their own organizations but the basic structure of our political and legal language to capture rules and subordinate them to their own purposes through the ideology of neoliberalism. This can happen, for example, by reinterpreting human information as their own valuable property rights or advancing dubious interpretations of the First Amendment like “data is speech” to insulate their information processing from democratically generated state and federal laws that would rein them in.

These intertwined phenomena help to explain why there are so few effective American privacy rules at present, and why efforts to improve them have faced serious challenges in the political process, in the courts, and in practice:

  • Our law governing email privacy — the Electronic Communications Privacy Act (ECPA) — was passed in 1986, long before most people (including members of Congress) had even sent an email.
  • Our law governing computer hacking — the Computer Fraud and Abuse Act (CFAA) — was passed in 1986, building on a 1984 precursor statute.
  • Our law protecting movie-watching privacy, the Video Privacy Protection Act (VPPA), was passed in 1988.
  • The Health Insurance Portability and Accountability Act (HIPAA) of 1996 only covers health data from doctors, hospitals and insurance companies and none of the other participants in the modern health system.

The design of computer systems is a source of rules that constrain the users of those systems. Seda Gurses and Joris van Hoboken have shown how the software development kits that platforms allow designers to use to build products are themselves designed in ways that make the protection of privacy in practice very difficult, even where privacy protections are mandated by legal rules.

The Inevitability of Privacy Rules

Many still think of privacy as a binary option of “public” or “private,” when our everyday experiences remind us that virtually all information that matters exists in intermediate states between these two extremes. Much of the confusion about privacy law over the past few decades has come from the simplistic idea that privacy is a binary, on-or-off state and that once information is shared and consent given, it can no longer be protected.

The law has always protected private information in intermediate states, whether through confidentiality rules like the duties lawyers and doctors owe to clients and patients, evidentiary rules like the ones protecting marital communications, or statutory rules like the federal laws protecting health, financial, communications, and intellectual privacy.

Shared private data (and metadata) should not forfeit protection merely because they are held in intermediate states. Understanding that shared private information can remain confidential helps us see more clearly how to align our expectations of privacy with the rapidly growing secondary uses of big data.

“Privacy,” in this broader sense, becomes much more than just keeping secrets; it enters the realm of information governance. Privacy is about degrees of knowing and using, and as such it requires an ethical rather than a mathematical approach to the management of information flows.

In her history of privacy in modern America, Sarah Igo documents a variety of privacy struggles over the past century or so, in which some Americans sought to know more about other people, and in which the subjects of that scrutiny resisted those attempts to make them known. Igo convincingly argues that these episodes were invariably fights over social status and power in which privacy was the indispensable “mediator of modern social life.” As she puts it well, “Americans never all conceived of privacy in the same way, of course… What remained remarkably consistent, however, was their recourse to privacy as a way of arguing about their society and its pressures on the person.”

And the familiar fault lines of our society — race, class, wealth, gender, religion, and sexuality — were all too often the conduits of those struggles and all too often dictated their winners and losers. Privacy talk in America, then, has long been a conversation about social power, specifically the forms of power that the control and exploitation of human information confers.

Privacy Rules are Instrumental

There’s pretty good anthropological evidence that humans, like many animals, benefit from having private spaces and relationships, and that this benefit is an intrinsic good. European human rights law, as we’ve seen, also treats privacy this way. The European Union’s Charter of Fundamental Rights recognizes fundamental rights to “private and family life, home and communications” (Article 7) and to “the protection of personal data concerning [individuals]” (Article 8).

In my experience as well, because privacy is about power, people whose identity or circumstances depart from American society’s (socially constructed) default baseline of white, male, straight, rich, native-born Christians tend to be, on average, more receptive to the privacy-as-an-intrinsic-good argument.

If someone doesn’t believe that privacy is fundamental, pounding the table about its fundamentality is not going to be an effective way of changing their mind. Instead, I’ve found it’s necessary to go deeper than just privacy, to explain that privacy can matter not for its own sake but because it gets us other things that we can all agree are important.

Getting this argument right is particularly significant because international conversations about privacy rules frequently break down with the assertion (commonly made by Europeans) that privacy is a fundamental right that needs no further explanation. To many Americans used to thinking about personal information in economic terms, that argument is bewildering. But at the same time, even if all we care about is economics, some international understanding about what privacy is and why it matters is essential to the economic future of Western democracies.

What Privacy Isn’t

In a time of rapid technological and social change, it’s helpful to think about privacy in relatively broad terms, particularly given the importance of human information to those changes. This is why most of the leading scholarly definitions of privacy are relatively open-ended, like Daniel Solove’s sixteen different conceptions of privacy.

My own definition excludes other ways we could talk about privacy, such as its being a right to control our personal information or the ability to conceal disreputable information about ourselves.

Let’s clear up some misconceptions and myths about privacy. Four of these myths are particularly dangerous:

  • Privacy is about hiding dark secrets, and those with nothing to hide have nothing to fear.
  • Privacy is about creepy things that other people do with your data.
  • Privacy means being able to control how your data is used.
  • Privacy is dying.

Privacy isn’t about Hiding Dark Secrets

Everyone has something to hide, or at least everyone has facts about themselves that they don’t want shared, disclosed, or broadcast indiscriminately. The ability to separate ourselves from others is necessary for every member of society, and it’s necessary for society to function.

Unwanted disclosure of many kinds of information about ourselves can have deeply harmful consequences to our identity, to our livelihood, to our political freedom, and to our psychological integrity. The law’s intuitive and long-standing protection against blackmail shows that the ability to disclose secrets confers the kind of inappropriate power that the law needs to safeguard against.

Intellectual privacy is particularly important at the present moment in human history, when the acts of reading, thinking, and private communications are increasingly being mediated by computers. Human information allows control of human behavior by those who have the know-how to exploit it. And all of us can be nudged, influenced, manipulated, and exploited, regardless of how few dark secrets we might have.

The “nothing to hide” argument focuses narrowly on privacy as an individual matter rather than as a social value. Instead of recognizing privacy as a right, it treats privacy as a mere individual preference rather than something of broad value to society in general. Framing privacy in this way makes it seem both weak and suspicious from the start.

With the possible exception of our thoughts, very little of our information is known solely to us. We are social creatures who are constantly sharing information about ourselves and others to build trust, seeking intimacy through selective sharing, occasionally gossiping, and always managing our privacy with others as we maintain our personal, social, and professional relationships. We also have an instrumental interest in letting other people have the privacy to live their lives as they see fit. Undeniably, this value is a cultural one; as law professor Robert Post has argued, privacy rules are a form of civility rules enforced by law.

Solove agrees that privacy’s value is social. “Society involves a great deal of friction,” he argues, “and we are constantly clashing with one another. Part of what makes a society a good place in which to live is the extent to which it allows people freedom from the intrusiveness of others. A society without privacy protection would be oppressive. When protecting individual rights, we as a society decide to hold back in order to receive the benefits of creating free zones for individuals to flourish.”

Free zones let us play — alone as well as with others — with our identity and are an essential shield to the development of political beliefs. They foster our dynamic personal and political selves, as well as the social processes of self-government. In this way, privacy is essential to the kinds of robust, healthy, self-governing, free societies that represent the best hope against tyranny and oppression. As Edward Snowden puts it succinctly, “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

Privacy isn’t about Creepiness

Using creepiness as our test for privacy problems creates a problem of its own. If something isn’t creepy, the logic goes, then it probably isn’t something we should worry about. And it seems to follow that if people aren’t aware of a data practice, it’s fine.

Creepiness has three principal defects:

  • First, creepiness is overinclusive. Lots of new technologies that might at first appear viscerally creepy will turn out to be unproblematic.
  • Second, creepiness is underinclusive. New information practices that we don’t understand fully, or highly invasive practices of which we are unaware, may never seem creepy, but they can still menace values we care about.
  • Third, creepiness is both socially contingent and highly malleable. A pervasive threat to privacy or our civil liberties can be made less creepy as we become conditioned to it. Examples include:
    • The internet advertising industry, which relies on detailed surveillance of individual web-surfing.
    • The erosion of location privacy expectations on dating apps.
    • Facebook piggybacking on the greater willingness of people to share their lives and photos with people they actually knew. Their tricks were to:
      • use in-person social norms as bait for a much broader privacy heist.
      • keep pushing at the social and legal norms surrounding privacy, and changing its terms of service to allow it to access ever more personal information.

Privacy isn’t Primarily about Control

When Zuckerberg was called before Congress to testify about Facebook’s information practices during the Cambridge Analytica scandal, he argued over and over again that when it comes to privacy, Facebook’s goal, first and foremost, is to put its “users” in “control.”

Privacy as Control runs deep in our legal and cultural understandings of privacy. The basic approach of the Fair Information Practices is all about empowering people to make good, informed decisions about their data. The right to consent to new uses of our data and the right to access our data and correct it if it is wrong are examples of this approach, which has been common in U.S. privacy laws from the federal Privacy Act of 1974 to the California Consumer Privacy Act of 2018. Privacy as Control also runs through European data protection law and particularly through the GDPR, which enshrines strong norms of informed consent, access, correction, data portability, and other control-minded principles. American regulators have shared this view. As we have seen, for many years the Federal Trade Commission has called for a “notice and choice” regime to protect consumer privacy, even though the limitations of this approach became apparent over time.

Technology companies also lionize Privacy as Control. Google promises, “[Y]ou have choices regarding the information we collect and how it’s used” and offers a wide variety of “privacy controls.” In an online manifesto titled “A Privacy-Focused Vision for Social Networking,” Zuckerberg mused, “As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms,” and he offered a few key principles that this new, allegedly privacy-protective Facebook would adhere to, the first of which was “Private interactions. People should have simple, intimate places where they have clear control over who can communicate with them and confidence that no one else can access what they share.”

Unfortunately, it’s not that simple. What can be stated simply, though, is that Privacy as Control has been a spectacular failure at protecting human privacy for the past thirty years, particularly in the United States. Privacy as Control is an illusion, though like the best illusions it is a highly appealing one. There are four main problems with using control to solve problems of privacy, but they are big ones: (1) control is overwhelming; (2) control is an illusion; (3) control completes the creepy trap; and (4) control is insufficient.

Control is overwhelming

Woodrow Hartzog states the problem well when he explains, “The problem with thinking of privacy as control is that if we are given our wish for more privacy, it means we are given so much control that we choke on it.” Mobile apps can ask users for more than two hundred permissions, and even the average app asks for about five.

When a company’s response to a privacy scandal is “more control,” this simply means more bewildering choices rather than fewer, which worsens the problem rather than making it better. As scholars across disciplines have documented extensively, our consent has been manufactured, so we just click “Agree.” All of this assumes that we even know what we’re agreeing to.

Long or short, privacy policies are vague and they are legion. One famous 2009 study estimated that if we were to quickly read the privacy policies of every website we encounter in a typical year, it would take seventy-six full working days of nothing but reading just to get through them all.
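To put that figure in perspective, here is the back-of-the-envelope arithmetic, assuming an eight-hour working day (the assumption is mine, not the study’s):

```python
# 76 full working days of reading privacy policies, assuming 8-hour days.
working_days = 76
hours_per_working_day = 8

total_hours = working_days * hours_per_working_day      # 608 hours per year
hours_per_calendar_day = total_hours / 365              # roughly 1.7 hours, every single day

print(total_hours, round(hours_per_calendar_day, 1))    # 608 1.7
```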

Control is an illusion

Early tech evangelists imagined an internet with revolutionary potential to empower humans. What we got instead was one in which the interfaces governing privacy have been built by human engineers answering to human bosses working for companies with purposes other than revolutionary human empowerment (Silicon Valley’s advertising claims to the contrary notwithstanding). All of the rhetoric about putting “users” in control belies the fact that engineers design their technologies to produce particular results. These design choices limit the range of options available to the humans using the technology. Companies decide the types of boxes we get to check and the switches we get to flip. They also decide which set of choices goes in the basic privacy dashboard, which set goes in the “advanced settings,” and, even more important, which choices “users” don’t get to make at all.

Facebook’s engineers not only know which options the average user is likely to select, but they can use the nudging effects of choice architecture to produce the outcomes they want and that serve their business interests. Or they use other dark patterns to discourage their customers from exercising their privacy controls.

Control completes the creepy trap

Communications scholar Alice Marwick’s idea of “privacy work” is particularly illuminating. Marwick argues that we all engage in “privacy work,” uncompensated labor that we must engage in or else be considered at fault.

And thus the creepy trap is completed: “adtech” companies process human information in ways that stay below our creepiness threshold, with the purpose of selling targeted, surveillance-based ads that manipulate us into buying things.

The illusion of Privacy as Control masks the reality of control through the illusion of privacy. What is really being controlled is us.

Control is insufficient

Our privacy depends not just on what choices we make ourselves but on the choices of everyone else in society. Treating privacy as a purely individual value that can be given or bartered away (whether through “control” or sale) converts it into an asset that can be chipped away on an individual basis, distracting us from (and ignoring) the social benefits privacy provides.

Sometimes there is little or nothing we can do to prevent others from disclosing information about us. This can happen when companies set up pricing systems that rely on information disclosure, like “safe driver” discounts for car insurance contingent on your agreeing to have a black-box data recorder in your car, especially if such boxes were to become standard in passenger cars. Or when your child’s school decides to use a “learning management system” or other software whose privacy practices only the school can agree to. Or when a company voluntarily discloses data it collected about you to the government. Or when someone discloses their genetic data to a company, which, since blood relatives have very high genetic similarities, means they have also shared sensitive information about their close family members.

Privacy isn’t Dying

The idea that Privacy Is Dying is a weaker but more insidious version of the Privacy Is Dead argument. A society fueled by data has no place for privacy, we hear, and we should let it fade into the past like horse-drawn carriages and VHS cassettes. Besides, the argument goes, people in general (and especially young people) don’t care about privacy anymore. A good example of the Privacy Is Dying argument was offered by a young Mark Zuckerberg in 2010, responding to an interview question about the future of privacy on Facebook and the internet in general: “We view it as our role in the system to constantly be innovating and be updating what our system is to reflect what the current social norms are.”

Just because more information is being collected, it does not mean that Privacy Is Dying. Zuboff explains that “in forty-six of the most prominent forty-eight surveys administered between 2008 and 2017, substantial majorities support measures for enhanced privacy.” Polls by the Pew Research Center, which does extensive nonpartisan work on public attitudes toward technology, have also found that Americans are increasingly concerned about online data collection, believe the risks of collection outweigh the benefits, and support withholding certain kinds of personal information from online search engines.

Moreover, the very institutions that have the most to gain from the acceptance of the Privacy Is Dying myth often go to great lengths to protect their own privacy:

  • The NSA, for example, keeps its surveillance activities hidden behind overlapping shields of operational, technical, and legal secrecy. It took Edward Snowden’s illegal whistleblowing to reveal the NSA’s secret court orders from the secretive FISA Court. These orders allowed the NSA access in bulk to the phone records of millions upon millions of Americans, without any evidence that international terrorists were involved.
  • Law enforcement agencies have access to “sneak and peek” search warrants that allow them to read emails stored on the cloud, often never giving notice to the people being spied on, and they are secretive about their use of drones and “stingrays,” devices that pretend to be cell phone towers that access digital information.
  • Technology companies closely guard their privacy with aggressive assertions of intellectual property rules, trade secrecy law, and the near-ubiquitous use of NDAs, nondisclosure agreements that prohibit employees, visitors, and even journalists from revealing discreditable things about a company.

Three Privacy Values

Privacy is best understood not so much as an end in itself but as something that can get us other things that are essential to good lives and good societies. There are three such human values that I think privacy rules can and should advance: identity, freedom, and protection.

When we treat privacy as instrumental, the way we talk about privacy changes. We stop talking about creepiness, about whether we’re Luddites or about whether our friend’s privacy preferences are idiosyncratic. Instead, we start asking ourselves:

  1. what rules about human information best promote values we care about
  2. what the power consequences of those rules might be, and
  3. how we should use those rules to advance the values on the ground.

From this perspective, privacy becomes more neutral. This is important because privacy rules can promote bad things, too. And sometimes companies have a legitimate point that poorly crafted privacy rules can get in the way of economic activity without protecting anything particularly important.

1. Identity

Authenticity

Many technology companies are obsessed with “authenticity,” but perhaps none so much as Facebook. From its very beginnings, the social network has insisted upon a “Real Name Policy” under which its human customers are required to use their real names rather than pseudonyms.

While psychologists and philosophers use the term “authenticity” in a variety of ways, all of these ways have one thing in common: they produce benefits for the individual self. On the other hand, “authenticity” as used by social media companies often means the exact opposite:

  • “authenticity” is judged by the company rather than the individual (think presumptively “fake” Native American names or Salman versus Ahmed Rushdie).
  • this definition of “authenticity” primarily benefits the company by enabling better ad targeting to known humans.
  • it is “the community” over the individual — from Facebook’s perspective “authenticity creates a better environment for sharing.”

We deserve the right to determine for ourselves not just what we are called, but who we are and what we believe.

How Privacy nurtures Identities

At the most basic level, privacy matters because it enables us to determine and express our identities, by ourselves and with others, but ultimately — and essentially — on our own terms. Privacy offers a shield from observation by companies, governments, or “the community” within which our identities can develop. In other words, privacy gives us the breathing space we need to figure out who we are and what we believe as humans.

At the core of political freedom is intellectual freedom — the right and the ability to think and determine what we believe for ourselves. Intellectual and political freedom require the protective, nurturing shield of “intellectual privacy” — a zone of protection that guards our ability to make up our mind freely. More formally, intellectual privacy is the protection from surveillance or interference when we are engaged in the activities of thinking, reading, and communicating with confidants.

In order to generate new ideas, or to decide whether we agree with existing ideas, we need access to knowledge, to learn, and to think. The fight for communications privacy in letters, telegrams, and telephone calls was a long and hard one, and we must ensure that those hard-won protections are extended to emails, texts, and video chats. We also need private spaces — real and metaphorical — in which to do that work. Our identities may be individual, but the processes by which we develop them are undeniably social, and intellectual and other forms of privacy are needed throughout the processes of identity and belief formation, development, and revision.

Our political identities are critically important and have received special protection in the law for centuries. Privacy shields the whole self, enabling us to develop our whole personalities and identities, in ways that may be far removed from politics and society at large. Our identities, our senses of self, are complicated and gloriously, humanly messy. We human beings are complicated, we are constantly in flux, and our identities are affected by our environment and by our interactions with others. This self is neither rigidly determined by the mythic, autonomous core, nor is it rigidly determined by social forces; the self exists somewhere in between these two poles. This identity experimentation goes on throughout our lives. In other words, there is indeed a Me, but it is complicated, influenced, and shaped by all of the Yous out there. Yet even though You and other Yous influence Me, I can still resist, sometimes with success.

We often define ourselves, in part, in terms of what we are not. Groups engage in identity play in order to define who they are and who they are not (notice definitions once again at work in multiple directions, defining and restricting). As individuals, we can do the same thing — belong to groups, define ourselves in opposition to groups, or be part of a group and also separate from it.

Forcing

Identity forcing happens when our social or cultural environment defines us, forcing our identities into boxes we might not choose or might not even have drawn in the first place. While there is certainly room for identity play, the forcing effect of institutions can make that play emotionally, psychologically, or socially costly. Facebook’s mandate that the identities of its users be tied to a public, “real” name is another example of forcing. As technology scholars Oliver Haimson and Anna Lauren Hoffmann have explained, “Experimentation with representing one’s identity online can also allow people to embody potential future selves, which can be indispensable to developing one’s identity broadly.”

Members of marginalized ethnic and national groups, immigrants, racial minorities, sexual minorities, and women, among others, are all familiar with the practice of code-switching, whereby we act, speak, perform, and dress differently for different audiences. In fact, everyone code-switches, even if we don’t realize it. This is not “inauthentic”; it is instead the expression of different aspects of the self, all of which are Me. Privacy matters because it allows us to hold multiple identities without their crashing together, giving the lie to Zuckerberg’s stingy and self-serving notion of unitary identity as authenticity.

We are all of us different people at different times each day as we play different roles in society, and we are different people at different times in our lives as we evolve, mature, regress, explore, and play with our identities. But this is not dishonest, nor is it inconsistent. It is human. Forcing human beings into compatibility with digital or corporate systems, shaving off our rough or “inauthentic” edges, has undeniable human costs.

If we do have a false, inauthentic self, I would argue that it is the homogenized, flattened, forced self that is the false one. This is the forced self that may be desirable to interfaces, advertisers, and self-appointed arbiters of our middle school social standing, but it is worrying if we care about individuality, eccentricity, and the development of unique, critical individuals and citizens.

Exposure

Digital tools expose us to others in ways that are normalizing, stultifying, and chilling to the personal and social ways we develop our senses of self. And exposure can be devastating to identity. Alan Westin expressed this idea well:

Each person is aware of the gap between what he wants to be and what he actually is, between what the world sees of him and what he knows to be his much more complex reality. In addition, there are aspects of himself that the individual does not fully understand but is slowly exploring and shaping as he develops. Every individual lives behind a mask in this manner; indeed the first etymological meaning of the word “person” was “mask.”

Hopefully you agree with me that an attractive vision of freedom is one in which we are able to work out for ourselves and with those close to us what we like, who we love, what we think is beautiful or cool, and what we think human flourishing looks like. This is not the freedom to be just like everyone else but rather a more radical notion that good lives and good societies are ones in which there is individuality, diversity, eccentricity, weirdness, and dissent. This is an argument for authenticity the way psychologists describe it, not the way Facebook does.

Privacy is the key to making this system work. The philosopher Timothy Macklem puts it well when he argues, “The isolating shield of privacy enables people to develop and exchange ideas, or to foster and share activities, that the presence or even awareness of other people might stifle. For better and for worse, then, privacy is sponsor and guardian to the creative and subversive.”

2. Freedom

The “Truth Bomb”

Edward Snowden disclosed many things about the scope of U.S. government surveillance in the summer of 2013, but one revelation that went under the radar was the National Security Agency’s pornography surveillance program. The NSA wanted to identify and surveil “radicalizers” — people who were not terrorists but merely radical critics of U.S. policy. The plan was to surveil them to find their vulnerabilities, then expose those vulnerabilities to discredit them publicly.

Alan Westin explained in 1967 that “the modern totalitarian state relies on secrecy for the regime, but high surveillance and disclosure for all other groups. With their demand for a complete commitment of loyalty to the regime, the literature of both fascism and communism traditionally attacks the idea of privacy as ‘immoral,’ ‘antisocial,’ and ‘part of the cult of individualism.’ ”

We have made some progress against the specter of unregulated government surveillance, most notably with the passage of the 2015 USA Freedom Act, but we still lack a clear understanding of why surveillance can be harmful and how it can threaten political freedom. U.S. courts hearing challenges to government surveillance programs too frequently fail to understand the harms of surveillance and too frequently fail to allow legitimate claims to go forward.

Understanding surveillance

Like privacy, surveillance is a complex subject, and like privacy, it is neither always good nor always bad. But a better understanding of what surveillance is and how it works can help us to understand how to protect privacy when it matters. Sociologist David Lyon explains that surveillance is primarily about power, but it is also about personhood. Lyon also offers a helpful definition of surveillance as “the focused, systematic and routine attention to personal details for purposes of influence, management, protection or direction.”

To Lyon’s definition I’d like to add a further observation: surveillance transcends the public-private divide. This is a real problem under U.S. law because constitutional protections like the Fourth Amendment typically apply only against government surveillance. Private actors can undermine civil liberties free from constitutional constraint. For example, facial recognition companies like Clearview AI scrape millions of photos from the internet, combine them with government photo ID databases or Facebook profiles (both of which associate photos with real names), and then sell facial recognition products to the government.

In modern democratic societies, surveillance of all kinds is on the rise: location tracking from smartphones and smart cars, facial recognition technology, and AI and “big data” algorithms fueled by human information from a variety of sources and used for an ever greater variety of purposes.

“Liquid surveillance” is the spread of surveillance beyond government spying to a sometimes private surveillance in which surveillance subjects increasingly consent and participate. But here, too, consent can become an illusion. We cannot consent to secret surveillance, nor can we consent to structural surveillance like ubiquitous CCTV cameras installed by businesses or on public transport.

Then there is the problem of the “unraveling” of privacy described by legal scholar Scott Peppet, which occurs when other people opt in to surveillance, leaving the vulnerable exposed and isolated. Consider Progressive Insurance’s “MyRate” program, in which drivers can receive reduced insurance rates in exchange for the installation of a surveillance device that monitors driving speed, time, and habits. Drivers who don’t participate in this surveillance program not only pay more for their insurance, but their “privacy may unravel as those who refuse to disclose are assumed to be withholding negative information and therefore stigmatized and penalized.”

Intellectual Privacy and Political Freedom

In a democracy there is also a special relationship between intellectual privacy and political freedom. Intellectual privacy theory suggests that new ideas often develop best away from the intense scrutiny of public exposure. Protection from surveillance and interference is necessary to promote this kind of intellectual freedom. It rests on the idea that free minds are the foundation of a free society, and that surveillance of the activities of belief formation and idea generation can affect those activities profoundly and for the worse. This requires, at a minimum, protecting the ability to think and read, as well as the social practice of private conversations with confidants. It may also require some protection of broader social rights, whether we call them rights of association or rights of assembly, since our beliefs as well as our identities are socially constructed. It reflects the conviction that in a free society, big ideas like truth, value, and culture should be generated organically from the bottom up rather than dictatorially from the top down.

Surveillance and Power

The struggles over personal information that we have lumped under the rubric of “privacy” are the struggles that are defining the allocation of political, economic, and social power in our new technological age. Even in democratic societies, the blackmail threat of surveillance is real. Surveillance (especially secret surveillance) often detects crimes or embarrassing activity beyond or unrelated to its original purposes. Whether these discoveries are important, incidental, or irrelevant, all of them give greater power to the watcher.

Looking forward, it does not take much paranoia to imagine a spy agency in a democracy interfering in an election by blackmail (through the threat of disclosure) or even disclosure itself. The power effects of surveillance that enable blackmail are likely to become only greater as new digital tools for processing human information are developed and deployed.

Digital surveillance technologies have reached the point where their powers of persuasion threaten self-government itself. A system of unregulated micro-targeting means elections might be determined by data science rather than democratic practice, with swing voters subjected to the same processes by which companies manufacture “consent” to data practices. Indeed, the South Korean National Intelligence Service admitted it had conducted a secret campaign in its own country to ensure its preferred candidate won.

The power of sorting can bleed imperceptibly into the power of discrimination. Governments are increasingly using tools like these that enable invidious sorting (e.g. scoring people for criminal risk). Even where the data is of high quality and doesn’t reflect racism or inequality, algorithms are still not neutral. As data scientist Cathy O’Neil, the author of Weapons of Math Destruction, explains, “Algorithms are, in part, our opinions embedded in code”. They reflect human biases and prejudices that can lead to machine-made mistakes and misinterpretations. Marginalised groups frequently lack a meaningful ability to claim the right of privacy.

The right to claim privacy is the right to be a citizen entrusted with the power to keep secrets from the government, the right to exercise the freedom so lauded in orthodox American political theory and popular discourse. It is a claim to equal citizenship, with all the responsibility that comes with such a claim, and an essential and fundamental right in any meaningful account of what it means to be free. In order to achieve this promise in practice, privacy protections must be extended as a shield against inequality of treatment.

3. Protection

A digital society encourages consumers to depend ever more upon information products that operate in a virtual black box. Because privacy rules are information rules, this is in effect a privacy problem in a broad sense. Privacy rules will be necessary for the next stage of consumer protection, and the two bodies of law will increasingly need to merge.

Unlike European countries, which typically have both a consumer protection agency and a data protection agency, the United States has only the Federal Trade Commission at the national level policing “unfair and deceptive trade practices.” We have been developing laws and vocabulary to talk about the problem of government power since at least the ancient Greek philosophers, and in Anglo-American law since at least the Magna Carta of 1215. Consider Justice Oliver Wendell Holmes’s famous metaphor of the “marketplace of ideas” in Abrams v. United States (1919), in which he argued that we should protect the expression of “opinions that we loathe and believe to be fraught with death” because they might turn out to be correct. But our cultural understandings of private power are far less mature. American law places far fewer constraints on private actors than on government actors.

How should we think about Consumer Protection?

Many tech companies talk about consumer privacy using three distinct terms:

  1. users (who are often the product itself, yet are neither compensated for nor given ownership of their work)
  2. choices (which are unwitting, coerced, involuntary, or some combination of the three)
  3. innovation (which is vague, cast only as good, disappears when regulation appears, and is sometimes pushed as a fundamental right)

Innovation as a fundamental right is particularly ironic where fears of “stifling innovation” are commonly and frequently used to resist any attempts to protect privacy, an actual fundamental right protected not just by European law but by American law as well.

Modern life and the incentives placed on human beings living in late capitalist, modern, networked societies have overleveraged the time and energy of most consumers, sometimes up to and beyond the breaking point, and we shouldn’t pretend otherwise. Our consumer protection law should recognize the situation consumers are actually in and not treat them as if they have limitless resources of time and money, strong bargaining power and access to sophisticated lawyers. It should recognize their position as exposed, tracked, sorted, marketed, and at risk of manipulation.

Consumer privacy law for the information economy must ultimately become consumer protection law and four initial strategies seem the most promising — two new and two old. The two old strategies would be to reinvigorate protections against deception and unfairness, while the two new ones would be to protect against abusive practices and consider regulating services marketed as “free” when they are really not.

Building Digital Trust through Privacy Rules

Trust is everywhere, even when it’s not obvious, and crucially it’s at the core of the information relationships that have come to characterize our modern, digital lives. So much of our lives are mediated by information relationships, in which professionals, private institutions, and the government hold information about us as part of providing a service. But privacy is often thought about in negative terms, which leads us to mistakenly focus on harms from invasions of privacy, and places too much weight on the ability of individuals to opt out of harmful or offensive data practices. Privacy can also be used to create great things, like trust.

Information relationships are ones in which information is shared with the expectation it will not be abused and in which the rules governing the information sharing create value and deepen those relationships over time. If privacy is increasingly about these information relationships, it is also increasingly about the trust that is necessary to thrive, whether those relationships are with other humans, governments or corporations. This vision of privacy creates value for all parties to an information transaction and enables the kinds of sustainable information relationships on which our digital economy must depend.

If we want to build trust in technologies that run on personal information, four factors will be essential. Trustworthy institutions are:

  1. discreet about the human information they hold — we should be able to presume our data will not be disclosed or sold without our knowledge.
  2. honest about their data practices — requires humans whose data is processed to understand what’s going on.
  3. protective of the human information they hold — secure human data against breaches and minimise the damage caused when there are failures.
  4. loyal — act in our best interests and use data only to the extent it doesn’t endanger or negatively affect us.

With thanks

I’d like to give Neil Richards my eternal thanks for so adeptly illuminating for me what is really at stake in the fight for digital safety and privacy.


The history of US credit surveillance and its impact on privacy

I recently read a fantastic book by Josh Lauer titled Creditworthy: A History of Consumer Surveillance and Financial Identity in America. It’s easy to assume that privacy standards started to be whittled away with the broad take-up of the internet and social media. The reality is starkly different.

May 25, 2023 • 18 mins

Author: Angus Mackay

Commercial Credit (1840 – 1890)

Prior to the introduction of any formal commercial credit system across the US, credit was extended primarily on the basis of character, which together with capacity and capital made up the “three Cs”. Every moral qualification of a merchant increased his credit: a reputation for honesty, work ethic, perseverance, industry expertise, marital status, religion and so on. This trust was a function of familiarity, creditworthiness and reputation. It stood in stark contrast to the system of credit in Europe, which hinged on collateral, land and inherited fortunes.

The speed of the capitalist transformation of America in the early nineteenth century, the shortages of circulating currency, and the increasingly far-flung and complex credit relationships that supported it meant the liberality of the American credit system was a practical necessity.

By the 1830s, as migration brought growing numbers to the cities and populations spread inland, traditional forms of credit assessment were breaking down. This was a major problem for city merchants, especially importers, manufacturers, wholesalers, and other middlemen who sold to country retailers and tradespeople each spring and fall. With so much business at stake there was considerable pressure to trust people of unknown and unverifiable reputation.

The panic of 1837 underscored the risks. A cascade of defaulting debts wiped out investments, wrecked businesses and crippled the American economy. Lewis Tappan, who ran a silk wholesaling firm in New York, was bankrupted by uncollectable debts. But he also saw an opportunity to transform the credit system, and launched the Mercantile Agency in 1841 with the goal of compiling detailed information about business owners across the country. The Mercantile Agency quickly became synonymous with commercial credit and served as the model for future competitors.

Tappan’s innovation was to transform the business community’s collective knowledge into a centralised, subscription-based reporting service. Key to the agency’s success was the use of unpaid correspondents instead of expensive lone travelling reporters. Most members of this vast network were attorneys who filed reports in exchange for referrals to prosecute debt collections in their communities. The agency had 300 correspondents by 1844, 700 by 1846 and more than 10,000 by the early 1870s across nearly 30 offices, including several in Canada and one in London.

While routine correspondence between reporters, branches and the main office was completed by mail to limit costs, news of “serious embarrassments, assignments and failures” was immediately telegraphed. Until coded reference books appeared in the late 1850s, subscribers – wholesalers, merchants, banks, and insurance companies – received credit information only in the offices of the Mercantile Agency. Seeking a competitive edge, at least one major wholesaler strung its own telegraph line to one of the mercantile agencies, establishing the first system of real-time credit authorisation.

By the late nineteenth century specialised reporting firms had also formed to serve a variety of industries, including manufacturers of iron and steel, jewellery, furniture, shoe and leather, and construction materials. A Chicago journalist in 1896 wrote of “Barry’s Book”, which listed 35,000 retail lumbermen and 2,000 wholesale dealers: “it’s so thorough and comprehensive that a retail lumberman out in a Dakota village can’t stand the milkman off for half a dollar’s worth of tickets without every wholesale lumberman in the country being apprised of the fact before night”.

There were major problems with these early systems though. The quality of the intelligence was often an issue: much of it amounted to little more than hearsay, it could be corrupted by young lawyers seeking favour with the mercantile agency to win work, and it often went back more than 18 years. Converting qualitative reports into quantitative fact could be difficult, as comments were open to interpretation and lacked context. While it was easy to identify the extremes of the business community, it was the vast middle range that proved troublesome. Importantly, if you were the subject of less than favourable character reports there was very little you could do about it. There were no laws to protect you, no way to view or appeal your own report, and libel suits at the time were lengthy endeavours that were ultimately all decided in favour of the agencies. The introduction of full-time credit reporters in the 1860s went some way to solving these issues, but efforts to compel business owners to share signed financial statements were resisted or ignored well into the 1890s.

Tappan sold out of the agency to several associates in 1854 and the firm was renamed R. G. Dun and Company. By 1868 Dun’s chief rival was the Bradstreet Company, and the two competed aggressively until 1933, when they merged to become Dun & Bradstreet (whose Australian business now trades as illion).

The credit bureaus and reporting firms together formed an elaborate system of mass surveillance. No government agency in America had anything like the intelligence-gathering reach of the mercantile agencies at the time. To compound this further, the information wasn’t being collected for internal administration but to sell to anyone who had a need for it.

Consumer Credit (1890 – 1950)

When the norms of commercial credit were transferred to consumer credit during the late nineteenth century, the cocoon of privacy surrounding personal finance was similarly punctured. Retail credit reporting was almost entirely based upon an assessment of the borrower’s character and financial behaviour. A lot of these firms didn’t last long though, as the costs of collecting information were high and a gargantuan effort was required to keep reference books accurate and up to date. They also couldn’t control the sharing of the books and were reliant on small businesses whose record keeping was often very poor.

The financial crisis of 1893 provided the impetus for professionalised credit management, which in turn had an enormous influence on the institutionalisation of consumer credit surveillance. Large retailers introduced credit managers who codified principles and standards for supervising customer accounts. They increasingly leveraged telephones for credit checks, along with the telautograph (an early fax machine) and teletype technologies that came after, to improve the speed, accuracy and efficiency of credit departments and credit bureaus. The National Association of Credit Men was founded in 1896 but never numbered more than 200 members, primarily because of the natural distrust amongst competing retailers. It did spawn another grouping organised around the profession, the Retail Credit Men’s National Association, which grew to more than 10,000 members across 40 states and 230 cities by 1921. By the mid 1920s there were credit files on tens of millions of Americans.

As judge and jury in lending decisions, credit managers wielded enormous power over the lives and fortunes of consumers as they balanced the expectations of salesmen with keeping losses at or below 1 percent of sales. A credit professional was expected to have a grasp of every sphere of activity with even the slightest bearing on the future financial condition of his customers, from local weather conditions to national monetary policy. Fear was his primary lever, but he also had to keep the customer’s goodwill, and even make himself their confidant and advisor, so that customers would voluntarily keep him informed of their “condition”.

A retail credit department wouldn’t extend credit until the customer had an interview with the credit manager. These interviews became a referendum on a customer’s morality and social standing. One national association’s educational director observed: “I know of no single thing, save the churches, which has so splendid a moral influence in the community as does a properly organised and effectively operated credit bureau”. To be refused credit implied the customer was undeserving, deficient or suspect. When credit was granted, a credit limit was assigned without the customer’s knowledge and could be adjusted over time to control credit risk and reduce the need for constant approvals.

Unlike the highly secretive credit reporting of the 21st century, early credit bureaus and associations went to great lengths to publicise their work. The message of credit morality, delivered via mass media and in countless private consultations throughout the country, equated credit and character in explicit terms. Credit managers used World War 1 to tie their message of credit responsibility to national allegiance and to introduce thirty-day billing cycles. “Prompt Pay” and “Pay Up” campaigns were run in cities and towns across America to amplify these messages. Over the next decade the messaging shifted to one of “credit consciousness”.

In addition to their regular customer credit files, many credit departments and bureaus kept “watchdog” cabinets. These files included compromising information about divorces, bankruptcies, lawsuits, and accounts of irresponsible behaviour sourced from newspapers, court records and public notices. This information might not have an immediate impact on a customer’s credit standing but it could be used against them in the future.

By the 1920s, credit managers were no longer simply tracking customers and making authorisations, they were also mining their rich repositories of customer data for the purpose of targeted sales promotions. In this way systematic credit management began to develop into an instrument of social classification and control with broader implications.

It is surprising that credit reporting had generated so little public reaction up to this point. Occupation, with its connection to both the amount and regularity of income, favoured salaried professionals with an established credit history and good references. Geography and buying patterns were other metrics used to make generalisations, and racial and ethnic prejudice were codified as standard operating metrics to discriminate amongst applicants. All these early efforts were blunt, punitive tools used to identify the best and worst cohorts in the community. They also created, supported and sustained social prejudice in many of its forms across America.

By the 1930s, credit managers began to realise the value of “customer control”. Their most profitable credit risks were already on their books, and it was far more difficult and expensive to acquire new customers. Customer control developed in credit departments rather than marketing departments because the credit departments controlled all the insights. Personalised letter campaigns enjoyed high conversion rates and were just as effective at reengaging inactive customers as at capturing a higher share of wallet from the best customers across more departments. As customer control became more sophisticated, retailers attempted to further differentiate their credit customers by price line, taste and average expenditure to regain the personal touch that had been lost in the development of impersonal mass retailing.

Despite the prevalence of a national credit reporting system, many retailers in the 1940s continued to grant credit on the basis of an honest face or a personal reference. Many retailers didn’t have their own credit departments or full-time credit managers. Barron’s reported in 1938 that only 10% of small business owners employed a credit manager, though 91% offered credit terms to their customers. By the start of World War 2 credit bureaus had files on 60 million people.

Spies and Social Warriors

Credit bureaus were essential to the banks and oil companies that launched aggressive credit card programs in the 1950s, which meant the credit departments of the large retailers no longer had a monopoly on insights into their customers’ buying habits.

By the early twentieth century two basic types of credit reports dominated. Trade clearances were the least expensive and most time-sensitive reports and included only basic identifying information like name, spouse’s name and current address, plus ledger data consisting of a list of open accounts, the maximum balances and the customer’s history of repayment. Antecedent reports were comprehensive summaries of an individual’s personal, financial and employment history. Typical information included race, age, marital status, number of dependents, current and former employers and positions held, salary and other sources of income, bank account information, any bankruptcy or legal actions, and whether they owned or rented. They also included terse commentary on home life, personality, reputation and so on.

There were also two styles of data collection. The Retail Credit Company (RCC) was a national reporting agency that employed local investigators, worked primarily in the insurance business and would later be renamed Equifax in 1976. Its approach relied on direct investigation via interviews rather than shared ledger data, and extended to family members, former employers, references, club members, neighbours and former neighbours, and financial and professional people. By 1968 the RCC had more than 30 branches and employed more than 6,300 “inspectors”.

In contrast the Association of Credit Bureaus of America (ACB of A) was a network of independent affiliated members and relied on customer account information and payment histories submitted to the bureaus by local creditors. They also conducted direct investigations to flesh out new or old files and spoke to employers, banks and landlords. Landlords were especially prized because they often included comments on personal habits and manners as well as rent payment records, number of dependents and history at the address. They would also speak with neighbours, the local grocer or drug store to fill in any gaps.

Bureau investigators were advised to cultivate trusted informants, to meet privately to ensure confidentiality and to conceal notebooks or pre-printed forms that might arouse suspicion. They also created derogatory reports that included any information that could be used to discredit an applicant, drawn from press clippings, public records or member feedback. These reports were used to fill the gaps in real-time information and were cheaper than ordering a full report. They included news clippings on accidents and calamities and even stillborn or premature births, on the logic that such events usually involved new debts to doctors, hospitals and morticians. Bureau clerks would also descend on courthouses and municipal offices daily to copy all records pertaining to real estate, civil and criminal lawsuits, tax assessments and liens, bankruptcies, indictments and arrests. This air of espionage would come under harsh criticism during the late 1960s.

Many bureaus systematically repackaged and resold their information to third parties – insurance companies, employers, car dealers, landlords and law enforcement. Employer reports could be sold for more than three times the price of a credit report. Some bureaus began branching into new promotional services, including pre-screening programs and the sale of customer lists.

Many post-war bureaus also ran promotional businesses by offering “newcomer” services based on the Welcome Wagon business promotion model, thinly disguised as a charitable civic association. Hostesses would greet a new family when they moved into town to offer them helpful information on schools, shopping, churches and other community amenities. The primary purpose was to convince them to sign up for credit services in their local town. With either a completed application or a former address in hand, investigations could be made to determine whether complimentary charge accounts would be offered. In some places these services were integrated into local credit card programs often run by the credit bureaus.

Dallas’ Chilton bureau was unique in its aggressive effort to marry credit surveillance with credit promotion. It issued its own credit card in 1958 and sold pre-screened lists of top-rated customers that could be sorted by mailing zones, then by income, age, profession, renter or property owner. This type of targeted marketing and consumer analytics was far ahead of its time and is now a core service offered by all three major credit bureaus. The Dallas bureau’s rapid expansion in the 1960s with the purchase of more than 40 bureaus and the computerisation of its credit reporting network and list marketing business, eventually led to the creation of Experian.

Both credit reporting agencies and credit managers were motivated to maintain confidentiality. It protected their competitive positions and helped them to avoid libel suits. Applicants would never be given a specific reason why they were rejected, and only told their credit report was incomplete or insufficient. They were also never able to see their own credit record, and in any interview a credit manager had with those who wanted an explanation, it was used as another opportunity to enhance the applicant’s credit file.

However, in their perpetual quest to vanquish deadbeats and to safeguard the nation’s credit economy, bureau operators saw themselves as patriotic agents of social change. Bureau files were completely opened to government officials in the spirit of public service. This started shortly after World War 1 and turned into a paid relationship in 1937, when the US Department of Justice contracted with the National Consumer Credit Reporting Corporation on behalf of the FBI. A symbiotic relationship developed because bureaus were dependent on government agencies for access to courthouses and municipal offices, and public agencies relied on detailed investigative reports to run loan programs like those of the Federal Housing Administration and Veterans Affairs. Non-credit-granting agencies, like the FBI and IRS, can still get access to bureau files today with a court order.

The industry faced condemnation from lawmakers and the public in the late 1960s. Surprisingly, the appropriateness of the fundamental metrics of credit reporting wasn’t brought into question; the focus was the lack of rules around the collection, sharing and correction of credit information. The Fair Credit Reporting Act (FCRA) of 1970 and the Consumer Credit Protection Act of 1968 placed restrictions on these activities, but the underlying logic that a credit record was relevant to employment and insurance risks didn’t change. The restrictions included:

  • Credit reports could only be purchased for a “legitimate business need”.
  • Bureaus had to disclose the “nature and substance” of an individual’s file as well as the sources if requested by the customer.
  • Adverse items were to be deleted after 7 years (14 years for bankruptcies).
  • If you were denied credit, insurance or employment, or your insurance premiums were raised on the basis of an adverse credit report, you had to be notified and given the contact information of the credit bureau.
  • Individuals for whom an investigative report was ordered had to be notified.
  • For employment reports, applicants had to be notified if the report included an adverse public record.

Serious problems quickly emerged around the interpretation of the FCRA that left a lot open to the discretion and abuse of credit bureaus. The laws did even less to protect privacy because government agencies could still obtain basic information without a court order and the content of the bureau files was unrestricted. Since credit bureau violations were very difficult to prove, and long-standing privileged communication laws protected them from defamation, they had little fear of being sued and therefore no incentive to rein in their information collection practices.

Relevance was another problem for credit evaluation because it wasn’t clear where financial information ended and non-financial information began; the two informed each other. This was only partially addressed by the Equal Credit Opportunity Act (ECOA) of 1974, which made it illegal to deny credit on the basis of gender or marital status. Race, religion and age could still be used to discriminate against applicants, even though many lenders had already removed these from their scoring models. This was complicated by an ECOA requirement to give each declined applicant a specific reason for the rejection. Lenders argued it was impossible, but they were more concerned about defending some of the metrics they were relying on: if more metrics were excluded from their models it could have an impact on their profits. And even when gender or marital status was removed from a model, plenty of “secondary” variables offered reliable proxies for them.

Until the late 1980s, lenders relied on information collected from the credit application alone, primarily because it reduced their spending with the credit bureaus, even though bureau data offered an opportunity to reduce some of the discriminatory nature of credit decisions. The practice of excluding an applicant’s past payment history from credit decisions came in for heavy criticism, which eventually led to its inclusion.

Computerisation (1950s & 60s)

The automation of banking and accounting during the late 1950s caused many to predict that a paperless office was on the horizon and that a single identification card would replace checks and cash and link a person’s bank, credit and personal information. Although a checkless society was technically possible, the problem of personal identification proved insurmountable at the time. Mistaken identity and fraud had already become major problems after World War 2. It would also have required a new high-speed surveillance infrastructure to identify and monitor the nation’s population.

To combat these fraud issues, verification services started to proliferate:

  • Validator was adopted by Macy’s in 1966 and allowed sales clerks to authorise credit purchases by entering a customer’s account number into an electronic desktop device.
  • Telecredit was used by Carson, Pirie, Scott and was a combination touch tone phone and audio response unit connected to a computer.
  • Veri-Check was created by the Dallas Chilton bureau and combined a database of names, physical descriptions and up to date check cashing histories for local individuals.

The first mover in the computerisation of credit records was Credit Data Corporation (CDC), a private firm based in California. It was led by Harry Jordan, a biophysics graduate who returned home to run the family business, the Michigan Merchants Credit Association, after his father’s death in 1956. Rather than working to gain the cooperation of local bureaus across the country, he focused on the wealth and mobility of three regional zones that accounted for 43% of the US population: LA to San Francisco, Boston to Washington and Chicago to Buffalo.

CDC digitised its LA bureau at a cost of $3 million in 1965 using IBM 1401 computers leased for $20,000 per month. The bureau provided reports in 90 seconds at a cost of 63 cents per enquiry (33 cents if no record could be produced). The next year the cost of an enquiry fell to 22 cents. By 1967 CDC boasted state-wide coverage with more than 11 million records. The first step towards its goal of permitting rapid access to credit information from anywhere in the US came in 1969, when the New York office could remotely access credit data from LA.

While centralised reporting spelled the end for the local credit bureau, it was widely assumed in the mid-1960s that banks, retailers and credit card companies would be better positioned to dominate credit reporting. Retailers gradually joined banks as heavy computer users but were not at the forefront of the adoption of new technologies. Banks were far ahead of all other major stakeholders in the market for consumer credit information and had already revolutionised check clearing. What held bankers back, though, was their conservatism and a long history of strict client confidentiality. In contrast, the only reluctance credit bureaus had about sharing was protecting information on their best customers. From the beginning many banks declined to participate in local reporting ventures, and even when they did they shared only basic identifying information and ballpark figures. Instead, banks relied on their own in-house records and intrabank relationships for customer information.

The problem for the banks at the time, though, was the sheer volume of consumer credit their customers were accumulating across charge accounts and credit cards. It was difficult to know a loan applicant’s true financial position without engaging with a credit bureau. In 1965, ACB of A members received about 10% of their income from banks. The seemingly mysterious ascent of CDC is explained by its main subscribers: banks, credit-card-issuing oil companies and national credit card issuers like American Express and Bank of America. CDC’s decision to exclude any information about a person’s race, religion, psychological profile or personality, medical records or gossip culled from neighbours was a crucial point of differentiation. The banking industry’s defence of consumer privacy was quite forward looking. It unfortunately didn’t prevent a race to bring American consumers under an increasingly totalising system of private-sector surveillance.

There was a furious response across the credit bureau industry as they could see the potential to be cut out of the credit reporting loop altogether. In 1966, a new consortium, Computer Reporting Systems Inc., was formed in LA to help Southern California bureaus compete with CDC, and a few years later it was computerising and linking more than 40 bureaus in Arizona, Nevada and Southern California. By 1967, Credit Bureaus Inc. was computerising records across 40 bureaus in Oregon, California, Washington and Idaho. ACB of A’s challenge was that computerisation meant centralisation. Regardless, the Dallas Chilton bureau had IBM 360 computers set up in 1966, and the Dallas bureau alone reduced 1 million records stored across 300 m² of office space to a data cell the size of an office waste basket.

Computerisation importantly led to a new alphanumeric “common language” to describe creditworthiness that would address the inconsistencies of historical credit reporting. The most important of these was COBOL (Common Business Oriented Language), which used a reduced character set and naturalistic English-language commands. There was still a common problem to be solved though: how to identify an individual. The use of the Social Security number by federal agencies was already expanding in the 1960s and no rules prevented its use outside of government. Its broader adoption was legitimised by the American Bankers Association when, in 1968, it recommended the number be used for a nationwide personal identification system.

Credit Scoring

Statistical credit scoring developed in the late 1950s as business consultants and researchers began using computers to create sophisticated scoring systems for major banks and retailers. Computer-assisted credit scoring caused a fundamental shift in the concept and language of creditworthiness, even more so than computerised reporting. Interaction between credit managers and applicants ceased and risk management shifted from a focus on the individual to being a function of abstract statistical risk.

Fair, Isaac and Company succeeded in bringing credit scoring into mainstream commercial practice using the same discriminant analysis David Durand had used in his 1941 study on consumer finance. The company was founded by two former analysts from the Stanford Research Institute: William Fair, an electrical engineer, and Earl Isaac, a mathematician. They received a break when one of the nation’s leading consumer finance companies, American Investment Company (AIC), hired them to analyse its credit files.

General Electric Credit Corporation (GECC) had invested $125 million developing credit scoring by 1965 and by 1968, a third of the top 200 banks were using credit scoring and another third considering it. The promise of credit scoring though took time to develop as the determinants of creditworthiness needed to be identified. Credit applications of 40 or more questions became fishing expeditions for worthy metrics. A second problem faced by early adopters was maintaining the validity of scoring systems across different populations. No two businesses were the same because no two businesses dealt with exactly the same borrowers. Even small behavioural or socioeconomic differences could skew results at the risk of disastrous miscalculations. Over time, small changes in a statistical pool of customers caused a model to lose its predictive power. Continual resampling was necessary to adjust the models.
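To make the mechanics concrete, here is a minimal sketch of the kind of discriminant-analysis scoring this era pioneered. The applicant fields, data and relationships below are entirely hypothetical; real scorecards were fitted to each lender’s own files and, as noted above, had to be continually resampled as the customer pool drifted.

```python
# A minimal sketch of discriminant-analysis credit scoring in the spirit of
# Durand's 1941 study. All features, data and weights are synthetic and
# illustrative, not drawn from any real scorecard.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 2_000

# Hypothetical applicant features: years at current job, years at address,
# monthly income, and number of open accounts.
X = np.column_stack([
    rng.exponential(4, n),       # years_at_job
    rng.exponential(5, n),       # years_at_address
    rng.normal(3_000, 800, n),   # monthly_income
    rng.poisson(3, n),           # open_accounts
])

# Hypothetical "good payer" label, loosely tied to stability and income.
p_good = 1 / (1 + np.exp(-(0.2 * X[:, 0] + 0.1 * X[:, 1] + 0.001 * (X[:, 2] - 3_000))))
y = rng.random(n) < p_good

# Fit the discriminant model, then score a new applicant.
model = LinearDiscriminantAnalysis().fit(X, y)
applicant = np.array([[2.0, 1.0, 2_800, 4]])
print("estimated probability of repayment:", model.predict_proba(applicant)[0, 1])
```

In practice the model would be refitted on fresh samples at regular intervals, for exactly the reason described above: small shifts in the applicant pool erode its predictive power.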

By converting credit decisions into uniform, quantitative rules, scoring systems also allowed managers to quickly generate reports that displayed patterns of profit or loss and the risk of their various portfolios. The comprehensive management information system created from credit scoring almost rivalled its main function in importance. While risk-based interest rate pricing was explored, and was standard practice in the insurance industry, the real power of credit scoring came from its ability to let creditors push credit risk to the furthest limit at the lowest margin, and from the development of “subprime” lending.

CDC was acquired in 1968 by TRW, a defence industry giant that engineered nuclear missiles and satellites and saw an opportunity to acquire a big proprietary database and a dominant market position. Over the next two decades, TRW and other large computerised bureaus would develop a dizzying array of credit screening and marketing programs. The FCRA, with its modest requirements and loose definitions, did nothing to address the larger forces that were shaping the future of consumer surveillance.

The rise of data brokers (1970s – 1990s)

During the 1970s databases proliferated and became integral to everyday life in America. By 1974 the government was operating more than 800 databases holding more than 1 billion records on its citizens. Without a legal framework to limit data sharing in any way, consumer information became a gold-rush commodity that circulated rapidly between a dizzying array of commercial interests. RCC’s name change to Equifax in 1976 signalled this shift to very big consumer data. Each new venture of the big three data brokers (including TransUnion and Experian) fed data back to its parent entity, boosting customer insights. By 1980, 70% of all consumer credit reports were provided by five bureaus. By the end of the 1980s, Equifax earned 10–20% of its revenue from pre-screening programs designed to convince merchants to accept them and customers to use them.

The integration of credit bureau data into credit scoring models in the mid-1980s allowed the take-up of generic credit models, because risk models based on this data were generalisable. Prior to this, each bank, retailer or business had to build its own credit scoring models at a cost of $50,000–$100,000, and these then needed to be constantly resampled. In 1989, Fair Isaac introduced a new credit model that would become the industry standard, translating risk rankings into a FICO score between 300 and 900, with higher numbers representing lower risk of default. The adoption of these models by the mortgage giants Freddie Mac and Fannie Mae in 1995 institutionalised the approach.
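The idea of translating an estimated default probability into a three-digit score band can be shown with a toy calculation. The base score, points-to-double-the-odds and the 300–900 band below are made-up constants for illustration only, not Fair Isaac’s proprietary scaling.

```python
# Illustrative only: map a model's estimated default probability onto a
# score band like the one described above. The offset and slope are
# hypothetical, not FICO's actual formula.
import math

def score_from_default_prob(p_default: float, floor: int = 300, ceiling: int = 900) -> int:
    odds = (1 - p_default) / max(p_default, 1e-9)   # odds of repaying
    raw = 600 + 50 * math.log2(odds)                # assumed base score and points-to-double-odds
    return int(min(max(raw, floor), ceiling))

for p in (0.30, 0.10, 0.02):
    print(f"default probability {p:.0%} -> score {score_from_default_prob(p)}")
```

The key property is the one the generic models relied on: applicants are ranked on a single, comparable scale regardless of which lender asks the question.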

As credit risk became passé, the new focus became customer profitability. It was no longer enough to clear the credit hurdles; you had to clear the “lifetime value” hurdle as well. By the late 1990s all three major data brokers had their own risk modelling units and there were literally dozens of generic models predicting risk and profitability. Delinquency alert models allowed lenders to continuously track the risk and performance of an individual across all their accounts. New scoring models were developed for utility companies, telephone carriers, car dealers, property insurers and healthcare providers.

Credit bureaus finally turned to consumers themselves to sell services that helped them monitor their credit reputations. TRW (Experian) was the first in 1986 with a $35-per-year subscription that offered access to your own credit reports, automatic notifications when your credit records were queried, and a special service for cancelling lost or stolen credit cards. The program was heavily advertised and quickly enrolled 250,000 subscribers in California alone. Critics noted you could already buy a copy of your credit report for a nominal fee, and that bureaus were simply shifting their costs to consumers, since the service was only necessary because of the bureaus’ own inability to protect their data. They also used the service to gather yet more information from subscribers that they could then repackage and sell.

The announcement in 1990 of the Lotus Marketplace CD, which was to sell for $700 and contained profiles of 120 million adults and 80 million households combining census data, postal service data, consumer surveys and Equifax’s own credit files, led to one of the first mass privacy protests in America. Equifax decided to scuttle the product because of the uproar it caused, while protesting that it didn’t understand what all the fuss was about when all of this information on consumers was readily available. Credit bureaus had to abandon their marketing list businesses in the 1990s as they were slapped with multiple government lawsuits, but a key concession allowed them to continue to sell personal information (name, phone, zip code, birth date, social security number etc.). By then this kind of information was even being sold by the banks. The boundaries between credit bureaus and financial institutions had collapsed. In 1999, the CEO of Sun Microsystems, Scott McNealy, summed it up bluntly: “You have zero privacy anyway – get over it”.

Closing Thoughts

If the issue of privacy has been such a low priority for so long in the US, it does make you wonder why it took until 2016 for the European Union to create its General Data Protection Regulation. The three dominant data brokers have been exploiting the lack of coordination and development in global privacy laws for decades.

Australia has a great opportunity, following the recent review of our privacy laws, to make amends for the slow development of our privacy rights. Doing so would also present a huge commercial opportunity for the forward thinking amongst us.


Gnosis and the future of web3

In a recent trip to Berlin, I was very excited and extremely appreciative to have the opportunity to meet with the COO of Gnosis, Friederike Ernst. The Gnosis team have led so many movements in web3, so it was great to hear her thoughts on how they see the future of web3.

Aug 31, 2022 • 16 mins

Author: Angus Mackay

Angus: My first question is about Futarchy and prediction markets. Prediction markets are easily explored where there are objective questions to be answered. Where the questions posed are more subjective it’s hard to boil an answer down to a single metric. Does Gnosis believe there are very broad applications for prediction markets?

Friederike: So, at the core, that’s an Oracle problem. How do you ascertain what is true and what is not, and often that’s a difficult question. It’s actually one question that we’ve never tried to solve. We’ve mostly always used trustless Oracle providers where anyone can take a view on how something will turn out, and if other people don’t agree, there’s an escalation game. This process continues and attracts more money so a collective truth can be found. The idea is that you basically don’t need the escalation game, but just the fact that it’s there, keeps people honest.

Angus: Other than the sports and political use cases that you usually associate with prediction markets, what do you see are the opportunities to expand their application?

Friederike: Our core business model has pivoted away from prediction markets. I actually do think there’s a large arena in which we’ll see prediction markets happen and play out in the future. I just think there’s a paradigm shift that needs to happen first.

As prediction markets work today, you use a fiat currency as collateral (Euros, USD, AUD etc.) and you have a question. If the question turns out to be true then the people holding the YES token win the collective stake of those people who hold the NO token. If the question turns out to be false then the people holding the NO token win the collective stake of those people who hold the YES token.
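To make that mechanic concrete, here is a minimal sketch of a fiat-collateralised binary market: each unit of collateral is split into a YES and a NO token, and only the winning side redeems the collateral at resolution. The class, names and numbers are illustrative assumptions, not Omen’s actual contracts.

```python
# A toy model of the fiat-collateralised prediction market described above.
from dataclasses import dataclass

@dataclass
class BinaryMarket:
    collateral_per_pair: float = 1.0  # e.g. 1 USD locked per YES/NO token pair

    def split(self, dollars: float) -> tuple[float, float]:
        """Lock collateral and mint equal amounts of YES and NO tokens."""
        pairs = dollars / self.collateral_per_pair
        return pairs, pairs  # (yes_tokens, no_tokens)

    def redeem(self, yes_tokens: float, no_tokens: float, outcome_is_yes: bool) -> float:
        """After resolution, only the winning side redeems the locked collateral."""
        winning = yes_tokens if outcome_is_yes else no_tokens
        return winning * self.collateral_per_pair

market = BinaryMarket()
yes, no = market.split(100.0)          # lock $100 until the market resolves
print(market.redeem(yes, 0.0, True))   # 100.0 if you kept only YES and YES wins
```

The capital-efficiency problem Friederike raises is visible in the sketch: the $100 sits idle inside the market until resolution.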

Typically prediction markets run over quite some time. An example would be: will Elon Musk actually buy Twitter? Let’s say this plays out until the end of the year. That’s the typical period you would give your prediction market to run. That means your collateral is locked up for six months. That’s really bad for capital efficiency, because you could do other things with that collateral in the meantime.

Angus: Could you use a deposit bond or another type of promise to pay so you’re not tying up your capital?

Friederike: In principle that’s possible, it just makes the trustless design a lot harder because you need to make sure that people don’t over-commit. In a way this is what happens with insurance. Insurance markets are predictions on whether certain events will happen, like your house burning down. It pays out if the answer is yes and it doesn’t pay out if the answer is no. The way that the insurance company gets around the capital efficiency problem here is by pooling a lot of similar risks. Actuaries then work out which risks are orthogonal to each other so their portfolios are not highly correlated.

Again, in principle you can do that but it’s very difficult to actually make these in a trustless manner, especially for prediction markets. There are types of insurance that run over a very short amount of time. Flight insurance is a good example. If you’re flying out to London next week and you want insurance that pays out if your flight is delayed, the probability may be 10–20% of a delay occurring. You could probably run a prediction market on this risk and it still be capital efficient but for many other things this is less clear.

If you look at markets that have taken off in a big way in the past, they have tended to be markets that are not growth limited in the same way. The stock market is a good example, perhaps not right at the moment. If you were to invest in an index fund without knowing anything about stocks, you would still expect it to go up over the course of 5, 10 or 30 years. This is not the case for prediction markets. Prediction markets operating with fiat money as collateral are inherently zero sum. If I win someone else loses and that’s not a hallmark of markets that take off in a big way.

What I think will happen at some point, and you can mark my words, we will see prediction markets that aren’t backed by fiat collateral.

If I come back to my Elon Musk and Twitter example, and you use Twitter stock and not US dollars as collateral, you need to stake Twitter stock until the end of the year if you think the takeover will be successful and you also need to stake Twitter stock if you don’t. Either position gives you the same exposure to Twitter stock. Holding Twitter stock still exposes you to the eventual outcome of the takeover. If you have a view of what will eventually occur you can hold onto one token and sell the other one; together they would give you a market value for Twitter. This unbundles the specific idiosyncratic risk of a Musk takeover from all the other risks that may affect the value of Twitter stock at any point in time. It becomes a pure vote on a Musk takeover, allowing you to hedge against this particular event whilst maintaining exposure to all other risks and events affecting Twitter.
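A rough sketch of this stock-collateralised variant: one tokenised Twitter share is split into a token for each outcome, the pair always redeems one share, and selling one side leaves a pure bet on the takeover. The token names and prices below are invented for illustration.

```python
# Illustrative only: a stock-collateralised binary market on a takeover.
def implied_share_value(deal_token_price: float, no_deal_token_price: float) -> float:
    # The two tokens together always redeem exactly one share, so their
    # combined market price is an implied value for the share itself.
    return deal_token_price + no_deal_token_price

def payoff_per_share(hold_deal_side: bool, deal_happens: bool,
                     share_price_at_resolution: float) -> float:
    # Holding only one side is a pure bet on the event: you recover the share
    # (at its then price) only if your side wins; otherwise the token expires worthless.
    return share_price_at_resolution if hold_deal_side == deal_happens else 0.0

print(implied_share_value(22.0, 18.5))        # implied share value of 40.5
print(payoff_per_share(True, True, 54.20))    # deal closes, e.g. at the offer price
print(payoff_per_share(True, False, 32.00))   # deal falls through, the DEAL token is worth 0
```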

This will open up an entirely new market. Obviously for this to happen a lot of things need to happen first. You need tokenised stocks, liquid markets for these tokenised stocks, and market makers. I think this will happen in a big way at some point, that’s not now, but perhaps it’s in 5 years’ time.

So we’ve built a decentralised prediction market platform. It’s out there, you can use it, it exists. It’s called Omen. We’ve now moved on to other things.

Angus: In Gnosis Protocol v2 and the CowSwap DEX, you’re using transaction batching and standardised pricing to maximise peer-to-peer (p2p) order flow, and you’re incentivising ‘solvers’ to flip the MEV problem. Is the vision of Gnosis to keep extending the powers of incentivisation to eventually minimise or eliminate MEV?

Friederike: Yes. I think MEV is an invisible tax on users. Charging fees for providing a service is completely fine but these fees should be transparent. 

Where fees are collected covertly it’s the ecosystem that suffers. It’s not a small fee either. I want to be very explicit that we don’t agree with this and we are doing our best to engineer our way out of it for the ecosystem.

Angus: I can’t believe this still happens in just about every traditional financial market.

Friederike: Yes, but on blockchains the extent to which it’s happening is much larger. There have been analyses of this and it amounts to just under 1% of all value transacted, which is an enormous covert fee. I know there’s an entire camp of people that defend this as necessary to keep the network stable. It’s very good framing but I don’t think it’s the truth of the matter.

Angus: With regulation of DeFi coming as early as 2024 in Europe (MiCA), how can the DeFi industry ensure that any legislation is soundly based and doesn’t restrict or destroy future opportunities?

Friederike: I think I would disagree with the premise of the question. The way you posed the question suggests that there’s going to be a large impact on DeFi and I don’t necessarily think that’s going to be the case. I think regulation, and this goes for any sort of law, is only as good as a state’s ability to enforce it. If society changes too fast and laws are seen as outdated, that’s not a great outcome for society. The powers of decentralisation are strong and they’re here to stay.

Angus: What have been some of the key lessons you’ve learned in relation to governance with GnosisDAO?

Friederike: That’s a really good question. When it comes to governance you can see we’re at the very beginning, these are the very early days. We as humanity have been experimenting with governance for 200,000 years or more. Ever since people have banded together in groups you had some form of rules and governance in play. 

To assume web3 is going to be able to reinvent governance in the space of a couple of years is a very steep assumption. We’re seeing the same problems as everyone else. 

Low voter turnout, a reluctance to engage in the governance process and so on, despite the fact that we’re one of the DAOs with the highest voter turnout. There are multiple initiatives being tested to encourage higher voter participation, like forcing people to delegate their vote, nomination schemes that involve liquid democracy, and others. We’re thinking about this very actively.

We have a project within Gnosis called Zodiac that builds DAO tooling. Lots of DAOs within the Ethereum ecosystem build on top of the Gnosis Safe as a multisig, and Zodiac builds modules that you can install for these safes: features that traditional software companies would call granular permissions management. An example would be giving individual keys the ability to do some things without needing approval from an entire quorum of the multisig or without having a snapshot vote. It gives you the ability to customise what a particular key can be used for and under what conditions it can be revoked.

One of the things we’re using custom keys for is to delegate active treasury management, this includes short-term yield farming and strategic yield farming, to another DAO (Karpatkey). 

They don’t have custody of our funds. They have a whitelist of actions they can execute on the GnosisDAO wallet. They can’t withdraw our funds, for instance, and they can’t rebalance between different pools and so on. You can tell from how coarse this tooling is that it needs to be improved significantly.
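The underlying idea of a scoped key is simple enough to sketch. The snippet below is not the actual Zodiac or Gnosis Safe API, just a hypothetical illustration of a delegate key that can only call pre-approved actions on a treasury, with everything else rejected.

```python
# Hypothetical sketch of a whitelisted delegate key (not the Zodiac API).
# Only pre-approved (target, function) pairs may be executed; anything else fails.
ALLOWED_ACTIONS = {
    ("0xPoolA", "deposit"),
    ("0xPoolA", "withdrawToSafe"),   # funds can only return to the DAO's own safe
    ("0xPoolB", "claimRewards"),
}

def execute_as_delegate(target: str, function: str, args: dict) -> str:
    if (target, function) not in ALLOWED_ACTIONS:
        raise PermissionError(f"{function} on {target} is not whitelisted for this key")
    return f"executed {function} on {target} with {args}"

print(execute_as_delegate("0xPoolA", "deposit", {"amount": 1_000}))
# execute_as_delegate("0xPoolA", "transferOut", {"to": "0xAttacker"}) would raise PermissionError
```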

Most democratic societies have a representative democracy, like Germany, Australia, the US and others. A few societies have a direct democracy, like Switzerland. Voting is not compulsory in Switzerland and they also suffer from low voter turnout. You basically vote on everything, and in some cantons you vote on whether citizenship should be granted to every single person that applies.

Angus: Wow, that’s granular!

Friederike: It’s very granular. Another example is the decision to build a playground two streets over for the kids at a school. When voting becomes this granular, not surprisingly you end up with low voter turnout. There’s a sweet spot in between though because most people don’t actually align with one party on every issue.

Societies are complex and there’s a variety of issues where you could delegate your vote to an individual who you feel aligns most closely with your view on that issue. For example, I feel very strongly about civil liberties, but at the same time I’m not a libertarian in the sense that people that can’t provide for themselves shouldn’t be cared for. There should be a social security net but that’s no reason to take civil liberties away from people. I would like to be able to delegate my vote to another person to decide on the science policy we have or the foreign policy that we have. And I think this is where we need to get to with governance in web3.

I think in terms of society at large this will take some time, but web3’s main advantage is that we can run many experiments in parallel and iterate way faster than traditional societies, and you don’t have to have just one system you can support.

Angus: That’s a great call out. And you can have multiple systems that allow different tracks for different endeavours. Does GnosisDAO already have different ways of voting for different initiatives?

Friederike: Not yet. We’re talking about this right now. 

One type of voting we’re looking at is assent-based voting, where a number of people trusted by the DAO can propose an action that will be automatically approved unless there’s a veto of the proposal.

Often there are day-to-day proposals that are important for the DAO, like ensuring contributors are paid, that almost always go through. The sheer volume of these small proposals, though, contributes to voter apathy. It becomes a part-time job just to keep up with governance for a single DAO.

Angus: So, you have low thresholds for proposals that are repetitive and not contentious and high thresholds for proposals that are significant and could be contentious?

Friederike: Yes. These kinds of changes to DAO governance are obviously very simple but they can be made arbitrarily complex because all these voting patterns need to be hard coded. It has to be decided upfront what’s contentious and what’s not. How will each of the voting patterns be changeable? Which ones will require a super majority vote to change and so on.
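This assent-based (often called optimistic) pattern is easy to express in code. The sketch below is illustrative only; the challenge window and veto threshold are invented numbers, not GnosisDAO parameters.

```python
# A toy model of assent-based voting: a trusted proposer queues an action,
# and it executes automatically after a challenge window unless vetoed.
from dataclasses import dataclass, field

@dataclass
class OptimisticProposal:
    description: str
    challenge_window_hours: int = 72   # assumed window, not a real DAO setting
    veto_threshold: int = 3            # e.g. any 3 designated vetoers can block
    vetoes: set = field(default_factory=set)

    def veto(self, member: str) -> None:
        self.vetoes.add(member)

    def outcome(self, hours_elapsed: int) -> str:
        if len(self.vetoes) >= self.veto_threshold:
            return "rejected: vetoed, escalate to a full token vote"
        if hours_elapsed >= self.challenge_window_hours:
            return "executed: window passed with no veto"
        return "pending"

p = OptimisticProposal("pay monthly contributor invoices")
print(p.outcome(hours_elapsed=80))   # executed: window passed with no veto
```

The "hard coded" trade-off Friederike mentions shows up directly: which proposals use this low-friction path, and who may veto, must be decided upfront.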

Angus: Is Gnosis planning for a multi-ecosystem future with cross chain composability or does your focus just extend to the broader Ethereum ecosystem?

Friederike: We’re firm believers in the future of the EVM and the EVM chain. Interoperability is a lot easier between EVM based chains and they have such a large part of the developer mind share. The Cosmos and Polkadot ecosystems for example obviously have smart developers but there’s nowhere near the amount of depth in tooling for these ecosystems. I saw a graph recently in The Block about how much is spent across ecosystems in total and how much is spent per developer. For EVM the cost spent per developer was the lowest because there are such a large number of developers already building on EVM chains.

Angus: A large portion of the Polkadot ecosystem is building on EVM as well. They also have the option of offering WASM at the same time. Doesn’t that make them competitors?

Friederike: No, Substrate is different. It’s not a bad system. It’s well designed and in some respects it’s better than the Ethereum system. But it’s difficult to build on and it has a steep learning curve. Developers transitioning across have a hard time, and we think that EVM is sticky. That’s kind of our core hypothesis within Gnosis and the Gnosis ecosystem.

Late last year Gnosis merged with xDAI, which is currently a proof-of-authority chain very close to Ethereum. It’s been around since 2018 and is now known as Gnosis Chain. We also have another chain called Gnosis Beacon Chain, which is a proof-of-stake chain, like Ethereum will become after the merge, and GNO is the staking token. Our value proposition centres around EVM and being truly decentralised. Gnosis Beacon Chain is the second most decentralised chain after Ethereum.

Ethereum has around 400,000 validators and Gnosis Beacon Chain has 100,000 validators. I believe the next closest ecosystem after that is Cardano with about 3,000 validators. It’s a large jump, and if you think about security and security assumptions, decentralisation is important because otherwise you run the risk of collusion attacks.

The idea behind Gnosis Beacon Chain is that it’s maximally close to Ethereum, so that you can just port things over. If you look at how transaction fees have changed over the last couple of years, it’s crazy. Almost everything that ever lived on Ethereum has been priced out.

Angus: That’s why I started looking at alternative chains originally because setting up a Gnosis Safe was going to cost me $500 in gas fees.

Friederike: Exactly. Everything that’s not incredibly high value has been priced out, and I’m not just talking about non-financial interactions, I’m talking about lower-value transactions of less than $50,000. Gas fees may be only 50 basis points on this amount but it’s still something you’d rather not spend. Gnosis Beacon Chain is a home for all these projects that have been priced out of Ethereum.

I’m not a huge fan of DeFi to be honest. I know that we’ve built things that would be classified as DeFi. I’m a huge proponent of open finance. 

Opening avenues to people who previously didn’t have access, lowering barriers to entry and so on, I’m all for that. But the speculative nature of DeFi, where it’s money legos on money legos on money legos and then everything topples, produces a couple of percentage points of profit for the same 500 people in the world. That’s not what I get up for in the morning and it’s not what motivates me. Opening up applications like money markets to the wider population of the world is a good goal, but it’s not where DeFi is currently headed.

That’s why I like to differentiate between open finance and DeFi, because to me the motivation is different. I think you need those DeFi primitives in any ecosystem and they exist in the Gnosis ecosystem. But the projects the Gnosis Beacon Chain currently centres around don’t need them to be sustainable. The projects where there’s 5% more yield to be gained from yield farming happen elsewhere. I would also argue that yield farming is not very sustainable because the capital it attracts is inherently mercenary. I don’t think it’s a good use of money. We intend to win on culture. 1,500 DAOs live on top of Gnosis Beacon Chain, as do all major Unconditional Basic Income (UBI) projects and payment networks. This is where we’re moving at Gnosis. In principle it’s a general-purpose direction, but we’re absolutely not headed in the Fantom direction. We’re very much prioritising a social direction of grounded use cases.

Angus: One of the messages I heard repeatedly at Polkadot’s conference last week in Berlin was the need for the collective effort to shift to solving real world problems. What do you see as the key challenges to making this shift?

Friederike: There’s a lot of crypto founders out there that believe in society and making the world a better place. I do think this has been watered down a bit over the last bull market, which is why I’m looking forward to the bear market because it clears out some of those projects that are only chasing the money.

Angus: A lot of projects talk about banking the unbanked, but I don’t think we’re any closer to it. We may still be 5 or 10 years away from achieving that.

Friederike: I agree, and that’s where Gnosis Beacon Chain comes in. We also want to bank the unbanked and one of the core tenets of this is the UBI.

Angus: Do you have a UBI in Germany or in Europe?

Friederike: We don’t. We have lots of initiatives that push for it. We have a social security net and a minimum wage, but it’s not unconditional. You have to prove that you go to job interviews and so on, and you stop getting it when you get a job. The idea behind a UBI is that we don’t have a resource problem, we have a distribution problem. If you look at how much humanity produces, in principle it’s enough for everyone. Somehow about 30% of what’s produced goes to waste for no good reason, which leaves you with a distribution problem.

I think that in a world that will change substantially over the next 20 to 30 years, a lot of existing jobs are going to be made redundant. I think this is necessary, and it’s wonderful that labour-intensive, repetitive jobs are being made redundant. It frees up the people that used to do these jobs to do more meaningful things. The UBI is the necessary precondition for this to happen.

Angus: What real world problems would you regard as the lowest hanging fruit for web3 entrepreneurs to consider?

Friederike: Payments. It’s funny because we’ve been saying that for 10 years. We’ve always said that you’ll pay for your coffee with Bitcoin, but no one actually does this because it’s too expensive. If you look at the current costs of transaction processing for everyday payments, like buying lunch for your seven-year-old, you’re charged 2–3% in card fees, even if it’s a debit card. This is the lowest hanging fruit that scales up in a meaningful way. We already see this with remittances: a significant amount of remittances and cross-border payments are actually done in stablecoins.

Angus: It’s a great onboarding process for people as well to web3. With your COO hat on, where do you find a need for new web3 software to fill gaps or completely replace web2 tools?

Friederike: This is a good question. The standard answer would be everything where you transact value, because this is an area where the cost of transactions has, in theory, been lowered by web3, setting aside skyrocketing transaction fees, which are a technical problem that can and will be solved. If you look at web 2.0, you can buy things on the internet, but it always hinges on an intermediary being involved that you pay with your money or your data. Not having to do that is one of the core goals of web3.

Angus: Much of DeFi, as it stands, is not very decentralised. Some of this may be explained by the particular evolution of a protocol; in other cases it could be explained by the difficulty of decentralising everything. Do you think this points to a future where hybrid models play a larger role for longer in the development of web3?

Friederike: I think hybrid models are hard because of regulation. I agree that many projects that say they’re completely decentralised are in reality two 20-something guys with a multisig. It’s still easier to be completely decentralised, because otherwise you fall under the purview of one regulator or another. The only way you avoid that is building completely decentralised systems. It’s getting easier though.

A couple of years ago, the idea that a DAO could own a domain name and have votes on things that are automatically executed on chain would have been preposterous. 

There’s no way a DAO would’ve been able to host something with an ENS and IPFS content. It’s all new. So much has happened and doing things in a truly decentralised way has become easier over the last couple of years from an engineering standpoint.

Angus: What’s the easiest way to onboard people to web3 and how do you think it’s most likely to happen?

Friederike: I don’t think it will be driven by consumers. It will be driven by businesses offering alternative payment systems that reduce payment fees by 2%. Consumers will simply vote with their feet. People don’t know how it works now on web 2.0 and I think it’s going to be the same for blockchain.

Angus: What are some of the pathways that you see for decentralised offerings to start to provide the infrastructure for centralised businesses?

Friederike: There will be decentralised payment rails and transaction settlement rails and everyone will use it. 

What people don’t realise, when something is truly decentralised and belongs to no one in principle, it belongs to everyone. This is the magic of decentralisation.

This allows people who are in principle competitors to coordinate on a much more efficient layer, because there is this impartial layer that belongs to everyone. I think this will be the direction things head in.

Angus: Other than Gnosis, who do you regard as the leading innovators in DeFi right now?

Friederike: In terms of governance, I think Maker. They’ve been decentralised for such a long time, much longer than anyone else. I always have one eye on Maker governance. I think there’s tons of innovations happening in the DAO space, too many to keep up with.

We have good relationships with the DAOs on the Gnosis Beacon Chain, but we don’t even know all of them when there are more than 1,500 individual DAOs. That’s kind of the beauty of it as well. Everyone does their own thing.


The race to create a DEX that enables cross chain swaps

In a recent trip to Berlin for Polkadot Decoded, the Polkadot ecosystem’s 2022 conference, I made sure I sat down with the CTO of Chainflip, Tom Nash. I’m very appreciative that he chose to give up his time to share his thoughts with Auxilio.

Aug 8, 2022 • 14 mins

Author: Angus Mackay

To introduce Chainflip before we dive into the detail, they’re a decentralised, trustless protocol that enables cross chain swaps between different blockchains and ecosystems. To rephrase that in terms of the customer value proposition, they will potentially allow a user to instantly source the best deal, and the best customer experience to swap crypto assets across multiple ecosystems (Ethereum, Polkadot, Cosmos etc.). It may not sound like it, but it’s a seriously ambitious undertaking that requires some serious tech problem solving.

Angus: What makes Chainflip a standout project are the choices you’ve made to integrate technologies across the Ethereum and Polkadot ecosystems, and to not become a Parachain. Can you give us the benefit of your thinking?

Tom: We’ve amalgamated a bunch of best-in-class technologies to limit what we need to build, so we can produce something that fulfils the specific use case we see the opportunity to create.

There are a lot of choices that are really easy to make, and you’re encouraged to make them: build on Substrate, become a Parachain, enjoy shared security, don’t worry about building a validator community, don’t worry about how to incentivise people in an effective way. Many of these choices didn’t really make sense for us.

We don’t benefit from the shared security of Polkadot because we still require validators on the Chainflip network to be staked. You can’t run an exchange with $10bn worth of liquidity if you’ve only got $10m worth of collateral, because you immediately provide an incentive for people to buy up all of the collateral to take control of the funds. So Chainflip requires validators to be staked and collateralised.

Whilst you can do that with Polkadot and your own Parachain, Collators and Cumulus, it certainly doesn’t make things any simpler for us. In fact it adds a lot of complexity.

Angus: That’s interesting. As a non-technical person you can’t see that. Can you try to explain why?

Tom: Sure. There’s usually one aspect to blockchain security, and it’s effectively the security of the state transition process. A blockchain is a big database, and you want to make sure that all of the writes to that database are authenticated correctly and follow certain rules. The security of the state transition function is usually provided by the collective stake of the network. In Ethereum’s case this is a bunch of GPUs mining away. The same goes for Bitcoin.

In Polkadot’s case it’s a bunch of people sitting on loads of DOT who don’t want their DOT to devalue. Chainflip also has that task: we need security of our state transition function, but we also require security of the collateral. So our validators collectively own Chainflip’s threshold signature wallets, and we require that these validators have no incentive to collude for the purposes of stealing those funds.

Now, the shared security of Polkadot is not tied to the security of those funds. If Chainflip were to leverage the shared security of Polkadot, we’d be delegating that stake to all of the people who hold DOT, and the people who hold DOT are not necessarily the same people being incentivised to provide liquidity on the exchange. If we delegated all of our security to DOT validators, our validators would be a different set of people, like collators in the Polkadot ecosystem, and the collators themselves would have no stake. The collators just roll up the blocks and post them to Polkadot.

You can force collators to stake, but then we’re going back to square one. Why would we use collators and a Parachain if we get no real benefit from XCM, which we don’t? We’re building something at a different layer of abstraction, and if we want to support the long tail of XCM assets, we can just build a front-end integration. But the long tail of asset support for Polkadot is not a path that Chainflip wants to go down. You fragment liquidity on the exchange and you force more collateral to be deposited in order to support that liquidity. It doesn’t really make sense for us.

“The golden goose for Chainflip is the chains where there is no decentralised liquidity solution at the moment. Chains like Bitcoin, Monero, Zcash, some of the bigger ones that have been left behind by the whole DeFi movement.”

Angus: If I rephrase that in simple terms, a validator in the Chainflip network has two jobs to do – securing the network and securing the liquidity of the network.

Tom: That’s right. Anyone can provide liquidity to the exchange when they trade but the validators that run the network are the ones providing security for that liquidity in two ways. They are one of 150 validators with custodial access to the liquidity and they also secure the state transitions for the AMM.

As I mentioned earlier, if you have $10bn of liquidity, you need some function of that amount as collateral in order to ensure you’re not a honeypot or a target for an economic attack. So if Chainflip were to support all of the Uniswap tokens, for example, as first-party tokens, and people could send those tokens to Chainflip, then you balloon the amount of liquidity you’re able to accept and you balloon the amount of liquidity the exchange and the validators are collectively responsible for.

If you do that, you also have to balloon the value of our FLIP token, otherwise the validators are holding much less FLIP than the liquidity they’re proportionately securing. If they have $1 of FLIP for every $10 of liquidity they’re securing, things start to look a little out of balance.
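To make the economics concrete, here is a minimal sketch in Python of the honeypot check Tom is describing. The function name, the two-thirds collusion threshold and the dollar figures are illustrative assumptions rather than Chainflip parameters; the point is simply that the value a colluding set of validators could steal has to stay below the stake they would forfeit.

```python
# Hypothetical sketch of the collateral-versus-liquidity check described above.
# Threshold and figures are illustrative, not Chainflip's actual parameters.

def is_honeypot(total_liquidity_usd: float,
                staked_collateral_usd: float,
                collusion_threshold: float = 2 / 3) -> bool:
    """Return True if colluding validators would gain more than they lose."""
    stealable = total_liquidity_usd                                # value a colluding supermajority could drain
    stake_at_risk = staked_collateral_usd * collusion_threshold    # stake that share of validators would forfeit
    return stealable > stake_at_risk

# $10bn of liquidity backed by only $10m of collateral is the imbalance Tom warns about.
print(is_honeypot(10_000_000_000, 10_000_000))        # True  - an economic attack pays
print(is_honeypot(10_000_000_000, 20_000_000_000))    # False - collateral outweighs the prize
```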

Angus: I read in your Docs that the limit of the FROST signature scheme you’re using as a custody solution is 150 signatories. Do you expect this to increase as you grow?

Tom: The long-term goal would be to scale that number, and it’s certainly not a theoretical maximum. It’s a threshold that’s been chosen because it will provide the performance the exchange needs. If you go lower you decrease the level of security; if you go higher you decrease the amount of throughput. So it’s roughly in the Goldilocks zone.

Angus: You’ve chosen not to use the Polkadot messaging protocol (XCM) to facilitate swaps on the Polkadot network or the IBC messaging protocol on the Cosmos chain. Can you tell us what you’re doing instead?

Tom: It doesn’t really feel like it makes sense for us to muck around with the XCMs and IBCs of the world at the moment. I’d love those ecosystems to mature and I’d love for it to be really easy for us to plug into them. You’ve only just started to see XCM channels opened up between Parachains on Polkadot, so it’s very early days.

We’ve been building Chainflip for a year and a bit now, and it’s just come too late. XCM wasn’t available when we started and there were no plans for it either, that we were aware of. If we’d started around now, or in six months’ time, when it’s very clear how you could leverage XCM, how you could leverage Cosmos’ messaging protocol (IBC), and there’s a bunch of chains and a bunch of projects that support them, maybe then it would make sense. Other projects will have to deal with this whole notion of cross-chain communication between something like IBC and XCM.

We’re kind of fundamentally solving a different problem. Chainflip aims to solve the problem of swapping Bitcoin for Ether trustlessly. Projects that use XCM solve the problem of swapping Bitcoin for Polkadot trustlessly. So we’re not really competing. Our competition is the centralised spot markets for these assets, like Binance, Coinbase (Pro), Kraken and other centralised exchanges.

Angus: Ultimately you’re targeting a better user experience than centralised exchanges. Do you expect many to leverage XCM, IBC and other messaging protocols to compete with you?

Tom: Maybe there’ll be a bunch of exchanges that leverage XCM or IBC to do cross chain swaps. It will be very interesting to see if that happens. I suspect that the architecture of a DEX on top of XCM is extremely complicated. You’re going to need lots of oracles. You need lots of people to tell you the price. I’m skeptical and I haven’t seen it yet.

Layer Zero is a really interesting project and they’re most likely to hit our orbit first. They recently released their cross chain messaging tech and it’s cool. It has its faults and its flaws. Again, it doesn’t solve the problem of swapping assets from chain to chain, but I think that’s the likely direction they’re headed and I’d be surprised if they’re not. One of the problems they don’t solve is custody. They claim their technology can be deployed across all these different types of crypto base layers, like Bitcoin for example. I’m certainly really interested to see what they produce next.

Angus: Your vault rotation and creation of new authority sets sound like a computation-heavy process. Is it happening a lot, and how do you reduce the need for the process?

Tom: Yes, it’s quite inefficient. Producing a new aggregate public key that these 150 nodes have collectively derived and agreed upon takes about 90 seconds. It depends on the cryptography that’s being used under the hood, but it’s certainly not cheap, which is why we don’t want it happening all of the time. The process is initiated when one of two things happens: either a certain amount of time elapses or a certain number of nodes go offline.

So every 28 days, which is probably the right amount of time, a new set of validators is chosen as auction winners. They generate a key, we perform the handover from the old key to the new key, and we have to do that across every single chain that we’re integrated with. Once that process is complete, control has been completely handed over to these new validators for another 28 days.

In the alternative scenario, where Chainflip notices that 20% of validators have dropped offline, we want to avoid a situation where we don’t have at least 100 nodes (or 66% of the 150 nodes) online to reach the threshold to sign transactions, which would potentially leave all funds trapped forever. So we kick off another auction immediately to replace the offline nodes with new validators, ensuring we have a healthy set of nodes again.
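As a rough illustration of the two triggers just described, the sketch below rotates the vaults either when the 28-day epoch elapses or when enough of the 150 validators drop offline to threaten the 100-signer threshold. The constants come from the interview; the function and constant names are illustrative only.

```python
# Minimal sketch of the vault rotation triggers described above.
# Constants are taken from the interview; names and structure are illustrative.

AUTHORITY_SET_SIZE = 150
SIGNING_THRESHOLD = 100          # roughly 66% of validators must be online to sign
ROTATION_INTERVAL_DAYS = 28      # scheduled rotation period
OFFLINE_TRIGGER_RATIO = 0.20     # start an emergency rotation if 20% drop offline

def should_rotate_vaults(days_since_last_rotation: int, online_nodes: int) -> bool:
    """Decide whether to kick off a new auction, keygen and key handover."""
    scheduled = days_since_last_rotation >= ROTATION_INTERVAL_DAYS
    offline_nodes = AUTHORITY_SET_SIZE - online_nodes
    emergency = offline_nodes >= AUTHORITY_SET_SIZE * OFFLINE_TRIGGER_RATIO
    return scheduled or emergency

print(should_rotate_vaults(days_since_last_rotation=28, online_nodes=150))  # True  (epoch elapsed)
print(should_rotate_vaults(days_since_last_rotation=3, online_nodes=118))   # True  (32 of 150 offline)
print(should_rotate_vaults(days_since_last_rotation=3, online_nodes=145))   # False
```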

Angus: You’ve said that there are a lot of opportunities for AMMs that run on a custom execution environment. I was wondering if you could explain that to us?

Tom: So Ethereum and other EVM-like chains are Turing-complete by design. They’re arbitrary computation platforms, and as a result they’re not really efficient. They’re generalised computing machines, so you can’t really push them to their limits. It’s like the embedded systems world: a microcontroller versus an integrated circuit.

A custom execution environment allows you to do a lot more to make the process efficient. Uniswap, for example, isn’t able to write any code that says “let’s tally up all the swaps in a particular block”. It can’t, because of the way the transactions are executed on the AMM; it doesn’t control the underlying execution there.

Chainflip can do that. We have our own validator network. We have our own Mempool. We have our own way of sequencing blocks and we can say we’re going to collect transactions for 10 minutes and then we’re going to match them against each other, and we can execute whatever delta there is on the exchange. So we have a lot more flexibility in that context than any EVM based exchange does.
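As a loose illustration of that flexibility, the sketch below collects the swaps from one batching window, matches buys against sells internally, and only pushes the net difference through the pool, which a contract executing each swap sequentially on the EVM could not do. The types and numbers are made up; this is illustrative pseudologic, not Chainflip’s actual engine.

```python
# Illustrative batch-netting logic, not Chainflip's actual matching engine.
from dataclasses import dataclass

@dataclass
class Swap:
    side: str      # "buy" or "sell" of the base asset
    amount: float  # size in units of the base asset

def net_batch(swaps: list[Swap]) -> float:
    """Net amount that actually needs to move the pool price.

    Internally matched volume (buys offset against sells) never touches the
    curve; only the residual delta is executed against the AMM.
    """
    buys = sum(s.amount for s in swaps if s.side == "buy")
    sells = sum(s.amount for s in swaps if s.side == "sell")
    return buys - sells   # positive: net buy pressure, negative: net sell

window = [Swap("buy", 5.0), Swap("sell", 3.0), Swap("buy", 1.0)]
print(net_batch(window))  # 3.0 -- only this delta hits the exchange
```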

That’s one of the reasons you’ve seen dYdX very recently come out and say they’re going to build their own blockchain. Everyone’s saying that rollups are the golden goose that’s going to fix everything in the Ethereum network, but they’ve realised that if they were to move to a rollup they still wouldn’t have much control over the underlying execution layer.

You still have to execute everything sequentially. You can do some funky stuff but ultimately you’re still at the whim of the EVM. And also they probably realise that even if they were to move to a rollup they still have to ask users to bridge their funds across to that rollup. And what’s the difference between bridging your funds to a rollup and bridging your funds to Cosmos – not much.

So why not give your users a very similar user experience and also have control over how trades are executed and sequenced. It just makes total sense.

Angus: Is your JIT (just-in-time) AMM using batching like Gnosis does, batching transactions to reduce MEV and standardise pricing?

Tom: Yes. We don’t execute everything sequentially as it comes in. We actually did this because when you’re working in a cross-chain or cross-ecosystem environment, some chains are slower than others. Bitcoin blocks appear every 10 minutes; Ethereum blocks appear every 15 seconds. If you have a pair between an Ethereum-based asset, say USDC, and Bitcoin, and you execute everything sequentially, you’re potentially receiving new USDC deposits every 15 seconds. If you receive a Bitcoin purchase every 15 seconds but a sale every 10 minutes, you have a chart that looks very wonky. It looks like a sawtooth wave: it goes up and to the right, then drops vertically, and the cycle repeats with a 10-minute frequency. That’s not particularly good for users, and it creates lots of weird incentives. For example, you want to be the first person to get your Bitcoin purchase in after the last Bitcoin block.

So what we do is collect all of those swaps, amalgamate them, and then execute them all at the same clearing price.
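To show how a single clearing price removes that ordering incentive, here is a toy example assuming a constant-product style pool; the pool sizes and deposit amounts are invented and this is not Chainflip’s actual pricing logic. Every swap collected in the window settles at the one price, whether it arrived just after a Bitcoin block or just before the next one.

```python
# Toy uniform clearing price for a batch against a constant-product pool.
# Numbers are invented; this is not Chainflip's actual pricing logic.

def uniform_clearing_price(pool_usdc: float, pool_btc: float,
                           net_usdc_in: float) -> float:
    """Price in USDC per BTC applied to every swap in the batch."""
    new_usdc = pool_usdc + net_usdc_in
    new_btc = pool_usdc * pool_btc / new_usdc     # keep x * y = k after the net trade
    btc_out = pool_btc - new_btc
    return net_usdc_in / btc_out if btc_out else pool_usdc / pool_btc

# 200,000 USDC of net buy pressure against a 20m USDC / 1,000 BTC pool:
# everyone in the batch pays the same ~20,200 USDC per BTC.
print(round(uniform_clearing_price(20_000_000, 1_000, 200_000), 2))
```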

Angus: That obviously eliminates the sandwiching that can occur between blocks as well?

Tom: Yes. You also limit opportunities for people that are witnessing upcoming transactions in the Mempool to benefit from trading at the right time and other volume based incentives.

Angus: Chainflip has said that there have been some examples of liquidity providers being incentivised on Uniswap v3 for their large orders. Is this concept largely untested outside of these use cases that you’re aware of?

Tom: It’s a good question. I think so. CowSwap does something very similar to what we want to do. It seems to be working pretty well, but not many people use it as a front end. CowSwap doesn’t work the same way as the JIT AMM, but it collects a bunch of orders and batches them up so everyone is given the same clearing price over a few blocks. In the context of just-in-time liquidity, I don’t believe we’ve seen it anywhere other than Uniswap v3. And that’s probably because v3 has a business source licence, so no one’s been able to copy it on EVM chains. And no one’s yet had the time to build and release an equivalent in a non-EVM execution environment.

Angus: Are there any scale challenges involved initially with providing a minimum level of liquidity for each pair?

Tom: The plan, loosely, is for funds from the LBP (liquidity bootstrapping pool) to be used to provide liquidity to begin with. Obviously there needs to be liquidity on the exchange to make it useful. As builders of the exchange we will certainly be helping as many people as we can to become proficient liquidity providers. We have a bunch of people lined up to provide liquidity on the exchange, and we’ll be helping them to make that profitable. We’ll be trying to drive as much volume as we can through swaps and so on. It remains to be seen exactly what the challenges will be, but given the nature of the exchange, I don’t think there will be too many, because of the amount of capital efficiency that we can provide.

I think the bigger challenge will be attracting the right volumes. I don’t think liquidity will be a problem to begin with. The challenge will be making Chainflip feel like a good home for that liquidity and for validators by growing our volume.

Angus: With the EU intending to introduce MiCA regulations as early as 2024, how do you anticipate this will impact the value proposition of DeFi?

Tom: If it affects the value proposition of DeFi then that product is not DeFi. Maker, arguably one of the most successful products to exist in the DeFi ecosystem, is not going to fall victim to this problem. Anyone can build a front end for Chainflip and anyone can build an integration with it. User funds are not held custodially by any legal entity. Users don’t have to trust the bridge for any longer than their funds are passing through it. And it would be very, very hard to regulate the product.

Angus: What about if you have retail customers using the product or if you’re domiciled outside of the net?

Tom: If Chainflip Labs, the company, is running a method of interfacing with the exchange, I’m sure there’s probably a bunch of arguments that you can make that Chainflip is providing financial services. If you’re that way inclined you can probably lobby for Chainflip to be caught in the regulatory net. Chainflip doesn’t have to run that front end. Chainflip can ask people to build it. It’s then up to them if they host it in Singapore or Dubai or another country that’s more crypto friendly.

Ultimately Chainflip Labs could end up interfacing with regulators, but the product itself has been released as an open source piece of software and it can’t be stopped by regulation.

Angus: You said at the end of your roadmap that this is just the beginning. What are some of the ways you envisage expanding beyond the use cases of cross chain swaps?

Tom: That’s a good question. At the moment, I’m fascinated by the tech that we’re building. The threshold signature scheme that we built and the multi-party computation protocol surely have use cases outside of cryptocurrency. I’m more interested in that than I am in figuring out how Chainflip could be used for NFTs.

Angus: Do you see it having applications for B2B relationships?

Tom: Business to business relationships are incredibly inefficient, or at least it seems that way. I think anywhere there is a shared desire to produce common agreements between a set of untrusted parties could be a potential use case for our technology.

It’s extremely efficient, pretty robust, lightweight, all things considered. Tackling the shared custody problem is easily one of the most interesting things about the problem we’re solving.

Angus: What problems do you see crying out to be solved?

Tom: Privacy. Privacy of the underlying history of the blockchain, the underlying transactions. The average web3 user has multiple wallets, and over a period of 10 years there are lots of transactions that have potentially been made with those wallets. If you’re still using those wallets, all the transactions from 10 years ago are still there. That might be desirable, but I think it probably isn’t for most people. If you could go and wipe your transaction history, or conceal it moving forward, that would be great. If you buy a questionable NFT, you might not want your grandkids to know. The right to be forgotten is really interesting.

Technology is moving in a direction where you don’t have that right. It wasn’t codified in from the start. We don’t have a bill of digital rights, and so there’s a lot of information out there about each individual person that they probably don’t even know about. It will become more of a talking point for my kids and the next generation. I see that as a big greenfield opportunity and a big selling point for future technology companies.

You’re seeing it a little bit now with people wanting to shift towards greater privacy, but it’s slow. Signal’s had its time in the sun over the past year. Email providers have as well, like Proton Mail, though even they’ve ended up helping law enforcement recently. What it comes down to, though, is that it’s really difficult to solve this problem in the first place. So I think that zero knowledge technology is going to play a huge role in that. I hope the tech industry over the next 10 years tends towards incorporating more of this technology into its products and services.
