
The history of US credit surveillance and its impact on privacy

I recently read a fantastic book by Josh Lauer titled Creditworthy: A History of Consumer Surveillance and Financial Identity in America. It’s easy to assume that privacy standards started to be whittled away with the broad take-up of the internet and social media. The reality is starkly different.

May 25, 2023 • 18 mins

Author: Angus Mackay

Commercial Credit (1840 – 1890)

Prior to the introduction of any formal commercial credit system across the US, credit was extended primarily on the basis of character, the first of the "three Cs" alongside capacity and capital. Every favourable moral measure of a merchant increased his credit: a reputation for honesty, work ethic, perseverance, industry expertise, marital status, religion and so on. This trust was a function of familiarity, creditworthiness and reputation, and it stood in stark contrast to the system of credit in Europe, which hinged on collateral, land and inherited fortunes.

The speed of America's capitalist transformation in the early nineteenth century, the shortages of circulating currency, and the increasingly far-flung and complex credit relationships that supported it meant the liberality of the American credit system was a practical necessity.

By the 1830s, as migration brought growing numbers to the cities and populations spread inland, traditional forms of credit assessment were breaking down. This was a major problem for city merchants, especially importers, manufacturers, wholesalers, and other middlemen who sold to country retailers and tradespeople each spring and fall. With so much business at stake there was considerable pressure to trust people of unknown and unverifiable reputation.

The Panic of 1837 underscored the risks. A cascade of defaulting debts wiped out investments, wrecked businesses and crippled the American economy. Lewis Tappan, who ran a silk wholesaling firm in New York, was bankrupted by uncollectable debts. But he also saw an opportunity to transform the credit system, and launched the Mercantile Agency in 1841 with the goal of compiling detailed information about business owners across the country. The Mercantile Agency quickly became synonymous with commercial credit and served as a model for future competitors.

Tappan’s innovation was to transform the business community’s collective knowledge into a centralised, subscription-based reporting service. Key to the agency’s success was the use of unpaid correspondents instead of expensive lone travelling reporters. Most members of this vast network were attorneys who filed reports in exchange for referrals to prosecute debt collections in their communities. The agency had 300 correspondents by 1844, 700 by 1846 and more than 10,000 by the early 1870s across nearly 30 offices, including several in Canada and one in London.

While routine correspondence between reporters, branches and the main office was completed by mail to limit costs, news of "serious embarrassments, assignments and failures" was immediately telegraphed. Until coded reference books appeared in the late 1850s, subscribers – wholesalers, merchants, banks, and insurance companies – received credit information only in the offices of the Mercantile Agency. Seeking a competitive edge, at least one major wholesaler strung its own telegraph line to one of the mercantile agencies, establishing the first system of real-time credit authorisation.

By the late nineteenth century, specialised reporting firms had also formed to serve a variety of industries, including manufacturers of iron and steel, jewellery, furniture, shoe and leather goods, and construction materials. A Chicago journalist in 1896 wrote of "Barry's Book", which listed 35,000 retail lumbermen and 2,000 wholesale dealers: "it's so thorough and comprehensive that a retail lumberman out in a Dakota village can't stand the milkman off for half a dollar's worth of tickets without every wholesale lumberman in the country being apprised of the fact before night".

There were major problems with these early systems though. The quality of the intelligence was often an issue: much of it amounted to little more than hearsay, it could be corrupted by young lawyers seeking to curry favour with the mercantile agency to win work, and it was often more than 18 years old. Converting qualitative reports into quantitative fact could be difficult, as comments were open to interpretation and lacked context. While it was easy to identify the extremes of the business community, it was the vast middle range that proved troublesome. Importantly, if you were the subject of less-than-favourable character reports there was very little you could do about it. There were no laws to protect you, no way to view or appeal your own report, and libel suits at the time were lengthy endeavours that were all ultimately decided in favour of the agencies. The introduction of full-time credit reporters in the 1860s went some way towards solving these issues. Efforts to compel business owners to share signed financial statements were resisted or ignored well into the 1890s.

Tappan sold out of the agency to several associates in 1854 and the firm was renamed R. G. Dun and Company. By 1868 Dun's chief rival was the Bradstreet Company. The two companies competed aggressively until 1933, when they merged to become Dun & Bradstreet (whose Australian and New Zealand arm now trades as illion).

The credit bureaus and reporting firms together formed an elaborate system of mass surveillance. No government agency in America had anything like the intelligence-gathering reach of the mercantile agencies at the time. To compound this, the information wasn't being collected for internal administration but to sell to anyone who had a use for it.

Consumer Credit (1890 – 1950)

When the norms of commercial credit were transferred to consumer credit during the late nineteenth century, the cocoon of privacy surrounding personal finance was similarly punctured. Retail credit reporting was almost entirely based upon an assessment of the borrower's character and financial behaviour. Many of these early firms didn't last long, though, as the cost of collecting information was high and keeping reference books accurate and up to date took gargantuan effort. They also couldn't control the books being shared and were reliant on small businesses whose record keeping was often very poor.

The financial crisis of 1893 provided the impetus for professionalised credit management, which in turn had an enormous influence on the institutionalisation of consumer credit surveillance. Large retailers introduced credit managers who codified principles and standards for supervising customer accounts. They increasingly leveraged telephones for credit checks, and later the telautograph (an early fax machine) and teletype technologies, to improve the speed, accuracy and efficiency of credit departments and credit bureaus. The National Association of Credit Men was founded in 1896 but never numbered more than 200 members, primarily because of the natural distrust amongst competing retailers. It did spawn another organisation built around the profession, the Retail Credit Men's National Association, which grew to more than 10,000 members across 40 states and 230 cities by 1921. By the mid-1920s there were credit files on tens of millions of Americans.

As judge and jury in lending decisions, credit managers wielded enormous power over the lives and fortunes of consumers as they balanced the expectations of salesmen with keeping losses at or below 1 percent of sales. Credit professionals were expected to have a grasp of every sphere of activity with even the slightest bearing on the future financial condition of their customers, from local weather to national monetary policy. Fear was the primary motivator, but credit managers also had to keep the customer's goodwill, even making themselves a confidant and advisor so that customers would voluntarily keep them informed of their "condition".

A retail credit department wouldn't extend credit until the customer had an interview with its credit manager. These interviews became referendums on a customer's morality and social standing. A national association's educational director observed: "I know of no single thing, save the churches, which has so splendid a moral influence in the community as does a properly organised and effectively operated credit bureau". To be refused credit implied a customer was undeserving, deficient or suspect. When credit was granted, a credit limit was assigned without the customer's knowledge and could be adjusted over time to control credit risk and reduce the need for constant credit approvals.

Unlike the highly secretive credit reporting of the 21st century, early credit bureaus and associations went to great lengths to publicise their work. The message of credit morality, delivered via mass media and in countless private consultations throughout the country, equated credit and character in explicit terms. Credit managers used World War 1 to tie their message of credit responsibility to national allegiance and to introduce thirty-day billing cycles. "Prompt Pay" and "Pay Up" campaigns were run in cities and towns across America to amplify these messages. Over the next decade the messaging shifted to one of "credit consciousness".

In addition to their regular customer credit files, many credit departments and bureaus kept "watchdog" cabinets. These files included compromising information about divorces, bankruptcies, lawsuits, and accounts of irresponsible behaviour sourced from newspapers, court records and public notices. This information might not have had an immediate impact on a customer's credit standing, but it could be used against them in the future.

By the 1920s, credit managers were no longer simply tracking customers and making authorisations; they were also mining their rich repositories of customer data for targeted sales promotions. In this way systematic credit management began to develop into an instrument of social classification and control with broader implications.

The fact that credit reporting had generated so little public reaction up to this point was surprising. Occupation, with its connection to both the amount and regularity of income, favoured salaried professionals with an established credit history and good references. Geography and buying patterns were also used to make generalisations, and racial and ethnic prejudice was codified into standard operating metrics to discriminate amongst applicants. All these early efforts were blunt, punitive tools used to identify the best and worst cohorts in the community. They also created, supported and sustained social prejudice in many of its forms across America.

By the 1930s, credit managers began to realise the value of "customer control". Their most profitable credit risks were already on their books, and it was far more difficult and expensive to acquire new customers. Customer control developed in credit departments rather than marketing departments because the credit departments held all the customer insights. Personalised letter campaigns enjoyed high conversion rates and were just as effective at re-engaging inactive customers as at capturing a higher share of wallet from the best customers across more departments. As customer control became more sophisticated, retailers attempted to further differentiate their credit customers by price line, taste and average expenditure to regain the personal touch that had been lost in the development of impersonal mass retailing.

Despite the prevalence of a national credit reporting system, many retailers in the 1940s continued to grant credit on the basis of an honest face or a personal reference. Many retailers didn’t have their own credit departments or full-time credit managers. Barron’s reported in 1938 that only 10% of small business owners employed a credit manager, though 91% offered credit terms to their customers. By the start of World War 2 credit bureaus had files on 60 million people.

Spies and Social Warriors

Credit bureaus were essential to the banks and oil companies that launched aggressive credit card programs in the 1950s, which meant the credit departments of large retailers no longer had a monopoly on insights into their customers' buying habits.

By the early twentieth century two basic types of credit reports dominated. Trade clearances were the least expensive and most time-sensitive reports and included only basic identifying information like name, spouse's name, current address, and ledger data consisting of a list of open accounts, maximum balances and the customer's repayment history. Antecedent reports were comprehensive summaries of an individual's personal, financial and employment history. Typical information included race, age, marital status, number of dependents, current and former employers and positions held, salary and other sources of income, bank account information, any bankruptcy or legal actions, and whether they owned or rented. They also included terse commentary on home life, personality, reputation etc.

There were also two styles of data collection. The Retail Credit Company (RCC), a national reporting agency working primarily in the insurance business, employed its own local investigators and would later be renamed Equifax in 1976. Its approach relied on direct investigation via interviews rather than shared ledger data, and extended to family members, former employers, references, fellow club members, neighbours and former neighbours, and financial and professional contacts. By 1968 the RCC had more than 30 branches and employed more than 6,300 "inspectors".

In contrast the Association of Credit Bureaus of America (ACB of A) was a network of independent affiliated members and relied on customer account information and payment histories submitted to the bureaus by local creditors. They also conducted direct investigations to flesh out new or old files and spoke to employers, banks and landlords. Landlords were especially prized because they often included comments on personal habits and manners as well as rent payment records, number of dependents and history at the address. They would also speak with neighbours, the local grocer or drug store to fill in any gaps.

Bureau investigators were advised to cultivate trusted informants, to meet privately to ensure confidentiality, and to conceal notebooks or pre-printed forms that might arouse suspicion. They also created derogatory reports, compiled from press clippings, public records and member feedback, containing any information that could be used to discredit an applicant. These reports were used to fill the gaps in real-time information and were cheaper than ordering a full report. They included news clippings on accidents and calamities and even stillborn or premature births, on the logic that such events usually involved new debts to doctors, hospitals and morticians. Bureau clerks would also descend on courthouses and municipal offices daily to copy all records pertaining to real estate, civil and criminal lawsuits, tax assessments and liens, bankruptcies, indictments and arrests. This air of espionage would come under harsh criticism during the late 1960s.

Many bureaus systematically repackaged and resold their information to third parties – insurance companies, employers, car dealers, landlords and law enforcement. Employer reports could be sold for more than three times the price of a credit report. Some bureaus began branching into new promotional services, including pre-screening programs and the sale of customer lists.

Many post-war bureaus also ran promotional businesses by offering "newcomer" services based on the Welcome Wagon model, thinly disguised as charitable civic associations. Hostesses would greet a new family when they moved into town and offer them helpful information on schools, shopping, churches and other community amenities. The primary purpose was to convince them to sign up for credit services in their new town. With either a completed application or a former address in hand, investigations could be made to determine whether complimentary charge accounts would be offered. In some places these services were integrated into local credit card programs, often run by the credit bureaus themselves.

Dallas' Chilton bureau was unique in its aggressive effort to marry credit surveillance with credit promotion. It issued its own credit card in 1958 and sold pre-screened lists of top-rated customers that could be sorted by mailing zone, then by income, age, profession, and renter or property owner. This type of targeted marketing and consumer analytics was far ahead of its time and is now a core service offered by all three major credit bureaus. The Dallas bureau's rapid expansion in the 1960s, with the purchase of more than 40 bureaus and the computerisation of its credit reporting network and list marketing business, eventually led to the creation of Experian.

Both credit reporting agencies and credit managers were motivated to maintain confidentiality. It protected their competitive positions and helped them avoid libel suits. Applicants would never be given a specific reason why they were rejected, only told that their credit report was incomplete or insufficient. They were also never able to see their own credit record, and any interview a credit manager granted to someone who wanted an explanation was used as another opportunity to add to the applicant's credit file.

However, in their perpetual quest to vanquish deadbeats and safeguard the nation's credit economy, bureau operators saw themselves as patriotic agents of social change. Bureau files were completely opened to government officials in the spirit of public service. This started shortly after World War 1 and turned into a paid relationship in 1937, when the US Department of Justice contracted with the National Consumer Credit Reporting Corporation on behalf of the FBI. A symbiotic relationship developed: bureaus were dependent on government agencies for access to courthouses and municipal offices, and public agencies relied on detailed investigative reports to run loan programs like those of the Federal Housing Administration and Veterans Affairs. Non-credit-granting agencies, like the FBI and IRS, can still get access to bureau files today with a court order.

The industry faced scrutiny from lawmakers and popular condemnation in the late 1960s. Surprisingly, the appropriateness of credit reporting's fundamental metrics wasn't brought into question. It was the lack of rules around the collection, sharing and correction of credit information that was the focus. The Fair Credit Reporting Act (FCRA) of 1970 and the Consumer Credit Protection Act of 1968 placed restrictions on these activities, but the underlying logic that a credit record was relevant to employment and insurance risks didn't change. The restrictions included:

  • Credit reports could only be purchased for a “legitimate business need”.
  • Bureaus had to disclose the “nature and substance” of an individual’s file as well as the sources if requested by the customer.
  • Adverse items were to be deleted after 7 years (14 years for bankruptcies).
  • If you were denied credit, insurance or employment, or your insurance premiums were raised on the basis of an adverse credit report, you had to be notified and given the contact information of the credit bureau.
  • Individuals for whom an investigative report was ordered had to be notified.
  • For employment reports, applicants had to be notified if the report included an adverse public record.

Serious problems quickly emerged around the interpretation of the FCRA that left a lot open to the discretion and abuse of credit bureaus. The law did even less to protect privacy because government agencies could still obtain basic information without a court order and the content of the bureau files was unrestricted. Since credit bureau violations were very difficult to prove, and long-standing privileged-communication doctrines protected them from defamation claims, bureaus had little fear of being sued and therefore no incentive to rein in their information collection practices.

Relevance was another problem for credit evaluation because it wasn't clear where financial information ended and non-financial information began; the two informed each other. This was only partially addressed by the Equal Credit Opportunity Act (ECOA) of 1974, which made it illegal to deny credit on the basis of gender or marital status. Race, religion and age could still be used to discriminate against applicants, even though many lenders had already removed these from their scoring models. This was complicated by an ECOA requirement to give each declined applicant a specific reason for rejection. Lenders argued it was impossible, but they were more concerned about defending some of the metrics they were relying on: if more metrics were excluded from their models it could hurt their profits. Even when gender or marital status was removed from models, numerous "secondary" variables offered reliable proxies for them.

Until the late 1980s, lenders relied on information collected from the credit application alone, primarily because it reduced their spending with the credit bureaus, even though bureau data offered an opportunity to reduce some of the discriminatory nature of credit decisions. The practice of excluding an applicant's past payment history from credit decisions came in for heavy criticism, which eventually led to its inclusion.

Computerisation (1950s & 60s)

The automation of banking and accounting during the late 1950s caused many to predict that a paperless office was on the horizon and a single identification card would replace checks and cash and link a person’s bank, credit and personal information. Although a checkless society was technically possible, the problem of personal identification was insurmountable. Mistaken identity and fraud had already become a major problem after World War 2. It also required a new high speed surveillance infrastructure to identify and monitor the nation’s population.

To combat these fraud issues, verification services started to proliferate:

  • Validator was adopted by Macy's in 1966 and allowed sales clerks to authorise credit purchases by entering a customer's account number into an electronic desktop device.
  • Telecredit was used by Carson, Pirie, Scott and was a combination touch tone phone and audio response unit connected to a computer.
  • Veri-Check was created by the Dallas Chilton bureau and combined a database of names, physical descriptions and up to date check cashing histories for local individuals.

The first mover in the computerisation of credit records was Credit Data Corporation (CDC), a private firm based in California. It was led by Harry Jordan, a graduate in biophysics who returned home to run the family business, the Michigan Merchants Credit Association, after his father's death in 1956. Rather than working to gain the cooperation of local bureaus across the country, he focused on the wealth and mobility of three regional zones that accounted for 43% of the US population: LA to San Francisco, Boston to Washington and Chicago to Buffalo.

CDC digitised its LA bureau at a cost of $3 million in 1965 using IBM 1401 computers leased at $20,000 per month. The bureau provided reports in 90 seconds at a cost of 63 cents per enquiry (33 cents if no record could be produced). The next year the cost of an enquiry fell to 22 cents. By 1967 CDC boasted state-wide coverage with more than 11 million records. The first step towards its goal of permitting rapid access to credit information from anywhere in the US came in 1969, when the New York office could remotely access credit data from LA.

While centralised reporting spelled the end for the local credit bureau, it was widely assumed in the mid-1960s that banks, retailers and credit card companies would be better positioned to dominate credit reporting. Retailers gradually joined banks as heavy computer users but were not at the forefront of the adoption of new technologies. Banks were far ahead of all other major stakeholders in the market for consumer credit information and had already revolutionised check clearing. What held bankers back, though, was their conservatism and a long history of strict client confidentiality. In contrast, the credit bureaus' only privacy concern was keeping information about their best customers away from competitors. From the beginning many banks declined to participate in local reporting ventures, and even when they did they shared only basic identifying information and ballpark figures. Instead, banks relied on their own in-house records and intrabank relationships for customer information.

The problem for the banks at the time, though, was the sheer volume of consumer credit that their customers were accumulating across charge accounts and credit cards. It was difficult to know a loan applicant's true financial position without engaging with a credit bureau. In 1965, ACB of A members received about 10% of their income from banks. The mysterious ascent of CDC can be explained by its main subscribers: banks, credit card-issuing oil companies, and national credit card issuers like American Express and Bank of America. CDC's decision to exclude any information about a person's race, religion, psychological profile or personality, medical records or gossip culled from neighbours was a crucial point of differentiation. The banking industry's defence of consumer privacy was quite forward-looking. This unfortunately didn't prevent a race to bring American consumers under an increasingly totalising system of private-sector surveillance.

There was a furious response across the credit bureau industry, which could see the potential to be cut out of the credit reporting loop altogether. In 1966, a new consortium, Computer Reporting Systems Inc., was formed in LA to help Southern California bureaus compete with CDC, and a few years later it was computerising and linking more than 40 bureaus in Arizona, Nevada and Southern California. By 1967, Credit Bureaus Inc. was computerising records across 40 bureaus in Oregon, California, Washington and Idaho. ACB of A's challenge was that computerisation meant centralisation. Regardless, the Dallas Chilton bureau had IBM 360 computers set up in 1966. The Dallas bureau alone reduced 1 million records stored in 300 m² of office space to a data cell the size of an office waste basket.

Computerisation also led to a new alphanumeric "common language" to describe creditworthiness that would address the inconsistencies of historical credit reporting. The most important enabler was COBOL (Common Business Oriented Language), which used a reduced character set and naturalistic English-language commands. There was still a common problem to be solved though: how to identify an individual. The use of the Social Security number by federal agencies was already expanding in the 1960s and no rules prevented its use outside of government. Its broader adoption was legitimised by the American Bankers Association when, in 1968, it recommended it be used for a nationwide personal identification system.

Credit Scoring

Statistical credit scoring developed in the late 1950s as business consultants and researchers began using computers to create sophisticated scoring systems for major banks and retailers. Computer-assisted credit scoring caused a fundamental shift in the concept and language of creditworthiness, even more so than computerised reporting. Interaction between credit managers and applicants ceased and risk management shifted from a focus on the individual to being a function of abstract statistical risk.

Fair, Isaac and Company brought credit scoring into mainstream commercial practice using the same discriminant analysis David Durand had applied in his 1941 study of consumer finance. The company was founded by two former analysts from the Stanford Research Institute: William Fair, an electrical engineer, and Earl Isaac, a mathematician. They got their break when one of the nation's leading consumer finance companies, American Investment Company (AIC), hired them to analyse its credit files.

General Electric Credit Corporation (GECC) had invested $125 million developing credit scoring by 1965, and by 1968 a third of the top 200 banks were using credit scoring and another third were considering it. The promise of credit scoring took time to realise, though, as the determinants of creditworthiness needed to be identified. Credit applications of 40 or more questions became fishing expeditions for worthy metrics. A second problem faced by early adopters was maintaining the validity of scoring systems across different populations. No two businesses were the same because no two businesses dealt with exactly the same borrowers. Even small behavioural or socioeconomic differences could skew results, at the risk of disastrous miscalculations. Over time, small changes in a statistical pool of customers caused a model to lose its predictive power, so continual resampling was necessary to adjust the models.
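To make the scorecard approach concrete, here is a minimal sketch in the spirit of early discriminant-analysis scoring. The attributes, point weights and cut-off below are invented for illustration and are not Durand's or Fair Isaac's actual values; the "resampling" described above amounts to periodically re-deriving these weights and the cut-off from a fresh sample of good and bad accounts.

```python
# A toy point-based scorecard in the spirit of early discriminant-analysis
# scoring. All attributes, weights and the cut-off are hypothetical.

SCORECARD = {
    "years_at_current_job": {(0, 2): 5, (2, 5): 15, (5, 100): 30},
    "owns_home":            {True: 25, False: 10},
    "has_bank_account":     {True: 20, False: 0},
    "past_delinquencies":   {(0, 1): 30, (1, 3): 10, (3, 100): 0},
}

CUT_OFF = 70  # applications scoring below this are referred or declined


def points(attribute: str, value) -> int:
    """Look up the points awarded for one attribute value."""
    table = SCORECARD[attribute]
    if isinstance(value, bool):
        return table[value]
    for (low, high), pts in table.items():
        if low <= value < high:
            return pts
    return 0


def score(applicant: dict) -> int:
    """Sum the points across all scorecard attributes."""
    return sum(points(attr, applicant[attr]) for attr in SCORECARD)


applicant = {
    "years_at_current_job": 3,
    "owns_home": False,
    "has_bank_account": True,
    "past_delinquencies": 0,
}

total = score(applicant)
print(total, "ACCEPT" if total >= CUT_OFF else "DECLINE")
# 3 yrs on the job (15) + renter (10) + bank account (20) + clean record (30) = 75 -> ACCEPT
```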

By converting credit decisions into uniform, quantitative rules, scoring systems also allowed managers to quickly generate reports that displayed patterns of profit or loss and the risk across a lender's various portfolios. The comprehensive management information system created by credit scoring almost rivalled its main function in importance. While risk-based interest rate pricing was explored, and was standard practice in the insurance industry, the real power of credit scoring came from allowing creditors to push credit risk to its furthest limit at the lowest margin, and from the development of "subprime" lending.

CDC was acquired in 1968 by TRW, a defence industry giant that engineered nuclear missiles and satellites and saw an opportunity to acquire a big proprietary database and a dominant market position. Over the next two decades, TRW and other large computerised bureaus would develop a dizzying array of credit screening and marketing programs. The FCRA, with its modest requirements and loose definitions, did nothing to address the larger forces that were shaping the future of consumer surveillance.

The rise of data brokers (1970s – 1990s)

During the 1970s databases proliferated and became integral to everyday life in America. By 1974 the government was operating more than 800 databases containing more than 1 billion records on its citizens. Without a legal framework to limit data sharing in any way, consumer information became a gold-rush commodity that circulated rapidly between a dizzying array of commercial interests. RCC's name change to Equifax in 1976 signalled this shift to very big consumer data. Each new venture of the big three data brokers (including TransUnion and Experian) fed data back to the parent entity, boosting its customer insights. By 1980, 70% of all consumer credit reports were provided by five bureaus. By the end of the 1980s, Equifax earned 10–20% of its revenue from pre-screening programs designed to convince merchants to accept cards and consumers to use them.

The integration of credit bureau data into credit scoring models in the mid-1980s allowed the take-up of generic credit models, because risk models based on this data were generalisable. Prior to this, each bank, retailer or business had to build its own credit scoring models at a cost of $50,000 – $100,000, which then needed to be constantly resampled. In 1989, Fair Isaac introduced a new credit model that would become the industry standard, translating risk rankings into a FICO score between 300 and 900, with higher numbers representing lower risk of default. The adoption of these models in 1995 by the government-sponsored mortgage giants Freddie Mac and Fannie Mae institutionalised the approach.
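As a rough illustration of how a risk ranking can be mapped onto a bounded scale like the 300–900 range described above, here is a common log-odds scaling pattern. The base score, base odds and points-to-double-the-odds values are arbitrary assumptions for the sketch, not Fair Isaac's proprietary parameters.

```python
import math

# Hypothetical scaling parameters: a score of 600 corresponds to repayment
# odds of 30:1, and every 40 points doubles the odds of repayment.
BASE_SCORE = 600
BASE_ODDS = 30
POINTS_TO_DOUBLE_ODDS = 40


def scale_score(prob_default: float, floor: int = 300, ceiling: int = 900) -> int:
    """Map a model's probability of default onto a bounded score."""
    odds = (1 - prob_default) / prob_default          # odds of repayment
    factor = POINTS_TO_DOUBLE_ODDS / math.log(2)
    offset = BASE_SCORE - factor * math.log(BASE_ODDS)
    raw = offset + factor * math.log(odds)
    return int(min(max(raw, floor), ceiling))         # clamp to the published range


for p in (0.40, 0.10, 0.02):
    print(p, scale_score(p))   # lower default risk -> higher score
```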

As credit risk became passé, the new focus became customer profitability. It was no longer enough to clear the credit hurdles; you had to clear the "lifetime value" hurdle as well. By the late 1990s all three major data brokers had their own risk modelling units and there were dozens of generic models that predicted risk and profitability. Delinquency alert models allowed lenders to continuously track the risk and performance of an individual across all their accounts. New scoring models were developed for utility companies, telephone carriers, car dealers, property insurers and healthcare providers.

Credit bureaus finally turned to consumers themselves, selling services that helped them monitor their own credit reputations. TRW (later Experian) was the first, in 1986, with a $35-per-year subscription that offered access to your own credit reports, automatic notifications when your credit records were queried, and a special service for cancelling lost or stolen credit cards. The program was heavily advertised and quickly enrolled 250,000 subscribers in California alone. Critics noted that you could already buy a copy of your credit report for a nominal fee, and that the bureaus were simply shifting their costs to consumers: the service was only needed because of the bureaus' own inability to protect their data. They also used the service to gather yet more information from subscribers, which they could then repackage and sell.

The 1990 release of the Lotus Marketplace CD, which sold for $700 and contained profiles of 120 million adults and 80 million households combining census data, postal service data, consumer surveys and Equifax's own credit files, sparked one of the first mass privacy protests in America. Equifax decided to scuttle the product because of the uproar, but didn't understand what all the fuss was about when all of this information on consumers was already readily available. Credit bureaus had to abandon their marketing-list businesses in the 1990s as they were slapped with multiple government lawsuits, but a key concession allowed them to continue to sell personal information (name, phone, zip code, birth date, social security number etc.). By then this kind of information was even being sold by the banks. The boundaries between credit bureaus and financial institutions had collapsed. In 1999, the CEO of Sun Microsystems, Scott McNealy, summed it up bluntly: "You have zero privacy anyway – get over it".

Closing Thoughts

If the issue of privacy has been such a low priority for so long in the US, it does make you wonder why it took until 2016 for the European Union to create its General Data Protection Regulation (GDPR). The three dominant data brokers have been exploiting the lack of coordination and slow development of global privacy laws for decades.

Australia, after the recent review of our privacy laws, has a great opportunity to make amends for the slow development of our privacy rights. It will also present a huge commercial opportunity for the forward-thinking amongst us if we do so.


Gnosis and the future of web3

On a recent trip to Berlin, I was very excited and extremely appreciative to have the opportunity to meet with the COO of Gnosis, Friederike Ernst. The Gnosis team have led so many movements in web3, so it was great to hear her thoughts on how they see the future of web3.

Aug 31, 2022 • 16 mins

Author: Angus Mackay

Angus: My first question is about Futarchy and prediction markets. Prediction markets are easily explored where there are objective questions to be answered. Where the questions posed are more subjective it’s hard to boil an answer down to a single metric. Does Gnosis believe there are very broad applications for prediction markets?

Friederike: So, at the core, that’s an Oracle problem. How do you ascertain what is true and what is not, and often that’s a difficult question. It’s actually one question that we’ve never tried to solve. We’ve mostly always used trustless Oracle providers where anyone can take a view on how something will turn out, and if other people don’t agree, there’s an escalation game. This process continues and attracts more money so a collective truth can be found. The idea is that you basically don’t need the escalation game, but just the fact that it’s there, keeps people honest.

Angus: Other than the sports and political use cases that you usually associate with prediction markets, what do you see are the opportunities to expand their application?

Friederike: Our core business model has pivoted away from prediction markets. I actually do think there’s a large arena in which we’ll see prediction markets happen and play out in the future. I just think there’s a paradigm shift that needs to happen first.

As prediction markets work today, you use a fiat currency as collateral (Euros, USD, AUD etc.) and you have a question. If the question turns out to be true then the people holding the YES token win the collective stake of those people who hold the NO token. If the question turns out to be false then the people holding the NO token win the collective stake of those people who hold the YES token.

Typically prediction markets run over quite some time. An example would be: will Elon Musk actually buy Twitter? Let's say this plays out until the end of the year. That's the typical period you would give your prediction market to run. That means your collateral is locked up for 6 months. That's really bad for capital efficiency, because you could do other things with that collateral in the meantime.
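As a rough sketch of the mechanics Friederike describes, the toy market below locks fiat collateral on both sides until the question resolves, then pays the losing side's stake to the winners. The class and numbers are illustrative only and do not reflect Omen's actual contract logic; note that every unit of collateral sits idle for the market's entire lifetime, which is exactly the capital-efficiency problem she raises.

```python
# Toy binary prediction market with fiat collateral. Illustrative only.

class BinaryMarket:
    def __init__(self, question: str):
        self.question = question
        self.yes_stakes = {}   # address -> collateral staked on YES
        self.no_stakes = {}    # address -> collateral staked on NO
        self.outcome = None    # set at resolution: True (YES) or False (NO)

    def stake(self, address: str, side: str, amount: float):
        """Lock collateral on one side until the market resolves."""
        book = self.yes_stakes if side == "YES" else self.no_stakes
        book[address] = book.get(address, 0.0) + amount

    def resolve(self, outcome: bool):
        self.outcome = outcome

    def payout(self, address: str) -> float:
        """Winners get their stake back plus a pro-rata share of the losers' collateral."""
        winners = self.yes_stakes if self.outcome else self.no_stakes
        losers = self.no_stakes if self.outcome else self.yes_stakes
        if address not in winners:
            return 0.0
        share = winners[address] / sum(winners.values())
        return winners[address] + share * sum(losers.values())


market = BinaryMarket("Does the Twitter takeover complete by year end?")
market.stake("alice", "YES", 100)   # alice's 100 is locked for the market's lifetime
market.stake("bob", "NO", 300)
market.resolve(True)
print(market.payout("alice"))       # 100 back + all of bob's 300 = 400.0
print(market.payout("bob"))         # 0.0
```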

Angus: Could you use a deposit bond or another type of promise to pay so you’re not tying up your capital?

Friederike: In principle that's possible, it just makes the trustless design a lot harder because you need to make sure that people don't over-commit. In a way this is what happens with insurance. Insurance markets are predictions on whether certain events will happen, like your house burning down. It pays out if the answer is yes and it doesn't pay out if the answer is no. The way that the insurance company gets around the capital efficiency problem here is by pooling a lot of similar risks. Actuaries then work out which risks are orthogonal to each other so their portfolios are not highly correlated.

Again, in principle you can do that but it's very difficult to actually make these in a trustless manner, especially for prediction markets. There are types of insurance that run over a very short amount of time. Flight insurance is a good example. If you're flying out to London next week and you want insurance that pays out if your flight is delayed, the probability of a delay may be 10–20%. You could probably run a prediction market on this risk and it would still be capital efficient, but for many other things this is less clear.

If you look at markets that have taken off in a big way in the past, they have tended to be markets that are not growth limited in the same way. The stock market is a good example, perhaps not right at the moment. If you were to invest in an index fund without knowing anything about stocks, you would still expect it to go up over the course of 5, 10 or 30 years. This is not the case for prediction markets. Prediction markets operating with fiat money as collateral are inherently zero sum. If I win someone else loses and that’s not a hallmark of markets that take off in a big way.

What I think will happen at some point, and you can mark my words, we will see prediction markets that aren’t backed by fiat collateral.

If I come back to my Elon Musk and Twitter example, and you use Twitter stock and not US dollars as collateral, you need to stake Twitter stock until the end of the year if you think the takeover will be successful and you also need to stake Twitter stock if you don't. Either position gives you the same exposure to Twitter stock. Holding Twitter stock still exposes you to the eventual outcome of the takeover. If you have a view of what will eventually occur you can hold onto one token and sell the other one; together that would give you a market value for Twitter. This unbundles the specific idiosyncratic risk of a Musk takeover from all the other risks that may affect the value of Twitter stock at any point in time. It becomes a pure vote on a Musk takeover, allowing you to hedge against this particular event whilst maintaining exposure to all other risks and events affecting Twitter.

This will open up an entirely new market. Obviously for this to happen a lot of things need to happen first. You need tokenised stocks, liquid markets for these tokenised stocks, and market makers. I think this will happen in a big way at some point, that’s not now, but perhaps it’s in 5 years’ time.

So we’ve built a decentralised prediction market platform. It’s out there, you can use it, it exists. It’s called Omen. We’ve now moved on to other things.

Angus: In Gnosis Protocol v2 and the CowSwap DEX, you’re using transaction batching and standardised pricing to maximise peer-to-peer (p2p) order flow, and you’re incentivising ‘solvers’ to flip the MEV problem. Is the vision of Gnosis to keep extending the powers of incentivisation to eventually minimise or eliminate MEV?

Friederike: Yes. I think MEV is an invisible tax on users. Charging fees for providing a service is completely fine but these fees should be transparent. 

Where fees are collected covertly it’s the ecosystem that suffers. It’s not a small fee either. I want to be very explicit that we don’t agree with this and we are doing our best to engineer our way out of it for the ecosystem.
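For readers unfamiliar with the batching idea raised in the question, here is a deliberately simplified sketch: orders collected in one batch are matched peer-to-peer and all settle at a single uniform clearing price, so transaction ordering within the batch confers no advantage. This is not CoW Protocol's actual solver logic, and order amounts are assumed equal for brevity.

```python
# Simplified batch auction: match opposing orders at one uniform clearing
# price instead of executing them sequentially.

from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    side: str        # "BUY" or "SELL" of the base token, priced in the quote token
    amount: float    # base token amount (assumed equal across orders here)
    limit: float     # worst acceptable price (quote per base)

def settle_batch(orders: list[Order]) -> tuple[float, list[tuple[Order, Order]]]:
    buys = sorted((o for o in orders if o.side == "BUY"), key=lambda o: -o.limit)
    sells = sorted((o for o in orders if o.side == "SELL"), key=lambda o: o.limit)
    matched, buy_i, sell_i = [], 0, 0
    # Walk both books while the best buy still crosses the best sell.
    while buy_i < len(buys) and sell_i < len(sells) and buys[buy_i].limit >= sells[sell_i].limit:
        matched.append((buys[buy_i], sells[sell_i]))
        buy_i += 1
        sell_i += 1
    if not matched:
        return 0.0, []
    # Every matched order settles at the same clearing price (midpoint of the
    # last crossing pair), so execution order inside the batch is irrelevant.
    last_buy, last_sell = matched[-1]
    clearing_price = (last_buy.limit + last_sell.limit) / 2
    return clearing_price, matched

orders = [
    Order("alice", "BUY", 1.0, limit=1850.0),
    Order("bob", "SELL", 1.0, limit=1800.0),
    Order("carol", "BUY", 1.0, limit=1790.0),   # does not cross, left unmatched
]
price, fills = settle_batch(orders)
print(price, [(b.trader, s.trader) for b, s in fills])   # 1825.0 [('alice', 'bob')]
```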

Angus: I can't believe it still happens in just about every traditional financial market.

Friederike: Yes, but on blockchains the extent to which it's happening is much larger. There have been analyses of this and it amounts to just under 1% of all value transacted, and that's an enormous covert fee. I know there's an entire camp of people that defend this as necessary to keep the network stable. It's very good framing but I don't think it's the truth of the matter.

Angus: With regulation of DeFi coming as early as 2024 in Europe (MiCA), how can the DeFi industry ensure that any legislation is soundly based and doesn't restrict or destroy future opportunities?

Friederike: I think I would disagree with the premise of the question. The way you posed the question suggests that there’s going to be a large impact on DeFi and I don’t necessarily think that’s going to be the case. I think regulation, and this goes for any sort of law, is only as good as a state’s ability to enforce it. If society changes too fast and laws are seen as outdated, that’s not a great outcome for society. The powers of decentralisation are strong and they’re here to stay.

Angus: What have been some of the key lessons you’ve learned in relation to governance with GnosisDAO?

Friederike: That’s a really good question. When it comes to governance you can see we’re at the very beginning, these are the very early days. We as humanity have been experimenting with governance for 200,000 years or more. Ever since people have banded together in groups you had some form of rules and governance in play. 

To assume web3 is going to be able to reinvent governance in the space of a couple of years is a very steep assumption. We’re seeing the same problems as everyone else. 

Low voter turnout, a reluctance to engage in the governance process and so on, despite the fact that we're one of the DAOs with the highest voter turnout. There are multiple initiatives being tested to encourage higher voter participation, like forcing people to delegate their vote, nomination schemes that involve liquid democracy, and others. We're thinking about this very actively.

We have a project within Gnosis called Zodiac that builds DAO tooling. Lots of DAOs within the Ethereum ecosystem build on top of the Gnosis Safe as a multisig, and Zodiac builds modules that you can install for these safes. These are features that traditional software companies would call granular permissions management. An example would be giving individual keys the ability to do some things without needing approval from an entire quorum of the multisig or without having a snapshot vote. It gives you the ability to customise what a particular key can be used for and under what conditions it can be revoked.

One of the things we’re using custom keys for is to delegate active treasury management, this includes short-term yield farming and strategic yield farming, to another DAO (Karpatkey). 

They don’t have custody of our funds. They have a whitelist of actions they can execute on the GnosisDAO wallet. They can’t withdraw our funds for instance and they can’t rebalance between different pools and so on. You can tell from how coarse this tooling is that it needs to be improved significantly.
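A minimal sketch of the whitelisting pattern described above: a delegate key may trigger only actions the DAO has approved, and anything else is refused. This is conceptual Python rather than Zodiac's actual Solidity modules, and the action names and balances are invented.

```python
# Conceptual sketch of whitelisted delegation (not Zodiac's real module code).

class WhitelistedDelegate:
    def __init__(self, treasury: dict, allowed_actions: set):
        self.treasury = treasury              # e.g. {"DAI": 1_000_000}
        self.allowed_actions = allowed_actions

    def execute(self, action: str, **params):
        """Run a treasury action only if the DAO has whitelisted it."""
        if action not in self.allowed_actions:
            raise PermissionError(f"action '{action}' is not whitelisted for this key")
        return getattr(self, f"_{action}")(**params)

    def _deposit_to_pool(self, token: str, amount: float, pool: str):
        self.treasury[token] -= amount
        return f"deposited {amount} {token} into {pool}"

    def _withdraw_to_external(self, token: str, amount: float, to: str):
        # Defined here only to show what a non-whitelisted action looks like.
        self.treasury[token] -= amount
        return f"sent {amount} {token} to {to}"


delegate = WhitelistedDelegate(
    treasury={"DAI": 1_000_000},
    allowed_actions={"deposit_to_pool"},      # no external withdrawals allowed
)
print(delegate.execute("deposit_to_pool", token="DAI", amount=100_000, pool="yield-pool"))

try:
    delegate.execute("withdraw_to_external", token="DAI", amount=100_000, to="0xattacker")
except PermissionError as err:
    print(err)   # action 'withdraw_to_external' is not whitelisted for this key
```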

Most democratic societies have a representative democracy, like Germany, Australia, the US and others. There are a few societies that have a direct democracy, like Switzerland. Voting is not compulsory in Switzerland and they also suffer from low voter turnout. You basically vote on everything, and in some cantons you vote on whether citizenship should be granted for every single person that applies.

Angus: Wow, that’s granular!

Friederike: It’s very granular. Another example is the decision to build a playground two streets over for the kids at a school. When voting becomes this granular, not surprisingly you end up with low voter turnout. There’s a sweet spot in between though because most people don’t actually align with one party on every issue.

Societies are complex and there’s a variety of issues where you could delegate your vote to an individual who you feel aligns most closely with your view on that issue. For example, I feel very strongly about civil liberties, but at the same time I’m not a libertarian in the sense that people that can’t provide for themselves shouldn’t be cared for. There should be a social security net but that’s no reason to take civil liberties away from people. I would like to be able to delegate my vote to another person to decide on the science policy we have or the foreign policy that we have. And I think this is where we need to get to with governance in web3.

I think in terms of society at large this will take some time, but web3’s main advantage is that we can run many experiments in parallel and iterate way faster than traditional societies, and you don’t have to have just one system you can support.

Angus: That’s a great call out. And you can have multiple systems that allow different tracks for different endeavours. Does GnosisDAO already have different ways of voting for different initiatives?

Friederike: Not yet. We’re talking about this right now. 

One type of voting we're looking at is assent-based voting, where there's a number of people trusted by the DAO who can propose an action that will be automatically approved unless there's a veto of the proposal.

Often there are day-to-day proposals that are important for the DAO, like ensuring contributors are paid, that almost always go through. A significant volume of these small proposals, though, contributes to voter apathy. It becomes a part-time job just to keep up with governance for a single DAO.
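A rough sketch of the assent-based pattern: a trusted proposer queues an action and it executes automatically once a challenge window passes without a veto. The roles, window length and proposal are illustrative assumptions, not GnosisDAO's actual parameters.

```python
# Toy assent-based ("optimistic") governance: proposals pass unless vetoed
# within a challenge window. Parameters are illustrative only.

import time

CHALLENGE_WINDOW_SECONDS = 3 * 24 * 3600   # e.g. three days to object

class OptimisticGovernor:
    def __init__(self, trusted_proposers: set):
        self.trusted_proposers = trusted_proposers
        self.queue = {}   # proposal id -> {"description", "queued_at", "vetoed"}

    def propose(self, proposer: str, pid: str, description: str):
        if proposer not in self.trusted_proposers:
            raise PermissionError("only trusted proposers may queue actions")
        self.queue[pid] = {"description": description, "queued_at": time.time(), "vetoed": False}

    def veto(self, pid: str):
        """Any token holder (checks omitted) can block a queued proposal."""
        self.queue[pid]["vetoed"] = True

    def execute(self, pid: str, now=None) -> str:
        p = self.queue[pid]
        now = time.time() if now is None else now
        if p["vetoed"]:
            return "rejected: vetoed during the challenge window"
        if now - p["queued_at"] < CHALLENGE_WINDOW_SECONDS:
            return "pending: challenge window still open"
        return f"executed: {p['description']}"


gov = OptimisticGovernor(trusted_proposers={"ops-team"})
gov.propose("ops-team", "pay-42", "Pay monthly contributor invoices")
print(gov.execute("pay-42"))                                      # pending
print(gov.execute("pay-42", now=time.time() + 4 * 24 * 3600))     # executed
```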

Angus: So, you have low thresholds for proposals that are repetitive and not contentious and high thresholds for proposals that are significant and could be contentious?

Friederike: Yes. These kinds of changes to DAO governance are obviously very simple but they can be made arbitrarily complex because all these voting patterns need to be hard coded. It has to be decided upfront what’s contentious and what’s not. How will each of the voting patterns be changeable? Which ones will require a super majority vote to change and so on.

Angus: Is Gnosis planning for a multi-ecosystem future with cross chain composability or does your focus just extend to the broader Ethereum ecosystem?

Friederike: We're firm believers in the future of the EVM and EVM chains. Interoperability is a lot easier between EVM-based chains and they have such a large share of developer mindshare. The Cosmos and Polkadot ecosystems, for example, obviously have smart developers but there's nowhere near the same depth of tooling. I saw a graph recently in The Block about how much is spent across ecosystems in total and how much is spent per developer. For the EVM the spend per developer was the lowest because there are already such a large number of developers building on EVM chains.

Angus: A large portion of the Polkadot ecosystem is building on EVM as well. They also have the option of offering WASM at the same time. Doesn’t that make them competitors?

Friederike: No, Substrate is different. It's not a bad system. It's well designed and in some respects it's better than the Ethereum system. But it's difficult to build on and it requires a steep learning curve. Developers transitioning across have a hard time and we think that the EVM is sticky. That's kind of our core hypothesis within Gnosis and the Gnosis ecosystem.

Late last year Gnosis merged with xDAI which is currently a proof of authority chain, very close to Ethereum. It’s been around since 2018. It’s now known as Gnosis Chain. We also have another chain called Gnosis Beacon Chain which is a proof of stake chain, like Ethereum will become after the merge, and GNO is the staking token. Our value proposition centres around EVM and being truly decentralised. Gnosis Beacon Chain is the second most decentralised chain after Ethereum.

Ethereum has around 400,000 validators and Gnosis Beacon Chain has 100,000 validators. I believe the next closest ecosystem after that is Cardano with about 3,000 validators. It’s a large jump, and if you think about security and security assumptions, decentralisation is important because otherwise you run the risk of collusion attacks.

The idea behind Gnosis Beacon Chain is that it's maximally close to Ethereum, so that you can just port things over. If you look at how transaction fees have changed over the last couple of years, it's crazy. Almost everything that ever lived on Ethereum has been priced out.

Angus: That’s why I started looking at alternative chains originally because setting up a Gnosis Safe was going to cost me $500 in gas fees.

Friederike: Exactly. Everything that’s not an incredibly high value, and I’m not just talking about non-financial interactions, I’m talking about lower value transactions of less than $50,000. Gas fees may be only 50 basis points on this amount but it’s still something you’d rather not spend. Gnosis Beacon Chain is a home for all these projects that have been priced out of Ethereum.

I’m not a huge fan of DeFi to be honest. I know that we’ve built things that would be classified as DeFi. I’m a huge proponent of open finance. 

Opening avenues to people who previously didn't have access, lowering barriers to entry and so on. I'm all for that. But the speculative nature of DeFi, where it's money legos on money legos on money legos and then everything topples, produces a couple of percentage points of profit for the same 500 people in the world. That's not what I get up for in the morning and it's not what motivates me. Opening up applications like money markets to the wider population of the world is a good goal, but this is not where DeFi is currently headed.

That's why I like to differentiate between open finance and DeFi, because to me the motivation is different. I think you need those DeFi primitives in any ecosystem and they exist in the Gnosis ecosystem. But the projects the Gnosis Beacon Chain currently centres around don't need them to be sustainable. The projects where there's 5% more yield to be gained from yield farming happen elsewhere. I would also argue that yield farming is not very sustainable because the capital it attracts is inherently mercenary. I don't think it's a good use of money. We intend to win on culture. 1,500 DAOs live on top of Gnosis Beacon Chain, as do all the major Unconditional Basic Income (UBI) projects and payment networks. This is where we're moving at Gnosis. In principle it's a general-purpose direction, but we're absolutely not headed in the Fantom direction. We're very much prioritising a social direction of grounded use cases.

Angus: One of the messages I heard repeatedly at Polkadot’s conference last week in Berlin was the need for the collective effort to shift to solving real world problems. What do you see as the key challenges to making this shift?

Friederike: There’s a lot of crypto founders out there that believe in society and making the world a better place. I do think this has been watered down a bit over the last bull market, which is why I’m looking forward to the bear market because it clears out some of those projects that are only chasing the money.

Angus: A lot of projects talk about banking the unbanked, but I don't think we're any closer to it. We may still be 5 or 10 years away from achieving that.

Friederike: I agree and that's where Gnosis Beacon Chain comes in. We also want to bank the unbanked and one of the core tenets of this is UBI.

Angus: Do you have a UBI in Germany or in Europe?

Friederike: We don't. We have lots of initiatives that kind of push for it. We have a social security net and a minimum wage but it's not unconditional. You have to prove that you go to job interviews and so on. You also stop getting it when you get a job. The idea behind a UBI is that we don't have a resource problem, we have a distribution problem. If you look at how much humanity produces, in principle, it's enough for everyone. Somehow about 30% of what's produced goes to waste for no good reason, which leaves you with a distribution problem.

I think that in a world that will change substantially over the next 20 to 30 years, a lot of existing jobs are going to be made redundant. I think this is necessary and it's wonderful that labour-intensive, repetitive jobs are being made redundant. It frees up the people that used to do these jobs to do more meaningful things. A UBI is the necessary precondition for this to happen.

Angus: What real world problems would you regard as the lowest hanging fruit for web3 entrepreneurs to consider?

Friederike: Payments. It's funny because we have said that for 10 years. We've always said that you'll pay for your coffee with Bitcoin but no one actually does this because it's too expensive. If you look at the current costs of transaction processing for everyday payments, like buying lunch for your seven-year-old, you're charged 2–3% in card fees, even if it's a debit card. This is the lowest hanging fruit that scales up in a meaningful way. We already see this with remittances. A significant amount of remittances and cross-border payments are actually done in stablecoins.

Angus: It’s a great onboarding process for people as well to web3. With your COO hat on, where do you find a need for new web3 software to fill gaps or completely replace web2 tools?

Friederike: This is a good question. The standard answer would be everything where you transact value. This is an area where the cost of transactions have in theory been lowered by web3. Setting aside the skyrocketing transaction fees because this is a technical problem that can and will be solved. The next question would be, If you look at web 2.0, you can buy things on the internet, but it always hinges on the fact there’s an intermediary involved that you pay with your money or your data. Not having to do that is one of the core goals of web3.

Angus: Much of DeFi as it stands is not very decentralised. Some of this may be explained by the particular evolution of a protocol. In other cases it could be explained by the difficulty of decentralising everything. Do you think this points to a future where hybrid models play a larger role for longer in the development of web3?

Friederike: I think hybrid models are hard because of regulation. I agree that many projects that say they're completely decentralised are in reality two 20-something guys with a multisig. It's still easier to be completely decentralised because otherwise you fall under the purview of one regulator or another. The only way you avoid that is by building completely decentralised systems. It gets easier though.

A couple of years ago, the idea that a DAO could own a domain name and have votes on things that are automatically executed on chain would have been preposterous. 

There’s no way a DAO would’ve been able to host something with an ENS and IPFS content. It’s all new. So much has happened and doing things in a truly decentralised way has become easier over the last couple of years from an engineering standpoint.

Angus: What’s the easiest way to onboard people to web3 and how do you think it’s most likely to happen?

Friederike: I don’t think it will be driven by consumers. It will be driven by businesses offering alternative payment systems that reduce payment fees by 2%. Consumers will simply vote with their feet. People don’t know how it works now on web 2.0 and I think it’s going to be the same for blockchain.

Angus: What are some of the pathways that you see for decentralised offerings to start to provide the infrastructure for centralised businesses?

Friederike: There will be decentralised payment rails and transaction settlement rails, and everyone will use them.

What people don’t realise is that when something is truly decentralised and belongs to no one, in principle it belongs to everyone. This is the magic of decentralisation.

This allows people who are in principle competitors to coordinate on a much more efficient layer, because there is this impartial layer that belongs to everyone. I think this will be the direction things head in.

Angus: Other than Gnosis, who do you regard as the leading innovators in DeFi right now?

Friederike: In terms of governance, I think Maker. They’ve been decentralised for such a long time, much longer than anyone else. I always have one eye on Maker governance. I think there’s tons of innovation happening in the DAO space, too much to keep up with.

We have good relationships with the DAOs on the Gnosis Beacon Chain, but we don’t even know all of them when there are more than 1,500 individual DAOs. That’s kind of the beauty of it as well. Everyone does their own thing.

< Articles

The race to create a DEX that enables cross chain swaps

On a recent trip to Berlin for Polkadot Decoded, the Polkadot ecosystem’s 2022 conference, I made sure I sat down with the CTO of Chainflip, Tom Nash. I’m very appreciative that he chose to give up his time to share his thoughts with Auxilio.

Aug 8, 2022 • 14 mins

Author: Angus Mackay

To introduce Chainflip before we dive into the detail, they’re a decentralised, trustless protocol that enables cross chain swaps between different blockchains and ecosystems. To rephrase that in terms of the customer value proposition, they will potentially allow a user to instantly source the best deal, and the best customer experience, to swap crypto assets across multiple ecosystems (Ethereum, Polkadot, Cosmos etc.). It may not sound like it, but it’s a seriously ambitious undertaking that requires solving some hard technical problems.

Angus: What makes Chainflip a standout project are the choices you’ve made to integrate technologies across the Ethereum and Polkadot ecosystems, and to not become a Parachain. Can you give us the benefit of your thinking?

Tom: We’ve amalgamated a bunch of best-in-class technologies to limit what we need to build, so we can produce something that fulfills the specific use case we see the opportunity to create.

There are a lot of choices that are really easy to make and you’re encouraged to make them: build on Substrate, become a Parachain, enjoy shared security, don’t worry about building a validator community, don’t worry about how to incentivise people in an effective way. Many of these choices didn’t really make sense for us.

We don’t benefit from the shared security of Polkadot because we still require validators on the Chainflip network to be staked. You can’t run an exchange with $10bn worth of liquidity if you’ve only got $10m worth of collateral, because you immediately provide an incentive for people to buy up all of the collateral to take control of the funds. So Chainflip requires validators to be staked and collateralised.

Whilst you can do that with Polkadot and your own Parachain, Collators and Cumulus, it certainly doesn’t make things any simpler for us. In fact it adds a lot of complexity.

Angus: That’s interesting. As a non-technical person I can’t see why. Can you try to explain it to us?

Tom: Sure. There’s usually one aspect to blockchain security, and it’s effectively the security of the state transition process. A blockchain is a big database. You want to make sure that all of the writes to that database are authenticated correctly and that they follow certain rules. The security of the state transition function is usually provided by the collective stake of the network. In Ethereum’s case this is a bunch of GPUs mining away. The same goes for Bitcoin.

In Polkadot’s case it’s a bunch of people sitting on loads of DOT who don’t want their DOT to devalue. Chainflip also has that task. We need security of our state transition function, but we also require the security of the collateral. So our validators and Chainflip collectively own threshold signature wallets, and we require that these validators have no incentive to collude for the purposes of stealing those funds.

Now the shared security of Polkadot is not tied to the security of those funds. If Chainflip were to leverage the shared security of Polkadot, we’d be delegating the stake to all of those people who hold DOT, and the people that hold DOT are not necessarily the same people who are being incentivised to provide liquidity on the exchange. If we delegated all of our security to DOT validators, our own validators would be a different set of people, like collators in the Polkadot ecosystem, and the collators themselves would have no stake. The collators would just be rolling up the blocks and posting them to Polkadot.

You can force collators to stake, but then we’re going back to square one. Why would we use collators and a Parachain if we get no real benefit from XCM, which we don’t? We’re building something at a different layer of abstraction, and if we want to support the long tail of XCM assets, we can just build a front end integration. But the long tail of asset support for Polkadot is not a path that Chainflip wants to go down. You fragment liquidity on the exchange and you force more collateral to be deposited in order to support that liquidity. It doesn’t really make sense for us.

“The golden goose for Chainflip are the chains where there is no decentralised liquidity solution at the moment. Chains like Bitcoin, Monero, Zcash, some of the bigger ones that have been left behind by the whole DeFi movement.”

Angus: If I rephrase that in simple terms, a validator in the Chainflip network has two jobs to do – securing the network and securing the liquidity of the network.

Tom: That’s right. Anyone can provide liquidity to the exchange when they trade, but the validators that run the network are the ones providing security for that liquidity in two ways: each is one of 150 validators with custodial access to the liquidity, and together they secure the state transitions for the AMM.

As I mentioned earlier, if you have $10bn of liquidity, you need some function of that amount as collateral in order to ensure you’re not a honeypot or a target for an economic attack. So if Chainflip were to support all of the Uniswap tokens, for example, as first party tokens, and people could send those tokens to Chainflip, then you balloon the amount of liquidity that you’re able to accept and you balloon the amount of liquidity the exchange and the validators are collectively responsible for.

If you do that, you also have to balloon the value of our FLIP token, otherwise the validators are holding onto much less FLIP than the liquidity they are proportionately securing. If they have $1 of FLIP for every $10 of liquidity they’re securing, things start to look a little out of balance.
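To make that economic argument concrete, here is a minimal sketch in Python. It is my own illustration, not Chainflip code: the function name and the ratio parameter are assumptions, and only the dollar figures come from the interview. The point is simply that the staked collateral securing an exchange has to scale with the liquidity the validators custody, otherwise the funds become a honeypot.

```python
# Illustrative only: checks whether a validator set's staked collateral is
# large enough, relative to the liquidity it custodies, to make collusion
# economically irrational. The required ratio is a parameter here, not a
# real Chainflip number.

def is_economically_secure(liquidity_usd: float,
                           staked_collateral_usd: float,
                           min_collateral_ratio: float) -> bool:
    """True if staked collateral >= min_collateral_ratio * custodied liquidity."""
    return staked_collateral_usd >= min_collateral_ratio * liquidity_usd

# The example from the interview: $10bn of liquidity behind only $10m of stake
# fails even a very lax 1:100 requirement.
print(is_economically_secure(10_000_000_000, 10_000_000, 0.01))       # False
# $1 of FLIP per $10 of liquidity only just clears a 1:10 requirement,
# which Tom already describes as looking out of balance.
print(is_economically_secure(10_000_000_000, 1_000_000_000, 0.10))    # True
```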

Angus: I read in your Docs that the limit of the FROST signature scheme you’re using as a custody solution is 150 signatories. Do you expect this to increase as you grow?

Tom: The long term goal would be to scale that number and it’s certainly not a theoretical maximum. It’s a threshold that’s been chosen because it will provide the performance the exchange needs. If you go lower you decrease the level of security, if you go higher you decrease the amount of throughput. So it’s roughly in the Goldilocks zone.

Angus: You’ve chosen not to use the Polkadot messaging protocol (XCM) to facilitate swaps on the Polkadot network or the IBC messaging protocol on the Cosmos chain. Can you tell us what you’re doing instead?

Tom: It doesn’t really feel like it makes sense for us to muck around with the XCMs and IBCs of the world at the moment. I’d love those ecosystems to mature and I’d love for it to be really easy for us to plug into them. You’ve only just started to see XCM channels opened up between Parachains on Polkadot, so it’s very early days.

We’ve been building Chainflip for a year and a bit now, and it has just come too late. XCM wasn’t available when we started and there were no plans for it either, that we were aware of. If we’d started around now, or in six months’ time when it’s very clear how you could leverage XCM and how to leverage Cosmos’ messaging protocol (IBC), and there’s a bunch of chains and projects supporting them, maybe then it would make sense. Other projects will have to deal with this whole notion of cross chain communication between something like IBC and XCM.

We’re kind of fundamentally solving a different problem. Chainflip aims to solve the problem of swapping Bitcoin for Ether trustlessly. Projects that use XCM solve the problem of swapping Bitcoin for Polkadot trustlessly. So we’re not really competing. Our competition is the centralised spot markets for these assets, like Binance, Coinbase (Pro), Kraken and other centralised exchanges.

Angus: Ultimately you’re targeting a better user experience than centralised exchanges. Do you expect many to leverage XCM, IBC and other messaging protocols to compete with you?

Tom: Maybe there’ll be a bunch of exchanges that leverage XCM or IBC to do cross chain swaps. It will be very interesting to see if that happens. I suspect that the architecture of a DEX on top of XCM is extremely complicated. You’re going to need lots of oracles. You need lots of people to tell you the price. I’m skeptical and I haven’t seen it yet.

LayerZero is a really interesting project and they’re most likely to hit our orbit first. They recently released their cross chain messaging tech and it’s cool. It has its faults and its flaws. Again, it doesn’t solve the problem of swapping assets from chain to chain, but I think that’s the likely direction they’re headed and I’d be surprised if it’s not. One of the problems they don’t solve is custody. They claim their technology can be deployed across all these different types of crypto base layers, like Bitcoin for example. I’m certainly really interested to see what they produce next.

Angus: Your vault rotation and creation of new authority sets sounds like a computation heavy process. Is it happening a lot and how do you reduce the need for the process?

Tom: Yes, it’s quite inefficient. To produce a new aggregate public key that these 150 nodes have collectively derived and agreed upon takes about 90 seconds. It depends on the cryptography that’s being used under the hood, but it’s certainly not cheap, which is why we don’t want it happening all of the time. The process is initiated when one of two things happens: either a certain amount of time elapses or a certain number of nodes go offline.

So every 28 days, which is probably the right amount of time, a new set of validators is chosen as auction winners. They generate a key, we perform the handover from the old key to the new key, and we have to do that across every single chain that we’re integrated with. Once that process is complete, control has been completely handed over to these new validators for another 28 days.

In the alternative scenario, where Chainflip notices that 20% of validators have dropped offline, we want to avoid a situation where we don’t have at least 100 nodes (or 66% of the 150 nodes) to reach the threshold to sign any transactions, potentially rendering all funds trapped forever. So we kick-off another auction immediately to replace the offline nodes with new validators, ensuring we have a healthy set of nodes again.
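As a rough sketch of the rotation trigger Tom describes, here is a small Python example. The constants mirror the numbers in the interview (28 days, 150 validators, a 100-node signing threshold, a 20% offline trigger), but the structure, names and exact decision rule are my own assumptions rather than Chainflip’s implementation.

```python
from dataclasses import dataclass

ROTATION_PERIOD_DAYS = 28        # scheduled rotation interval from the interview
AUTHORITY_SET_SIZE = 150         # validators holding the threshold key
SIGNING_THRESHOLD = 100          # roughly 66% of 150 must be online to sign
OFFLINE_TRIGGER_FRACTION = 0.20  # rotate early if 20% of nodes drop offline

@dataclass
class VaultState:
    days_since_rotation: float
    online_validators: int

def should_rotate(state: VaultState) -> bool:
    """Kick off a new auction and key handover either on schedule, or early,
    before the online set shrinks towards the 100-of-150 signing threshold
    and funds risk becoming unspendable."""
    scheduled = state.days_since_rotation >= ROTATION_PERIOD_DAYS
    offline = AUTHORITY_SET_SIZE - state.online_validators
    too_many_offline = offline >= OFFLINE_TRIGGER_FRACTION * AUTHORITY_SET_SIZE
    below_threshold = state.online_validators < SIGNING_THRESHOLD
    return scheduled or too_many_offline or below_threshold

# Healthy set, mid-cycle: no rotation needed.
print(should_rotate(VaultState(days_since_rotation=10, online_validators=148)))  # False
# 30 of 150 nodes offline: rotate early before signing becomes impossible.
print(should_rotate(VaultState(days_since_rotation=10, online_validators=120)))  # True
```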

Angus: You’ve said that there are a lot of opportunities for AMMs that are running on a custom execution environment. I was wondering if you could explain that to us?

Tom: So Ethereum and other EVM-like chains are Turing-complete by design. They are arbitrary computation platforms. As a result they’re not really efficient. They’re generalised computing machines, and you can’t really push them to their limits. It’s like the difference in the embedded systems world between a general-purpose microcontroller and a purpose-built integrated circuit.

A custom execution environment allows you to do a lot more to make the process efficient. Uniswap, for example, isn’t able to write any code that says, let’s tally up all the swaps in a particular block. It’s not able to because of the way transactions are executed on the AMM; it doesn’t control the underlying execution there.

Chainflip can do that. We have our own validator network. We have our own Mempool. We have our own way of sequencing blocks and we can say we’re going to collect transactions for 10 minutes and then we’re going to match them against each other, and we can execute whatever delta there is on the exchange. So we have a lot more flexibility in that context than any EVM based exchange does.

That’s one of the reasons you’ve seen dYdX very recently come out and say they’re going to build their own blockchain. Everyone’s saying that rollups are the golden goose that’s going to fix everything in the Ethereum network, but they’ve realised that if they were to move to a rollup they still wouldn’t have much control over the underlying execution layer.

You still have to execute everything sequentially. You can do some funky stuff but ultimately you’re still at the whim of the EVM. And also they probably realise that even if they were to move to a rollup they still have to ask users to bridge their funds across to that rollup. And what’s the difference between bridging your funds to a rollup and bridging your funds to Cosmos – not much.

So why not give your users a very similar user experience and also have control over how trades are executed and sequenced? It just makes total sense.

Angus: Is your JIT (Just in time) AMM using batching like Gnosis does to batch transactions to reduce MEV and standardise pricing?

Tom: Yes. We don’t execute everything sequentially as it comes in. We actually did this because when you’re working in a cross chain or cross ecosystem environment, some chains are slower than others. Bitcoin blocks appear every 10 minutes. Ethereum blocks appear every 15 seconds. If you have a pair between an Ethereum based asset, say USDC, and Bitcoin, and you execute everything sequentially, you’re potentially receiving new USDC deposits every 15 seconds. If you receive a Bitcoin purchase every 15 seconds but a sale every 10 minutes, you have a chart that looks very wonky. It looks like a saw wave: it goes up and to the right, then it drops vertically, and this cycle repeats with a 10 minute frequency. That’s not particularly good for users, and it creates lots of weird incentives. For example, you want to be the first person to get your Bitcoin purchase in after the last Bitcoin block. So what we do is collect all of those swaps, amalgamate them, and then execute them all at the same clearing price.
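Here is a minimal sketch of that batching idea in Python. It is my own simplification, not the actual JIT AMM code: the function and field names are invented, and the pricing rule is a naive placeholder rather than Chainflip’s AMM maths. The point it illustrates is that orders collected over a window are netted against each other and settled at a single clearing price, so being first in after a Bitcoin block confers no advantage.

```python
# Illustrative batch settlement: every order in the window gets the same
# clearing price, and only the net imbalance has to trade against the pool.

def settle_batch(buys_btc: list[float], sells_btc: list[float],
                 pool_price_usdc_per_btc: float) -> dict:
    total_buys = sum(buys_btc)    # BTC being bought with USDC this window
    total_sells = sum(sells_btc)  # BTC being sold for USDC this window
    net_delta = total_buys - total_sells  # only this amount moves the pool
    # Placeholder pricing: a real AMM would derive the clearing price from
    # the net delta against the pool's curve, not just echo the pool price.
    clearing_price = pool_price_usdc_per_btc
    return {
        "matched_internally_btc": min(total_buys, total_sells),
        "net_delta_btc": net_delta,
        "clearing_price_usdc_per_btc": clearing_price,
    }

# A window of USDC->BTC buys netted against one BTC sale, all at one price.
print(settle_batch(buys_btc=[0.5, 0.2, 0.3], sells_btc=[0.8],
                   pool_price_usdc_per_btc=23_000.0))
```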

Angus: That obviously eliminates the sandwiching that can occur between blocks as well?

Tom: Yes. You also limit opportunities for people that are witnessing upcoming transactions in the Mempool to benefit from trading at the right time and other volume based incentives.

Angus: Chainflip has said that there have been some examples of liquidity providers being incentivised on Uniswap v3 for their large orders. Is this concept largely untested outside of these use cases that you’re aware of?

Tom: It’s a good question. I think so. CowSwap does something very similar to what we want to do. It seems to be working pretty well, but not many people use it as a front end. CowSwap doesn’t work the same way as the JIT AMM, but it collects a bunch of orders and batches them up so everyone is given the same clearing price over a few blocks. In the context of just in time liquidity, I don’t believe we’ve seen it anywhere other than Uniswap v3. And that’s probably because v3 has a business source licence, so no one’s been able to copy it on EVM chains. And no one’s yet had the time to build and release an equivalent on a non-EVM execution environment.

Angus: Are there any scale challenges involved initially with providing a minimum level of liquidity for each pair?

Tom: The plan, loosely, is for funds from the LBP (liquidity bootstrapping pool) to be used to provide liquidity to begin with. Obviously there needs to be liquidity on the exchange to make it useful. As builders of the exchange we will certainly be helping as many people as we can to become proficient liquidity providers. We have a bunch of people lined up to provide liquidity on the exchange. We’ll be helping them to make that profitable. We’ll be trying to drive as much volume as we can through swaps and so on. It remains to be seen exactly what the challenges will be, but given the nature of the exchange, I don’t think there will be too many, because of the amount of capital efficiency that we can provide.

I think the bigger challenge will be attracting the right volumes. I don’t think liquidity will be a problem to begin with. The challenge will be making Chainflip feel like a good home for that liquidity and for validators by growing our volume.

Angus: With the EU intending to introduce MiCA regulations as early as 2024, how do you anticipate this will impact the value proposition of DeFi?

Tom: If it affects the value proposition of DeFi, then that product is not DeFi. Maker, arguably one of the most successful products to exist in the DeFi ecosystem, is not going to fall victim to this problem. Anyone can build a front end for Chainflip and anyone can build an integration with it. User funds are not held custodially by any legal entity. They don’t have to trust the bridge for any longer than their funds are passing through it. And it would be very, very hard to regulate the product.

Angus: What about if you have retail customers using the product, or if you’re domiciled outside of the regulatory net?

Tom: If Chainflip Labs, the company, is running a method of interfacing with the exchange, I’m sure there’s probably a bunch of arguments that you can make that Chainflip is providing financial services. If you’re that way inclined you can probably lobby for Chainflip to be caught in the regulatory net. Chainflip doesn’t have to run that front end. Chainflip can ask people to build it. It’s then up to them if they host it in Singapore or Dubai or another country that’s more crypto friendly.

Ultimately Chainflip Labs could end up interfacing with regulators, but the product itself has been released as an open source piece of software and it can’t be stopped by regulation.

Angus: You said at the end of your roadmap that this is just the beginning. What are some of the ways you envisage expanding beyond the use cases of cross chain swaps?

Tom: That’s a good question. At the moment, I’m fascinated by the tech that we’re building. The threshold signature scheme that we built and the multi-party computation protocol, I’m sure, have use cases outside of cryptocurrency. I’m more interested in that than I am in figuring out how Chainflip could be used for NFTs.

Angus: Do you see it having applications for B2B relationships?

Tom: Business to business relationships are incredibly inefficient, or at least it seems that way. I think anywhere there’s a shared desire to produce common agreements between a set of untrusted parties is a potential use case for our technology.

It’s extremely efficient, pretty robust, lightweight, all things considered. Tackling the shared custody problem is easily one of the most interesting things about the problem we’re solving.

Angus: What problems do you see crying out to be solved?

Tom: Privacy. Privacy of the underlying history of the blockchain, the underlying transactions. The average web3 user has multiple wallets, and over a period of 10 years there are lots of transactions that have potentially been made with these wallets. If you’re still using these wallets, all the transactions from 10 years ago are still there. That might be desirable, but I think it probably isn’t for most people. If you could go and wipe your transaction history, or conceal it moving forward, that would be great. If you buy a questionable NFT, you might not want your grandkids to know. The right to be forgotten is really interesting.

Technology is moving in a direction where you don’t have that right. It wasn’t codified from the start. We don’t have a bill of digital rights, and so there’s a lot of information out there about each individual person that they probably don’t even know about. It will become more of a talking point for my kids and the next generation. I see that as a big greenfield opportunity and a big selling point for future technology companies.

You’re seeing it a little bit now with people wanting to shift towards greater privacy but it’s slow. Signal’s had its time in the sun over the past year. Email providers have as well, like Proton Mail, but even they’ve ended up helping law enforcement recently. What it comes down to though is it’s really difficult to solve this problem in the first place. So I think that zero knowledge technology is going to play a huge role in that. I hope the tech industry over the next 10 years tends towards incorporating more of this technology into its products and services.
