The Reality Coefficient - How To Know Oneself in an Age of AI

 
W. Scott Stornetta is Chairman of SureMark Digital. Widely considered a co-inventor of blockchain technology, he is a partner at Yugen Partners, a blockchain and AI-focused venture capital firm, and a fellow at the Creative Destruction Lab.

John D’Agostino is Head of Strategy at Coinbase Institutional and a Lecturer at MIT and Columbia University. You may follow him on X @johnjdagostino. The authors would like to thank Omid Malekan and Lewis Cohen for their contributions in writing this essay.

“The further a society drifts from the truth, the more it will hate those who speak it.”

  • Selwyn Duke

 

The Gini coefficient, a statistical measure of economic inequality, represents the (unequal) distribution of income, wealth, or consumption within a country or social group. A relatively high coefficient is generally considered undesirable (outside of Ayn Rand devotees), yet most modern capitalist economies struggle with its seeming inevitability and resistance to remediation.


The advent of AI creates an analogue to this wealth metric, one we term the Reality Coefficient. AI, and more specifically access to cutting-edge and expensive AI, will create an ever-widening knowledge chasm between those who can control what appears real and those who must either accept what they see as real or exist in a state of intellectual stasis, never knowing what is fact and what is fiction. This chasm is not purely technical: the enormous costs (in capital investment, energy, and access to data) involved in training cutting-edge large language models threaten to create a limited aristocratic class with access to systems that are wildly smarter and faster than what others can access.

 

This chasm has obvious long-term social, economic, geopolitical, and national security implications, most notably around the use of deepfakes to destabilize election outcomes. Although, based on what we know now, the 2024 U.S. Presidential election did not appear to be heavily influenced by the use of AI or deepfakes, there is no question that as we move forward, those with a political agenda will attempt to use this technology to influence and potentially disrupt democratic processes around the world. These threats can come from bad actors (internal and external) seeking to sway an outcome for personal gain or from those simply seeking to test their skills—meaning those with no discernible political or economic motive beyond chaos.

 

Compounding this problem is that as computing power has grown, so has distrust in authority, perhaps by design. This mistrust of centralized systems is particularly acute over critically important issues like politics and the economy. The Washington Post found that over 70 percent of respondents, fairly evenly distributed across Democrats and Republicans, mistrust the mainstream media's polling results, while Pew Research recorded in a recent survey that public trust in government is at its lowest measured levels.

 

This is troubling: even conflict grounded in a commonly understood set of facts can escalate to catastrophe. Over time, however, norms consolidate, certain ideals prevail, and society moves on from a shared understanding of what counts as fact or truth at a given point in time, with the understanding that this truth may later change or evolve.

 

Conflict without even a baseline of common understanding, however, escalates and infects society with incurable mistrust. As AI hurtles forward, we hope to build equally scalable counter-systems to maintain balance. If the level of trust a society places in its centralized authoritative entities (both governmental and commercial) is inversely correlated with the wealth and reality gaps, we will not need AI to cause a major rift in future elections; the gaps alone will suffice.

 

What if we could slow and then stop at least one wildly detrimental effect of widespread AI—specifically, the loss of confidence in identity? Photoshop (developed in 1987) did not, ultimately, render photographic evidence (in society or law) obsolete, perhaps because computing power and connectivity were limited enough at its inception to allow society the time to absorb the power of that technology and adapt our systems and thinking accordingly.

 

We do not have the same luxury of time with AI when it comes to its impact on photographic, video, and interpersonal (farewell, Turing Test) elements of truth. We have to expect that future election cycles, in the U.S. and abroad, will contain myriad examples of near-perfect simulations. If enough of these simulations infect the locus of true data, we are left with a morass of confusion, where fake data is believed not because viewers have been convinced, but because they have been fooled into trusting a source they thought was credible but was, in fact, an impersonation. In this world, both fake and real data seem equally suspicious and indiscernible.

 

So, what can be done to slow down the infection, to allow society time to produce the "antibodies"? As with the original work on the early blockchain, Dr. Stuart Haber and I believe that the key is both technological and social, where truth of identity (as a starting point) can be ascertained through immutable referral databases—the wisdom of curated crowds, if you will. Imagine if technology existed that could replicate years of social interaction by confirming a web of real-world references that greatly limit the ability to impersonate or create identities. If we understand identity, and, by extension, understand (at a minimum) the true and verified source of data/information, then our ability to ascertain the truth of that communication can operate as a function of our trust in that source.

 

AI will impact all our lives, in both the digital and physical realms. Its impact will be felt everywhere. However, the greatest journey begins with a single step. As such, we, and this article, focus on one simple problem: the self.

 

We believe that establishing the identity of content creators lies at the heart of unwinding the deepfake problem. We are less concerned about whether content has been synthesized or altered from its original state than about who is willing to take responsibility for the product. In other words, when it comes to visual or audio proof, identity provenance will be more important than objective truth, as the latter will be a function of belief and perspective. This is not without precedent: in law, the concept of evidentiary provenance is on par with evidentiary truth. If a defendant can cast doubt on the integrity of a piece of evidence, the ultimate truth of that evidence becomes potentially irrelevant. After all, any digital content nowadays is processed by computer hardware running software algorithms and is, in at least one sense, manipulated—think of the heavily filtered Instagram photos so many of us post on a near-daily basis. What has not changed is that some person is ultimately responsible for the decisions made as to how to alter the content in producing it. And so, connecting the identity of the owner with the content forms the basis of deciding what content to trust. If we could always identify the persons creating particular content, we could begin to make our own judgments about how truthful the content creator is and how much of what they produce can be relied on.

 

While that will be a different decision for each person who views content, it is not at all different from what we did in a world before it became so heavily digital. Namely, we interacted with individuals, listened to what they had to say, perhaps read what they wrote, and, on the basis of those interactions, formed opinions of both them as sources of information and the information they produced. We also indirectly formed opinions of those more removed from us based on what others, whom we did know, told us about them. We identified people based on face-to-face interactions and their appearance, mannerisms, and voice—a luxury we no longer have. We argue that it is this distance from experienced identity verification and association, more than technical advancements in content manipulation, that has driven society's increasing mistrust in digital content. We need a new mechanism to confirm identity without the benefit of being face-to-face, one suited to an age dominated by digital communication filled with cheap, limitlessly generated synthetic AI content.

 

The task, then, is to formalize in a technological sense what we already do informally through our network of social associations: to find a way to reliably bind the identity of creators to the content they create, regardless of the means used for its creation. We are happy to note that such a system is in operation today. Not only does it use conventional blockchains as a universal ledger to backstop the effort, but it also applies the key concepts of blockchains to solve the identity problem.

 

Few people know this, but blockchain technology was originally invented to solve a similar problem of provenance. The original vision, conceived decades ago (nearly two decades before the invention of Bitcoin), was as a solution to the modern version of a challenge people have grappled with for millennia: how to conclusively authenticate (without relying on a centralized entity) when a digital document was first created and last edited. For physical media, the problem was solved with signatures, seals, and third-party notaries. But how does one notarize a Word file, one that can easily be edited immediately after the original content was attested to? If digital data is the new primary form by which we store and pass on knowledge, then the digital economy needs a way to timestamp underlying data so that whatever form it ultimately takes can be verified in its authenticity.
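The timestamping idea can be sketched with a cryptographic hash: a fixed-length fingerprint that changes completely if even one byte of the document changes. A minimal illustration in Python (the memo text is, of course, hypothetical):

```python
import hashlib

draft = b"Memo: terms agreed as discussed."
fingerprint = hashlib.sha256(draft).hexdigest()

# Any edit, however small, yields a completely different fingerprint,
# so a widely witnessed, timestamped hash attests to the exact bytes
# of the document without revealing its contents.
edited = b"Memo: terms agreed as amended."
assert hashlib.sha256(edited).hexdigest() != fingerprint
```

Publishing only the fingerprint to a public ledger lets the author later prove the document existed, unaltered, at that point in time.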

 

Blockchains are particularly suitable for this goal. The practical immutability of the data stored through blockchain networks stems from these networks’ unique structure and cryptographic principles. When information is added to a blockchain, it is bundled into a block along with other recent transactions. This block then undergoes a complex mathematical process called cryptographic hashing, generating a unique digital fingerprint. The newly created block is linked to the previous one using its hash, forming a chain of blocks where each carries the fingerprint of its predecessor.

 

This chaining mechanism ensures that any attempt to modify a block would not only change its own hash but also invalidate the hashes of all subsequent blocks. This cryptographic chain reaction makes it virtually impossible to tamper with data without detection, guaranteeing the ledger’s immutability. This process is known colloquially as hardening. The more a blockchain network is used and the more participants the network has to validate new blocks as they are being added, the harder and more expensive it becomes to break or roll it back.
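The chaining mechanism described above can be reduced to a few lines of Python. This is a deliberately simplified sketch (no transactions, consensus, or proof-of-work, and the record strings are invented for illustration):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's fingerprint covers its own data AND its
    # predecessor's hash, so blocks are cryptographically linked.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis predecessor
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def validate(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob: 5", "bob->carol: 2", "carol->dan: 1"])
assert validate(chain)

chain[1]["data"] = "bob->mallory: 200"  # tamper with a middle block
assert not validate(chain)              # detection: the stored hash no longer matches
```

Editing the middle block changes its recomputed hash, which no longer matches the stored one; an attacker would have to recompute every subsequent block as well, which is exactly what network-wide validation makes prohibitively expensive.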

 

The benefits of such a ledger are numerous. It enhances security by significantly reducing the risk (by increasing the cost) of data breaches and manipulation, which is particularly valuable for sensitive information like financial records or medical data. The tamper-proof nature of the ledger also increases transparency, as anyone can participate in the network, and all participants in the network can access a complete and unedited history of transactions. This eliminates the need for intermediaries to verify data, streamlining processes and reducing costs. 

 

Furthermore, the immutable ledger provides a permanent and verifiable record of events, making it easier to conduct audits and investigations. This feature is crucial for regulatory compliance, dispute resolution, and building public trust in the record. In sectors like healthcare, finance, and supply chain management, blockchain’s immutability and resistance to censorship/attacks can revolutionize how data is stored, shared, and verified, leading to increased efficiency and trust among stakeholders for the most sensitive datasets, such as health records, financial transactions, and identity.

 

For a tangible indication of how useful such a solution could be, we point to Bitcoin, whose trillion-dollar value would not be possible without a bulletproof way of maintaining the chain of ownership of each coin. Every successful financial solution is, at its core, two things: an identity scheme and an accounting system. Our solution applies the same principles to relationships and content.

 

We suggest an approach utilizing blockchain technology, specifically multiple public blockchains, due to blockchain’s open architecture, transparency, and tendency to harden (i.e., become more difficult and expensive to hack) with scale. However, the process begins in a decidedly non-technical manner: with a genesis block based on known interpersonal relationships.

 

First, the individual creates a unique key pair according to well-established cryptographic principles. They bind their identity to the key pair by co-signing their relationships with others: people who know them and can vouch for the high probability of their confirmed identity. It is helpful for the genesis-block identities to be true, but not essential, as even a false genesis entry can, over time, be flagged by enough users—in a sense, overwhelming a "bad" transaction with numerous "good" correcting transactions. By gradually building out this "network of trust," participants anchor their identity to that of their associates. Then, they assert ownership of the content they create by signing it: applying their private key to the hash of their content and providing proof of its association with their public key. These co-signed relationships are essentially blocks in a chain of provenance they create for their authorized content. Finally, a free and publicly available browser extension allows others to verify the binding of the content to the creator's public key. All of this follows the principles of transparency, widely witnessed events, and decentralization that have made blockchains immutable records, now crafted for the cause of unforgeable identity. Of course, a creator could surreptitiously claim authorship of a work not original to them, but the system will increasingly dissuade such malicious activity as more and more works acquire readily provable provenance. At a minimum, observers would begin with a rebuttable presumption of the validity and originality of a work signed by a creator and committed to a blockchain, until other, more credible evidence becomes available. Over time, such a system can make identity unimpeachable, securely bind an individual's content creation to their identity, and do so in a way that lets viewers of the content easily verify its source. The entire effort leverages conventional blockchains as the immutable ledger to backstop the system.
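The sign-and-verify step can be illustrated with a deliberately tiny "textbook" RSA key pair. This is a toy sketch only: the primes below offer no security whatsoever, and a real deployment would use a vetted library and a modern scheme such as Ed25519.

```python
import hashlib

# Hypothetical toy key pair (illustrative; real moduli are ~2048 bits).
p, q = 10007, 10009
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 65537                  # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse of e)

def h(content: bytes) -> int:
    # Hash the content, reduced into the signing domain.
    return int.from_bytes(hashlib.sha256(content).digest(), "big") % n

def sign(content: bytes) -> int:
    # Apply the private key to the hash of the content.
    return pow(h(content), d, n)

def verify(content: bytes, sig: int) -> bool:
    # Anyone holding the public key (e, n) can check the binding.
    return pow(sig, e, n) == h(content)

video = b"authorized clip, v1"
sig = sign(video)
assert verify(video, sig)                 # the creator's content checks out
assert not verify(b"tampered clip", sig)  # altered content fails verification
```

The browser-extension check described above amounts to running `verify` against the creator's published public key: the viewer learns who stands behind the content, whatever tools were used to make it.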

 

In this way, individuals can more forcefully reassert their online identity, authorize their content, and control their narrative. A natural side effect is to deflate the deepfake balloon, not so much by attacking it directly as by rendering it less relevant and more costly: the more widely it is known that signed content is both available to anyone and readily verified, the less credibility unsigned content is given.

 

The integrity of this system is maintained because any false records (meaning impersonations) can be successfully challenged and corrected. This happens faster as an identity scales (i.e., is viewed more frequently), making the system immediately useful for higher-profile individuals but equally useful, over the longer term, for everyone. Consider the efficiency of Community Notes on an X post viewed millions of times versus one viewed only a handful of times: the former has a much higher probability of generating effective debunking or supporting evidence. Consequently, we see this system being particularly useful for building provenance of identity ownership for those who seek to scale their identity for purposes such as monetization or political and social influence.

 

For those looking to monetize their identity (e.g., athletes, influencers, entertainers), provable provenance is a powerful tool to both protect and optimize the value of their output. Certified instances can be priced at a premium, for example, and an immutable record should make it easier to pursue fraud claims, civil or criminal. Nor will this be useful only for the famous. Much as parents work to build a credit history for their children from an early age, they will also look to build identity provenance as a way to help their children emerge into adulthood in a world where the ability to impersonate has scaled exponentially and collapsed in cost, making mass impersonation economically viable. For both the famous and the not-so-famous, the earlier this process starts, the more likely their validity will be confirmed. This, in turn, allows them to cryptographically attach proof to any likeness or output claiming to be them or from them. In videos that carry metadata (though some services scrub it), the proof is baked into the metadata, so identity verification travels with the video; where metadata is scrubbed, a behind-the-scenes caching system can achieve the same effect.

 

At first this might be a manual, tedious process, but over time we expect AI agents to assist. For the famous, their followers can provide leverage. Over time, your identity will be hardened just as a blockchain is. While nothing is impervious to breach, this process makes it less economically viable for bad actors to try to roll back those repeated confirmations. The Bitcoin blockchain, for example, is so hardened at this point that overturning a significant portion of it would require an appreciable share of the world's power generation. In similar fashion, the difficulty of forging an identity grows at comparably spectacular rates the more widely one asserts the authenticity of one's content. And while a noted celebrity with many millions of followers and a deep network of colleagues begins with a formidable head start on hardening, a less notable person within the general populace can accumulate the same hardening benefits, given sufficient time and use.

 

Even if not every instance of impersonation or genuine depiction can be flagged as false or true, this approach will at least sort virtual representations into buckets of confirmed, false, or unverified. To be clear, we are not proposing that blockchain-based identity can determine the truth. The content itself might be absolutely false, manipulative, or just downright dumb. What we do believe blockchain can help solve is true provenance. I might not know whether what you say is true, but I will know that it was you who said it.

 

“Know thyself, know thy enemy.”

  • Sun Tzu

 

We begin and end with people. We envision a world where individuals choose to authorize content and allow its proof to travel with it across the Internet. A world where videos that surface without attribution are considered suspect until blockchain-verified users confirm their authenticity. A world where our friends, family, colleagues, followers, or anyone we have impacted in a personal way work together to help us all strengthen our personal identities against an onslaught of easily produced deepfakes. It may not be a world of verified truth, but it can be a world of authorized and verified identity-based content, with no need for centralized intermediaries to slow things down or increase costs. A world of widely witnessed association.
