🚀 The Scalability Problem is solved

The Linear Scalability Thesis

Written by Sooraj | Reviewed by Matthew Hine & Avaunt

Bitcoin was a technological breakthrough that has fundamentally changed the nature of financial transactions.

Bitcoin enabled transactions to be verified and recorded in a way that does not rely on trust between individuals or centralized authorities like banks. By rendering the need for trusted intermediaries obsolete, Bitcoin embodies a world where absolute certainty is attainable through objective, mathematical verification rather than subjective human trust.

This was a paradigm shift from traditional financial systems underpinned by centralized authorities to a peer-to-peer system without a central governing body.

Ethereum's major technological leap was creating a trustless, decentralized platform for building applications and coordinating human activity. The Ethereum Virtual Machine (EVM) established a "world computer," which provides a single, shared environment where all smart contracts and applications coexist and can freely communicate with each other.

Any application built on the EVM has access to all the data and functionality of other applications on the network. Smart contracts on Ethereum enabled trustless interactions, placing trust in code and cryptography rather than intermediaries. This fundamentally changed how we thought about society's structures, agreements, and relationships.

And what seems to be the biggest challenge in this space right now is how to scale that "world computer" without breaking the composability between the applications in that system.

Let me explain.

The Quest for a Scalable World Computer

Source: Ethereum's CTO Dr. Gavin Wood presents "Ethereum for Dummies" / https://archive.devcon.org/archive/watch/1/ethereum-for-dummies/?playlist=Popular&tab=YouTube

When Ethereum pioneered the "world computer" vision, allowing anyone to build applications without permission while maintaining atomic composability, it was a fantastic success. We saw different apps popping up and enabling new use cases.

Such flexibility, often called "composable money legos," allows developers to treat various protocols as building blocks that can be assembled in creative ways to unlock new use cases. For example, a DeFi lending protocol can easily integrate with a decentralized exchange to allow users to seamlessly trade their borrowed assets.

But the question has always been how many transactions this "world computer" can process at a given time.

From Solana to Sui, newer platforms share the goal of creating a distributed ledger system that supports the world computer while processing more transactions.

Yet every one of them has run into a hard cap on throughput.

Now the question is whether we can have a world computer that allows atomic composability between applications, lets developers write secure applications, and scales with demand, processing billions of transactions a second if needed. In other words, a linearly scalable system.

Basically, this is the Radix thesis for scalability.

The overarching philosophy of Radix's approach to scalability is this: for Web3 applications to achieve mainstream adoption and replace traditional finance, the underlying distributed ledger platform must handle increasing transaction volumes and user demand without hitting bottlenecks or degrading performance, and without compromising decentralization, security, or atomic composability between applications.

In this write-up, we are going into the technical fundamentals of the approach Radix is taking to enable linear scalability.

So let’s go.

What is Linear Scalability?

Linear scalability is a fundamental requirement for realizing the vision of a truly decentralized and globally accessible "world computer" that can support complex applications and agreements at a global scale.

The core idea behind linear scalability is the ability to increase a system's throughput proportionally as more resources (nodes, computing power, etc.) are added to the network.

This allows the system to scale seamlessly without running into bottlenecks or centralized points of control, while retaining atomic composability between the applications in the network.

Rollup-Centric Scalability Roadmap: An Antithesis to the World Computer Vision

The problem with Ethereum's rollup-based scalability approach is that it creates multiple separate execution environments, or "rollup chains," rather than scaling the base layer itself.

This fragments the Ethereum network into siloed chains. Users and developers must navigate a complex landscape of different rollups, each with its own liquidity, security models, and tooling.

This goes against Ethereum's original vision of a unified, globally composable "world computer."

While blockchains like Solana and Sui aim to scale Ethereum's initial world computer concept, their approaches ultimately face inherent limitations that cap their throughput capacity.

Solana, for example, uses a unique Proof of History mechanism and parallel transaction processing with its Sealevel runtime to achieve high throughput. However, its performance is still bounded by the hardware of individual validator nodes, and its network is not designed to scale linearly with the number of validator nodes.

As the network grows and transaction volumes increase, Solana may eventually hit a scalability ceiling, as seen during the memecoin season, which caused congestion on the network.

Similarly, Sui introduces innovative features like its object-centric data model and parallel execution engine. These optimizations enable impressive scalability gains compared to Ethereum. However, like Solana, Sui's throughput is ultimately bounded by the vertical scaling limits of its validator hardware.

In contrast, Radix's approach aims to achieve true linear scalability at the base layer itself, without any theoretical limits. The key innovation powering this is Radix's Cerberus consensus protocol, which leverages a unique pre-sharded ledger architecture and its tight integration with the Radix Engine application layer.

So let's explore the three core components of the Radix distributed ledger platform, and how together they bring the vision of a linearly scalable world computer to life!

The Radix Engine Execution Layer

Radix redefines the concept of an execution layer by integrating concepts that are tried, tested, and highly successful in the web2 space, differing significantly from systems like Ethereum or Solana.

Much like game engines such as Unreal Engine revolutionized game development, the core philosophy of the Radix Engine centers around streamlining development processes.

Game engines provide a unified framework that handles graphics, physics, and other low-level functions, allowing developers to focus on game design and storytelling.

Similarly, the Radix Engine aims to simplify the development of web3 applications by offering a comprehensive toolkit that manages the intricate details of asset management and transactions. This abstraction allows developers to concentrate on creating innovative financial applications without getting bogged down by the underlying complexities of blockchain technology.

Asset-Oriented Design: A Core Principle

One of the key aspects of this streamlined approach is the asset-oriented design philosophy that guides the development of the Radix Engine.

This philosophy aims to provide a more intuitive, secure, and efficient approach to building web3 applications and managing digital assets. The Radix Engine treats digital assets, such as tokens and NFTs, as first-class citizens within the platform. Assets are not just balances in smart contracts but are perceived and treated as tangible objects that can be held, transferred, and controlled directly by users. This native asset handling simplifies asset management and reduces the risk of smart contract vulnerabilities.

This design philosophy represents a fundamental shift from the account-based, message-passing model used by platforms like Ethereum. It recognizes that assets are the core of most apps and provides a purpose-built environment that prioritizes their secure and efficient management.

Finite State Machine (FSM)

The Radix Engine utilizes the concept of a finite state machine (FSM) to manage assets. An FSM ensures that an asset can only change states in predefined ways, offering several benefits.

An FSM guarantees that assets only transition between valid states, preventing errors such as transferring assets without proper ownership or duplicating assets outside their defined rules. The FSM model mimics how physical assets behave; for instance, cash cannot multiply on its own, and a ticket can only be used by its holder.

This alignment with real-world logic makes digital asset management more intuitive for developers.
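As a rough illustration of the idea, here is an FSM-governed asset in Python. The ticket states, transitions, and ownership check are hypothetical examples, not Radix Engine code:

```python
from enum import Enum, auto

class TicketState(Enum):
    ISSUED = auto()
    TRANSFERRED = auto()
    USED = auto()

# Allowed transitions: a ticket can change hands until it is used;
# once USED, no further transitions are valid (no reuse, no duplication).
TRANSITIONS = {
    TicketState.ISSUED: {TicketState.TRANSFERRED, TicketState.USED},
    TicketState.TRANSFERRED: {TicketState.TRANSFERRED, TicketState.USED},
    TicketState.USED: set(),
}

class Ticket:
    def __init__(self, owner: str):
        self.owner = owner
        self.state = TicketState.ISSUED

    def _move(self, new_state: TicketState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"invalid transition {self.state} -> {new_state}")
        self.state = new_state

    def transfer(self, new_owner: str) -> None:
        self._move(TicketState.TRANSFERRED)
        self.owner = new_owner

    def use(self, caller: str) -> None:
        # Mimics the real-world rule: only the holder can use the ticket.
        if caller != self.owner:
            raise PermissionError("only the holder can use the ticket")
        self._move(TicketState.USED)

t = Ticket("alice")
t.transfer("bob")
t.use("bob")  # valid: bob holds the ticket
```

Because invalid transitions raise instead of silently succeeding, whole classes of asset bugs become impossible by construction rather than something each developer must remember to guard against.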

Developer-Friendly Tools

The Radix Engine offers Scrypto, a smart contract language based on Rust, which is known for its safety and performance.

Scrypto extends Rust with native asset management features, providing built-in functions for common asset operations such as minting (creating new assets), burning (destroying assets), and transferring assets. These built-in functions reduce the amount of code developers need to write and test.

By abstracting complex asset management tasks, Scrypto allows developers to focus on building unique features of their applications instead of reinventing the wheel.
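To make the idea concrete (in Python, not Scrypto syntax), here is a toy model of system-provided asset operations. The `Resource` class and its invariants are illustrative assumptions, showing only the principle that mint, burn, and transfer come from the platform and always conserve supply:

```python
class Resource:
    """Toy model of platform-native asset operations: mint, burn, and
    transfer are provided by the system, and total supply always equals
    the sum of all balances."""

    def __init__(self):
        self.total_supply = 0
        self.balances: dict[str, int] = {}

    def mint(self, to: str, amount: int) -> None:
        assert amount > 0
        self.balances[to] = self.balances.get(to, 0) + amount
        self.total_supply += amount

    def burn(self, frm: str, amount: int) -> None:
        assert 0 < amount <= self.balances.get(frm, 0), "insufficient balance"
        self.balances[frm] -= amount
        self.total_supply -= amount

    def transfer(self, frm: str, to: str, amount: int) -> None:
        # Conservation: a transfer moves tokens; it never creates them.
        assert 0 < amount <= self.balances.get(frm, 0), "insufficient balance"
        self.balances[frm] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

usd = Resource()
usd.mint("alice", 100)
usd.transfer("alice", "bob", 40)
assert usd.total_supply == sum(usd.balances.values())  # invariant holds
```

In an account-based smart contract model, every token contract reimplements this logic by hand; when the operations are native, the invariant cannot be broken by application code in the first place.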

Security and Reliability

In 2022 alone, over $2.598 billion worth of crypto was stolen via DeFi breaches. In 2023, funds stolen from DeFi protocols were around $1 billion. In Q1 2024, DeFi platforms lost more than $336 million in digital funds.

In a world where we lose billions of dollars in DeFi hacks, ensuring security is of paramount importance if web3 applications are to gain real-world adoption.

The asset-oriented approach and use of FSMs inherently prevent many common smart contract vulnerabilities. By enforcing strict rules and state transitions for assets at the protocol level, the Radix Engine eliminates entire classes of bugs and exploits that plague other smart contract platforms.

Developers and users can trust that assets will behave as expected, which is crucial for financial applications where security and reliability are paramount.

By prioritizing security and reliability from the ground up, the Radix Engine aims to provide a robust and secure foundation for applications, particularly in the web3 space, where security vulnerabilities can have severe financial consequences.

But having a world-class execution engine like the Radix Engine is only one part of the puzzle.

It alone is not enough to enable truly global-scale web3 applications. The execution environment must be able to handle the demands of millions and eventually billions of users without compromising on performance, security, or decentralization. 

The integration of the Radix Engine with Cerberus, Radix's pre-sharded consensus environment, addresses this requirement.

Pre-Sharding: The Foundation of A Linearly Scalable Network

Radix's approach to sharding differs significantly from most other platforms. While many projects opt for dynamic sharding, where new shards are added incrementally as network demand increases, Radix has chosen the path of pre-sharding.

This design decision is rooted in the belief that for a linearly scalable network, it's more efficient to manage the movement of validators than to constantly redistribute state across new shards.

So let's dive deeper into this concept of a massive Pre-Sharded Ledger with deterministic shard derivation:

With Cerberus, Radix starts with a fixed shard space of 2^256 shards from day one. This means the network is divided into an incredibly large number of shards before any transactions even take place. 

Each shard contains a substate, which is a discrete unit of data. Substates can represent anything from a single token to an entire dApp and its users' tokens.

In contrast, most other sharded blockchains start with a single shard and add more shards incrementally as the network grows.

Each shard in Radix's ledger is deterministically derived based on a key reference field, such as a public key. This means that given an input (e.g., a public key), the corresponding shard can always be located efficiently.

As transactions occur, data is added to the appropriate shards based on the deterministic mapping. With a known location ID for each shard, it's simple to route transactions for processing. Nodes then only need to handle data for the shards they manage, not the entire ledger.

This setup makes it easy and efficient to locate the shard responsible for any transaction. Related data stays grouped, avoiding costly data reorganization as the network grows. This method also speeds up lookups and reduces cross-shard communication. In contrast, dynamic sharding requires constant network monitoring and decisions about when to add new shards, which is more complex.
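The deterministic mapping can be sketched in a few lines of Python. The hash choice and the node-assignment rule below are illustrative assumptions, not Radix's actual derivation function:

```python
import hashlib

SHARD_SPACE = 2**256  # fixed shard space, defined from day one

def shard_of(key: bytes) -> int:
    """Deterministically map a key reference (e.g. a public key) to a
    shard ID. The same key always lands on the same shard, so any node
    can locate the data without a global lookup table."""
    digest = hashlib.sha256(key).digest()  # 256-bit digest -> shard ID
    return int.from_bytes(digest, "big") % SHARD_SPACE

def node_for_shard(shard_id: int, node_count: int) -> int:
    # With a fixed shard space, routing is simple arithmetic: here, an
    # even range partition of the shard space across the node set.
    return shard_id * node_count // SHARD_SPACE

alice_shard = shard_of(b"alice-public-key")
assert shard_of(b"alice-public-key") == alice_shard  # same key, same shard
node = node_for_shard(alice_shard, node_count=1000)
assert 0 <= node < 1000  # exactly one responsible node range
```

The point of the sketch: because the mapping is pure computation, locating a shard never requires network coordination, and adding nodes only changes the partition arithmetic, not the data's addresses.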

This pre-sharded ledger, with its 2^256 shards, has the theoretical capacity to process millions of transactions per second and store vast amounts of data. As the network grows and more nodes join, additional shards can be allocated to those nodes, allowing the network's throughput to increase linearly.

While having a world-class execution environment and a pre-sharded ledger capable of accommodating global-scale demands is crucial, the real challenge lies in coordinating transactions and maintaining consistency within and across the shards.

This is where Radix's unique consensus algorithm, Cerberus, becomes essential.

Cerberus: Enabling Efficient Cross-Shard Consensus

The Cerberus protocol is a cross-shard consensus protocol designed to enable fast and secure cross-shard communication in Radix's massively sharded environment.

The operation of Cerberus hinges on two fundamental components:

  • Local consensus within each shard and

  • Cross-shard communication

Within a shard, validators must reach an agreement on which transactions to process and in what order. This local consensus is crucial for ensuring that all validators within a shard are synchronized and agree on transaction details.

Once this agreement is reached, the cross-shard communication component of Cerberus comes into play. This component allows shards to communicate with each other, coordinate transactions, and maintain consistency across the entire network.

Serverless Mechanism for Cross-Shard Communication

To facilitate these processes, Cerberus uses a serverless mechanism that enables seamless communication between shards. This serverless design means that there is no need for a central server to manage the communication, which enhances the protocol's scalability and robustness.

The consensus within each shard and across shards happens simultaneously in a single 3-phase process. During this process, the validators across all involved shards collectively agree on the correctness and order of a transaction, consistent with the state of their shards. If there is a problem in any shard, the entire transaction fails across all included shards. This approach maintains the integrity and consistency of the shard's state and ensures that transactions involving multiple shards are processed correctly.

This mechanism involves exchanging states, producing quorum certificates, and validating these certificates to ensure that transactions are completed successfully across all involved shards.

Braiding Mechanism for Cross-Shard Consensus

One of the key innovations of Cerberus is its "braiding" mechanism, which conducts consensus across only the necessary shards for each transaction. When a transaction involves multiple shards, Cerberus dynamically establishes a "braid" that connects the relevant shard sets.

This braid allows the transaction to be processed atomically across the involved shards, ensuring consistency and preventing double-spends.

Cerberus employs a three-phase commit (3PC) process within each shard to establish agreement among the shard's validators.

The braiding mechanism intertwines these 3PC processes across the relevant shards, creating a unified, atomic commit. This ensures that the transaction is either committed or aborted in its entirety across all involved shards.

The braiding of 3PC processes across shards results in an "emergent" multi-shard consensus. This emergent consensus is just as secure and atomic as a single-shard consensus but spans multiple shards. It allows complex, cross-shard transactions to be executed and committed atomically without compromising composability.

By focusing only on the necessary shards, the braiding mechanism enhances the efficiency and scalability of the protocol. This approach contrasts with traditional blockchains that rely on global transaction ordering, which can be inefficient and slow.
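The braid's all-or-nothing property can be sketched in Python. This is a deliberately simplified model of the atomicity guarantee, not the actual three-phase Cerberus protocol; the shard objects and vote logic are assumptions for illustration:

```python
class Shard:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy
        self.state = {}    # committed substates
        self.pending = {}  # locked writes awaiting the cross-shard outcome

    def prepare(self, tx_id: str, writes: dict) -> bool:
        # Validate locally and lock the touched substates.
        if not self.healthy:
            return False  # local validation failed -> vote "no"
        self.pending[tx_id] = writes
        return True

    def commit(self, tx_id: str) -> None:
        self.state.update(self.pending.pop(tx_id))

    def abort(self, tx_id: str) -> None:
        self.pending.pop(tx_id, None)

def braided_commit(shards, tx_id, writes_per_shard) -> bool:
    """All-or-nothing commit across only the shards a transaction touches:
    every involved shard must vote yes, or nothing is committed anywhere."""
    involved = [s for s in shards if s.name in writes_per_shard]
    votes = [s.prepare(tx_id, writes_per_shard[s.name]) for s in involved]
    if all(votes):
        for s in involved:
            s.commit(tx_id)
        return True
    for s in involved:
        s.abort(tx_id)
    return False

a, b, c = Shard("A"), Shard("B"), Shard("C")
assert braided_commit([a, b, c], "tx1", {"A": {"x": 1}, "B": {"y": 2}})
assert c.state == {}  # uninvolved shard never participates
```

Note that shard C does no work at all for this transaction, which is exactly why braiding scales: consensus cost is proportional to the shards a transaction touches, not to the size of the network.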

Partial Ordering of Transactions

Cerberus also introduces a novel approach to ordering transactions. Instead of relying on a global transaction order, Cerberus allows each transaction to specify its relevant shards so that ordering only needs to occur between transactions when they interact with the same shard – this is called partial ordering.

This enables parallel processing of unrelated transactions across different shards, significantly boosting throughput, while the braiding mechanism ensures that related transactions are processed atomically and consistently. Importantly, because most transactions are unrelated to each other, more nodes can be added to process more transactions in parallel. This parallelization is key to Cerberus' linear scalability.
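A minimal sketch of the ordering rule, with hypothetical shard names:

```python
def must_order(shards_a: set, shards_b: set) -> bool:
    """Two transactions need a relative order only if their shard sets
    overlap; transactions with disjoint shard sets can be processed in
    parallel on different nodes."""
    return bool(shards_a & shards_b)

# Each transaction declares the shards it touches.
txs = {
    "t1": {"shard_A", "shard_B"},
    "t2": {"shard_C"},             # disjoint from t1
    "t3": {"shard_B", "shard_D"},  # shares shard_B with t1
}

assert not must_order(txs["t1"], txs["t2"])  # can run in parallel
assert must_order(txs["t1"], txs["t3"])      # must be sequenced
```

Contrast this with a globally ordered chain, where t1 and t2 would be forced into a single sequence even though neither can possibly affect the other.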

Shard-Level Byzantine Fault Tolerance (BFT)

Within each shard, Cerberus employs a Byzantine Fault Tolerance (BFT) style consensus to establish agreement among the shard's validators. BFT ensures that the shard remains secure and consistent, even in the presence of malicious actors.

By combining shard-level BFT with cross-shard braiding, Cerberus can scale securely & linearly while maintaining high performance. This combination of local and global consensus processes ensures that the network can handle a large number of transactions efficiently and securely.

Linear Scalability & Progressive Decentralization

With fully sharded Cerberus, the network will scale linearly according to demand by progressively increasing the number of validators and assigning them to shards as needed. 

Initially, the Babylon network operates with a small set of validators, fixed at 100, and handles around 50 TPS. This phase uses a "single shard" form of Cerberus. As the network approaches its capacity, it will transition to the Xi'an phase, a major protocol update that will implement the fully sharded form of Cerberus. At this point, the validator set will no longer be fixed, and the network's scalability will greatly increase with massive parallelism.

In this phase, the number of validators can grow beyond the initial set, enhancing the network's capacity and decentralization. As network load increases, the validator set expands across multiple shard groups to manage the demand. For example, when the load becomes too high for a single group of 500 validators, it can split into two groups of 250 each; further splits occur as more validators join, such as 900 validators dividing into three groups of 300.

Validators also rotate between different shard sets at the end of each epoch in a random fashion. This prevents collusion and control over specific shards.

This process of adding validators and assigning them to shards continues progressively, ensuring a gradual and scalable increase in both capacity and decentralization. Thus, the network scales linearly by adding more validators and distributing them across shards as needed.
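The split-and-rotate idea can be sketched in Python. The epoch-seeded shuffle and even group sizes below are illustrative assumptions, standing in for the network's actual source of randomness:

```python
import random

def assign_validators(validators, group_count, epoch):
    """Shuffle the validator set with an epoch-derived seed and split it
    into shard groups. A new epoch produces a new assignment, so no
    validator can camp on a particular shard set."""
    rng = random.Random(epoch)  # illustrative seed, not the real randomness
    shuffled = list(validators)
    rng.shuffle(shuffled)
    size = len(shuffled) // group_count
    return [shuffled[i * size:(i + 1) * size] for i in range(group_count)]

validators = [f"v{i}" for i in range(500)]
groups = assign_validators(validators, group_count=2, epoch=7)
assert all(len(g) == 250 for g in groups)  # 500 validators -> 2 groups of 250
```

Because the assignment is recomputed every epoch, colluding validators would need to end up in the same group by chance, repeatedly, which becomes vanishingly unlikely as the validator set grows.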

Peer Review and Formal Verification 

The Cerberus consensus protocol has undergone rigorous peer review and formal verification to ensure its security, correctness, and performance.

The academic paper titled "Cerberus: Minimalistic Multi-shard Byzantine-resilient Transaction Processing" was accepted for publication in the Journal of Systems Research (JSys) in June 2023. This peer review process involved independent experts verifying the proof, theory, and soundness of the protocol, comparing it to other state-of-the-art multi-shard consensus protocols like Chainspace, AHL, Sharper, and RingBFT.

Additionally, Cerberus has undergone formal verification to mathematically prove its correctness and security properties. This ensures the protocol is free from logical errors and vulnerabilities, and provides a high level of assurance in the design and implementation of Cerberus.

Integration with the Radix Engine

When deployed, Cerberus will be tightly integrated with the Radix Engine, which translates application-level actions into discrete "substates" or transactions.

The Radix Engine specifies the relevant shards for each transaction based on the resources and components involved. This integration allows Cerberus to efficiently route transactions to the appropriate shards and establish braids as needed.

Developers do not need to manually manage shard interactions, as the Radix Engine and Cerberus handle these complexities automatically. This tight integration simplifies the development process and ensures that the network operates efficiently and securely.

The implementation of the Cerberus consensus protocol on the Radix mainnet will happen with the upcoming Xi'an upgrade. As of now, this is being tested in the Cassandra Test Network.

The core focus of the Cassandra Test Network is to research, test, and demonstrate various implementations of the Cerberus consensus protocol before it is fully implemented in the Radix Xi'an mainnet upgrade.

Building The Rocket Around a Rocket Engine

The primary purpose of the Cassandra Network is to develop and test concepts for the components needed to build a linearly scalable and decentralized network.

While Cerberus provides an excellent consensus mechanism for cross-shard communication, ensuring linear scalability, there are other essential pieces required to build a complete, functional network.

Cassandra is essentially about "building the rocket around Cerberus," ensuring all necessary components work seamlessly together. It serves as a testbed to identify and solve challenges related to using Cerberus in a live, decentralized test network that will provide valuable data for the final product implementation of Cerberus for the Xi’an update to the Radix Network.

Components and Testing

While Cerberus is important, it is just one part of the system. The Cassandra network focuses on testing concepts and techniques for the necessary infrastructure and other technologies around Cerberus to create a fully functional, scalable blockchain network.

One key area being explored in Cassandra is the development of a new local consensus mechanism that ensures liveness and complements Cerberus' cross-shard safety. This is crucial for permissionless networks and helps maintain the network's resilience.

Cassandra is used to research and optimize various aspects of the network, such as reducing authentication complexity, minimizing cross-shard data transfer, lowering bandwidth and CPU requirements, and improving decentralization. The goal is to enable the network to run efficiently on regular hardware and low-bandwidth connections.

The Cassandra network recently achieved a significant milestone, performing a record-breaking 100,000 swaps per second and 80,000 transfers per second, totaling around 180,000 transactions per second. This performance was sustained over 100,000,000 transactions in total, demonstrating the network's impressive efficiency and scalability.

There is an ongoing effort to gather more compute resources for testing the Cassandra network. Participation can help achieve further performance improvements. Interested parties are encouraged to fill in the registration form and join the Cassie Telegram group.

Cassandra's research and findings will provide valuable input to the development of the Xi'an network update, ensuring that the Radix Network will be able to scale linearly while maintaining liveness, safety, and decentralization. The network's experiments provide solutions to potential problems, making the transition smoother for the production version.

The Biggest Challenge

Radix's approach to building its distributed ledger system demonstrates a clear understanding of the key challenges hindering widespread adoption in the blockchain industry. The Radix team has taken a fresh and straightforward approach to solving these issues, focusing on creating a solid foundation with the Radix Engine before tackling scalability.

Looking back at Ethereum, the main reason it had to pursue a scalability roadmap was the overwhelming success of its core offering—a decentralized execution environment that enabled full composability between applications. EVM unlocked a world of possibilities for developers, leading to the creation of numerous decentralized applications (dApps) and the explosive growth of the DeFi ecosystem.

However, this success also exposed Ethereum's limitations. As demand surged, the network became congested, leading to high gas fees and slow transaction times. Users found themselves paying hundreds of dollars for simple transactions, highlighting the urgent need for scalability solutions.

By focusing on creating a seamless and intuitive developer experience, Radix is laying the groundwork for attracting developers and fostering innovation.

The key question now is whether the Radix Engine will be able to capture the attention and traction of users and developers. If it succeeds in offering a compelling and valuable proposition, the demand for its capabilities will naturally grow, just as it did with Ethereum's smart contract platform.

Should the Radix Engine prove successful and gain significant adoption, the need for scalability will become apparent. This is where Radix's linear scalability system, designed to handle increasing demand by dynamically scaling the network's capacity without compromising security or composability, will come into play.

However, if the Radix Engine fails to gain traction and attract users, the need for such a scalability solution may not arise. The success of Radix's core offering will determine whether its scalability infrastructure will be put to the test.

Do you like our Thesis Approach to understanding crypto and blockchain tech?

Then Subscribe to our Newsletter and we'll deliver a Thesis every week straight to your inbox:

Guaranteed spam-free :)

DISCLAIMER: None of this is financial advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell assets or make financial decisions. Please be careful and do your own research.
