
This Cycle, We Are Entering the Era of 3rd Gen Smart Contract Platforms

The Sui Thesis

The Numbers Don’t Lie

Welcome back to Just the Metrics, your trusted source for thesis-driven insights on the crypto world. 

If you’ve been with me for a while, you know the drill: we look at performance, adoption, and the hard stats that truly matter.

Over the past few months, all signs have been pointing in the same direction: Sui.

The numbers clearly show that Sui’s underlying tech outperforms nearly everything else in the space right now.

And it’s compelling enough to make me pivot the entire focus of this newsletter.

Don’t worry; the mission stays the same: uncover the best platforms and protocols backed by real evidence. But for the foreseeable future, we’re laser-focused on Sui because the metrics indicate it’s the best tech out there.

So let’s get started 👇

This Cycle, We Are Entering the Era of 3rd Gen Smart Contract Platforms

The era of blockchains capable of scaling to the internet level begins with granular state representation, enabling precise access control and forming the foundation for these three key qualities👇

  1. Ability to scale computational resources based on demand.

  2. Local fee markets for efficient allocation of computational resources.

  3. Atomic composability, which maximizes the utility of a smart contract platform.

And now we are seeing the emergence of 3rd gen smart contract platforms that fulfill all these requirements and could very well be the launchpad for the next generation of web3 applications.

Before you get triggered, read the full article👇

Since the inception of Bitcoin, enough time has passed for this industry to figure out the architecture of a system that can accommodate global demand, allocate resources efficiently, and at the same time let applications interact in an atomically composable manner.

And I think this industry is finally starting to see design choices that lead to systems that can scale with whatever demand is thrown at them, while preserving an execution environment with atomic composability.

So, What Defines a System That Can Scale With Demand?

A dynamically scalable system, or a system that scales with demand, is one that can adjust its computational resources in response to changing workloads, adding and releasing resources as needed.

This is how web2 systems scale.
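
To make that concrete, here is a minimal sketch (in Python, with made-up thresholds) of the kind of reactive autoscaling policy a web2 service uses: capacity follows the observed load in both directions instead of being fixed up front.

```python
import math

# Illustrative autoscaler: thresholds and names are invented for this sketch.
def target_workers(requests_per_sec: float,
                   capacity_per_worker: float = 1_000.0,
                   headroom: float = 0.2) -> int:
    """How many workers we need for the observed load, with a little spare headroom."""
    return max(1, math.ceil(requests_per_sec * (1 + headroom) / capacity_per_worker))

# Capacity follows demand in both directions: spikes add workers, lulls release them.
for load in (500, 5_000, 50_000, 2_000):
    print(f"{load:>6} req/s -> {target_workers(load)} workers")
```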

But when we look at blockchain-based systems, there has always been this notion of a maximum possible throughput.

For Bitcoin, it’s around 10 TPS (transactions per second); for Ethereum L1, around 20 TPS; for Solana, around 800 TPS.

This metric may shift depending on the computational intensity of the transactions being processed, but fundamentally all these systems have an upper limit on throughput.

And that upper limit was never enough to accommodate the needs of any serious applications. 

Serious in the sense that nobody in the web2 world considers an application serious if it doesn't have at least a couple of million DAUs.

And until now, most blockchains would struggle if even a few thousand people tried to interact with them at the same time.

This fundamental inability to scale these systems is what gave rise to the TPS wars. 

None of them could really scale to accommodate serious demand, yet people still talk about how great their favorite blockchains are based on these cooked-up metrics.

So Why Can't These Blockchains Scale to Accommodate Demand the Way Web2 Systems Scale?

The core limitation of most existing blockchains is that they are built on fundamental design choices that inherently restrict their ability to scale with demand.

Most blockchains have a fixed architecture that doesn't allow for easy scaling of individual components without affecting the entire system.

This means that, unlike web2 systems, they can't automatically adjust their computational resources based on workload demands.

This results in either over-provisioning, which wastes resources during low demand, or under-provisioning, which leads to poor performance and high fees during high demand.

Many blockchain ecosystems also rely on broken so-called “scalability” solutions like rollups, state channels, or subnets/sidechains, which create a fragmented state space. 

This destroys the most important and most valuable feature of a blockchain’s execution environment, aka the ability to support synchronous atomic commitment.

What does it mean?

It means that these scalability solutions compromise a blockchain’s ability to ensure that all parts of a transaction are executed together as a single, indivisible unit.

Synchronous atomic commitment guarantees that all parts of a transaction are either fully executed and permanently recorded on the ledger, or none of them are. 

By fragmenting the state space, these solutions make it challenging to maintain this all-or-nothing property across different layers or channels.

And why does it matter?

It matters because atomic commitment is crucial for ensuring reliable, deterministic, and tamper-proof execution of complex, multi-component transactions in distributed ledger systems. 

Without atomic commitment, transactions spanning multiple components can result in inconsistent states, hanging transactions, and unexpected outcomes.

Now what prevents these blockchains from scaling dynamically with demand?

The inability to build dynamically scalable blockchain systems stems from the flaws in the core design choices, especially the use of data models without proper object encapsulation.

Meaning, the data is not properly organized into distinct, self-contained units with clear ownership and access rules.

In traditional programming, data is typically encapsulated into objects, where each object has its own internal state and methods to interact with that state. This allows for better organization, modularity, and parallel processing of data.
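
A tiny, hypothetical example of what that encapsulation looks like in code: the coin's balance lives inside the object, and the only way to change it is through the object's own methods.

```python
# Hypothetical "coin" object: its balance is internal state, and the only way
# to change that state is through the object's own methods.
class Coin:
    def __init__(self, owner: str, balance: int):
        self.owner = owner
        self._balance = balance                 # internal state, never touched directly

    def split(self, amount: int) -> "Coin":
        """Carve a new coin out of this one; fails if the balance is insufficient."""
        if amount > self._balance:
            raise ValueError("insufficient balance")
        self._balance -= amount
        return Coin(self.owner, amount)

alice_coin = Coin("alice", 100)
payment = alice_coin.split(30)   # two self-contained objects, each with a clear owner
```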

Because most blockchains lack this property in their core architecture, it becomes difficult to separate and parallelize transactions that don't actually conflict with each other.

This also limits the ability of these blockchains to treat assets as first-class objects with clear ownership and mutable/immutable properties, making it harder to separate and parallelize operations on independent data objects/assets.

Because of this inability to separate and parallelize transactions, most of the existing blockchains require global consensus on the ordering of all transactions. This creates a massive bottleneck as the system tries to order transactions that don't need to be ordered relative to each other.

Along with that, when load is too high, fees become disconnected from the actual cost of execution. Users have to pay premium prices in an auction-like system, which drives users away and creates a poor experience.

Now What Are the Core Requirements of a Blockchain to Scale with Increasing Demand?

The foundation of a blockchain that can scale with demand lies in its ability to adapt and grow seamlessly. 

This adaptability begins with a design that supports high levels of parallelism.

Meaning, the system should be able to easily determine which transactions can be executed concurrently without conflicts.

But it doesn't stop there. 

Modular components that can scale independently are crucial, allowing targeted resource allocation as needs change. The system should be responsive, adjusting resources in real-time based on demand. 

This dynamic resource allocation reinforces the system's ability to handle evolving requirements.

To take scalability to the next level, the ledger must support state partitioning strategies like state sharding. 

By breaking data into smaller, manageable pieces, the burden on individual nodes is reduced. This facilitates horizontal scaling, allowing more nodes or clusters to be added as needed without compromising the ability to ensure atomic commits in the execution environment.

We're witnessing the rise of blockchains designed with these qualities in mind.

The common thread? 

They're all inspired, in some way, by Object-Centric Data Models.

Understanding Object-Centric Data Models

The object-centric data model stems from Object-Oriented Programming (OOP), a well-established concept in web2 programming.

The primary focus of OOP is to model systems as interactions between objects. Complex systems can be built by creating objects that represent real-world entities or abstract concepts and specifying how these objects interact with each other.

At its core, the object-centric approach focuses on structuring systems as collections of interacting objects, each possessing its own set of properties and behaviors.

Here, each object is a self-contained unit that holds specific data and defines how that data can be manipulated. It involves structuring data and related operations around distinct objects, each with a unique identifier. 

This approach allows for fine-grained control over data access and modification, as each object encapsulates its own state and behavior.

By organizing data into discrete, self-contained objects, the object-centric model provides a clear and intuitive way to represent and manipulate complex systems. 

It enables developers to reason about the system in terms of objects and their interactions, rather than dealing with a monolithic, tightly-coupled data structure.

And this is incredibly useful in systems like blockchains.

Let’s find out why.

Applying Object-Centric Models to Blockchain

Applying the object-centric approach to blockchain systems transforms the global state into a collection of distinct objects, each with a unique identifier. 

Unlike traditional models that maintain a single, monolithic global state, this method allows for a more granular representation, enabling fine-grained control over state access and modification.

Sui’s object-centric data model is a perfect example of this.

In Sui, each object encapsulates its own state and behavior, similar to objects in object-oriented programming. 

These objects can represent various entities such as user accounts, smart contracts, or digital assets. By assigning unique identifiers to each object, the system can efficiently reference and manipulate specific parts of the state.

(For you UTxO fanatics: think of this as the UTxO model you know, but with better abstraction.)

Now why is this so important?

This granular state representation allows for more precise access control, as permissions can be defined at the object level. This means that when a transaction needs to update the state, it can target specific objects rather than affecting the entire global state.

Sui’s Object-Centric Data Model

There are two main types of objects in Sui:

  • Move Packages: Contain smart contract bytecode modules.

  • Move Objects: Instances of data structures defined in those modules.

Objects can have different types of ownership:

  • Address-owned: Owned by a specific address and only accessible by that owner.

  • Shared: Accessible to everyone on the network.

  • Immutable: Cannot be changed or deleted and is accessible to everyone.

  • Wrapped: Contained within another object.

In Sui, smart contracts / Move packages manipulate these objects directly. Each object in Sui has a unique 32-byte ID, a version number, a transaction digest showing its last modification, and an owner field indicating how it can be accessed.
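
To picture that metadata, here is a rough sketch; field names are my own simplification, not Sui's actual type definitions.

```python
from dataclasses import dataclass

# Simplified illustration of the per-object metadata described above;
# field names are approximations, not Sui's actual definitions.
@dataclass
class SuiObjectMeta:
    object_id: bytes   # unique 32-byte identifier
    version: int       # bumped every time the object is mutated
    prev_tx: bytes     # digest of the transaction that last modified it
    owner: str         # an address, "shared", "immutable", or the ID of a wrapping object
```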

When a transaction is submitted, it includes a list of the objects it will interact with, specifying whether it will read from or write to each object.

This information makes it easy to separate transactions whose object dependencies overlap from those that are fully independent, which provides the right foundation for parallel transaction execution.
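
Here is a minimal sketch of how declared object access can let a scheduler split a batch into conflict-free groups. The scheduling logic is my own illustration, not Sui's actual implementation.

```python
# Illustrative scheduler: group transactions by their declared object access so
# that non-conflicting ones can run in parallel. Not Sui's actual implementation.
def conflicts(tx_a: dict, tx_b: dict) -> bool:
    """Two transactions conflict if one writes an object the other reads or writes."""
    return bool(tx_a["writes"] & (tx_b["reads"] | tx_b["writes"])
                or tx_b["writes"] & (tx_a["reads"] | tx_a["writes"]))

txs = [
    {"id": "t1", "reads": {"objA"}, "writes": {"objB"}},
    {"id": "t2", "reads": {"objC"}, "writes": {"objD"}},  # no overlap with t1 -> parallel
    {"id": "t3", "reads": set(),    "writes": {"objB"}},  # touches objB -> must follow t1
]

groups = []                      # each group can be executed in parallel
for tx in txs:
    for group in groups:
        if not any(conflicts(tx, other) for other in group):
            group.append(tx)
            break
    else:
        groups.append([tx])

print([[t["id"] for t in g] for g in groups])   # [['t1', 't2'], ['t3']]
```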


The relationship between objects and transactions in Sui forms a directed acyclic graph (DAG).

Transactions within a DAG have explicit relationships with other transactions, creating a web-like structure. Note that eventually each transaction is observed by all nodes; for simplicity, the diagram shows each transaction being observed by a single node.

Transactions take objects as inputs, modify them, and produce new or updated objects as outputs. 

It distinguishes between owned objects, which can only be modified by their owner, and shared objects, which can be accessed by multiple parties. 


Sui then uses this DAG for transaction propagation and consensus and, in a separate process, orders transactions into checkpoints, which are similar to blocks. 

Checkpoints are linked together and ordered in a linear fashion, but unlike typical blockchains, the transactions grouped into these checkpoints are already finalized.

Source: Kloshy

This enables Sui to track the history and dependencies between objects and transactions efficiently.

The most common example of this is the fast path Sui has introduced for transactions on owned objects, which can skip the whole consensus process.

On the one hand, if two transactions access different objects, they can be processed simultaneously without the risk of conflicts or inconsistencies. 

On the other hand, if two transactions attempt to modify the same object, they must be serialized to maintain data integrity.

So, this is how the object-oriented model becomes the core enabler of Sui’s parallel execution capabilities.

Now, How Does This Model Become the Foundation of a Blockchain That Scales with Demand?

First and foremost, because the object-oriented model allows for granular state representation, it serves as a natural foundation for state sharding.

Let me explain:

State sharding refers to the division of a blockchain's global state into smaller, manageable parts that can be processed independently by individual validator nodes or clusters of validator nodes.

In the object-centric model, each piece of data is represented as a distinct object with a unique identifier, which enables easy partitioning of the global state across multiple machines.

The mental model you can use here is to think of each object as its own shard.

Since each object represents a subset of the global state, transactions involving these objects can be assigned to specific validators for a specific epoch based on their identifiers.  And because these objects are self-contained units of data, operations on one object do not necessarily affect others.

This creates a natural and deterministic way to divide the global state.
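
As a toy illustration, assigning objects to shards can be as simple as hashing their IDs; every node can recompute the mapping locally, with no coordination. Hash-modulo assignment is a simplification for this sketch, not Sui's actual scheme.

```python
import hashlib

# Toy deterministic partitioning: every node can recompute the same
# object-to-shard mapping locally. Hash-modulo is a simplification for this
# sketch, not Sui's actual assignment scheme.
def shard_for(object_id: str, num_shards: int) -> int:
    digest = hashlib.sha256(object_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

for oid in ["0xaaa1", "0xbbb2", "0xccc3", "0xddd4"]:
    print(oid, "-> shard", shard_for(oid, num_shards=4))
```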

Now overlay this architecture with an execution engine that lets validator nodes scale their execution capabilities with demand, and you open the door to blockchains that scale with demand.

In the case of Sui, the Pilotfish protocol is the first iteration of such an execution engine.

Separation of Transaction Sequencing and Execution

Basic architecture of the Pilotfish protocol. Source: https://arxiv.org/pdf/2401.16292

The Pilotfish system consists of three main components within each validator:

  1. the Primary, 

  2. SequencingWorkers (SWs), and 

  3. ExecutionWorkers (EWs).

The Primary handles consensus and is always a single machine.

SequencingWorkers store transactions and dispatch them for execution, while ExecutionWorkers store the blockchain state and execute transactions received from the SWs. 

A typical Pilotfish deployment might use one Primary, along with multiple SWs and EWs.

So, how exactly does this system supercharge throughput?

Pilotfish achieves scalability through internal state sharding within each validator. 

This approach differs from inter-validator sharding, as all validators in Pilotfish are responsible for the entire state, with sharding occurring inside each validator across multiple machines.

In Pilotfish, each transaction is assigned to a single SW, and each blockchain object is assigned to a single EW.

And with that validators can add more EWs as needed, increasing processing capacity without major system changes.

Now let’s see how this system works.

Distributed Execution Across Multiple ExecutionWorkers

The execution process begins when a SW receives a committed sequence of transactions from the consensus layer. The SW analyzes each transaction to determine which objects it will interact with and informs the relevant ExecutionWorkers.

EWs that need to read objects wait until the transaction is ready for processing, based on the total ordering provided by the consensus layer.

For cross-shard transactions that access objects from multiple EWs, Pilotfish implements a coordination mechanism.

(In this context, a cross-shard transaction is a transaction that accesses objects or state managed by multiple ExecutionWorkers within a validator's internal sharding system.)


This is how it works:

The EW designated as the dedicated executor retrieves necessary data from other EWs, executes the transaction, and stores the results locally. It then informs other involved EWs of the results, allowing them to update their local stores.
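
Here is a very rough sketch of that flow: one ExecutionWorker acts as the dedicated executor, pulls the inputs it is missing from its peers, executes, and pushes the updated objects back. This mimics the described behavior only; it is not the Pilotfish wire protocol.

```python
# Rough sketch of the cross-shard flow: the dedicated executor pulls missing
# inputs from peer ExecutionWorkers, executes, then pushes results back.
# Illustration only; not the actual Pilotfish protocol.
class ExecutionWorker:
    def __init__(self, name: str, objects: dict):
        self.name = name
        self.objects = objects                      # the slice of state this worker holds

    def execute_cross_shard(self, tx: dict, peers: list) -> None:
        # 1. Assemble inputs: start with local objects, fetch the rest from peers.
        inputs = dict(self.objects)
        for peer in peers:
            for oid in tx["inputs"]:
                if oid not in inputs and oid in peer.objects:
                    inputs[oid] = peer.objects[oid]
        # 2. Execute the transaction on the assembled inputs.
        results = tx["run"](inputs)
        # 3. Store results for objects held locally, and inform the involved peers.
        for oid, value in results.items():
            if oid in self.objects:
                self.objects[oid] = value
            for peer in peers:
                if oid in peer.objects:
                    peer.objects[oid] = value

ew1 = ExecutionWorker("EW1", {"coin_a": 100})
ew2 = ExecutionWorker("EW2", {"coin_b": 5})
transfer = {"inputs": ["coin_a", "coin_b"],
            "run": lambda objs: {"coin_a": objs["coin_a"] - 10,
                                 "coin_b": objs["coin_b"] + 10}}
ew1.execute_cross_shard(transfer, peers=[ew2])      # EW1 is the dedicated executor
print(ew1.objects, ew2.objects)                     # {'coin_a': 90} {'coin_b': 15}
```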

A System That Scales With Demand

This architecture allows for dynamic scalability, as validators can add more ExecutionWorkers as the load increases. 

The system has already demonstrated impressive performance.

For simple transfers, the system maintains a sub-20 millisecond latency across all EW configurations, with an eight EW setup processing five to six times more transfers than a single EW at 6-7 millisecond latency. 

However, the scalability is not perfectly linear for simple transfers due to the computationally light workload. 

For compute-intensive workloads, Pilotfish achieves near-linear scalability, showing up to eight times throughput increase with eight EWs.

As of now, this is a proof of concept. 

The upcoming upgrades to this protocol will transform this proof-of-concept into a more robust system, becoming the basis of Sui’s ability to scale with increasing demand.

This includes adding support for multiple SequencingWorkers, implementing shard replication and crash-recovery, and integrating ultra-fast remote direct memory access networking. 

These upgrades will strengthen the system's fault tolerance, load balancing, and communication speed, further boosting its scalability and performance.

So, you can now handle more traffic as demand grows, but how will you ensure the system uses resources efficiently?

How will you prevent popular objects from taking up all the network's capacity?

This is where local fee markets come in.

Better Resource Allocation and Congestion Control Using Local Fee Markets

From a first principles perspective, a global fee market simply doesn't work for internet-scale systems.

Here's why: 

It treats all resources as equally scarce, which is just not true in a distributed system.

When demand spikes for one resource, a global market jacks up prices across the board.

That's inefficient and unfair.

Let's consider a real-world example: 

Imagine if your local grocery store used a global pricing system for all its products. If there's suddenly high demand for avocados, the price of everything in the store - from milk to toilet paper - would skyrocket. 

This would make no sense, as the availability of these other items hasn't changed.

Similarly, in a blockchain with a global fee market, a sudden increase in demand for one type of transaction (like trading a popular meme token or minting a popular NFT) would make all other operations more expensive, even if there's plenty of capacity for those other tasks. 

Different operations have different costs, but a global market lumps them all together. This leads to bottlenecks and congestion as popular resources get overpriced while others sit idle. 

This inflexibility makes it impossible to scale smoothly as usage grows, unlike systems that can adjust prices and resources more granularly based on actual demand for specific services or resources.

And this is why local fee markets are so good at preventing hotspots in one part of the system from slowing down execution or raising fees elsewhere.

And as I have repeated multiple times in this article, the core reason the object-oriented model is superior to anything else out there is that it allows for granular state representation, and that in turn makes local fee markets possible.

By treating each object as a distinct entity with its own utilization meter, the system can track and manage resource usage at a granular level.

And if I quote @blackdog here:

'Sui's local fee market design leverages the fine-grained static information available in Sui's object-centric data model to prevent congestion upfront rather than learning about it the hard way during execution.'

Which, in turn, prevents hotspots in one part of the system from slowing down execution or raising fees elsewhere.
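
Conceptually, you can think of every object carrying its own little utilization meter that prices the next access. The numbers below are invented purely for illustration.

```python
# Toy per-object fee meter: each object tracks its own recent utilization and
# prices its next access accordingly. All numbers are invented for illustration.
BASE_FEE = 1.0

class ObjectMeter:
    def __init__(self, capacity_per_window: int = 100):
        self.capacity = capacity_per_window
        self.recent_accesses = 0

    def record_access(self) -> None:
        self.recent_accesses += 1

    def quote_fee(self) -> float:
        utilization = self.recent_accesses / self.capacity
        surcharge = max(0.0, utilization - 0.5) * 10    # ramp up once the object runs hot
        return BASE_FEE * (1 + surcharge)

hot, cold = ObjectMeter(), ObjectMeter()
for _ in range(90):
    hot.record_access()              # everyone is hammering the same meme-coin object
print(f"hot object fee:  {hot.quote_fee():.2f}")   # 5.00 -> pays a congestion premium
print(f"cold object fee: {cold.quote_fee():.2f}")  # 1.00 -> unaffected by the hotspot
```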

 Now, with the object-oriented model, you have a system where computational resources can be added based on demand. It also enables precise, targeted management of network capacity, ensuring a more equitable experience for all users. 

This, in turn, results in an overall improved user experience. 

But that's not all.

It ultimately boils down to a smart contract platform's ability to maximize the utility and value generation of its applications, something 99% of L1s and L2s struggle with.

And this is where atomic composability comes in.

The Power of Atomic Composability

Interesting meme, right? 

The irony in Vitalik’s statement about ‘Atomic Composability’ becomes clear once you understand this. 

Ethereum’s first-mover advantage as a smart contract platform, and the growth of its massive DeFi ecosystem, came from the Ethereum Virtual Machine (EVM). 

What made the EVM special is that it provided an execution environment that enables high-level expressiveness through languages like Solidity and supports atomic composability.

Here, expressiveness refers to the flexibility and capability of a platform's programming environment, enabling developers to create sophisticated and versatile smart contracts that support complex operations and use cases.

And atomic composability means the ability of multiple smart contracts or transactions to interact with each other in a single, indivisible operation, ensuring that either all of the operations succeed or none of them do.

In simpler terms, it's a guarantee that when you execute multiple actions together, they either all happen successfully, or if one fails, everything is rolled back as if nothing happened. 

This high level of expressiveness and composability that EVM allowed made it a potent force that practically invented DeFi and made Ethereum a huge ecosystem of DeFi applications.

But there is a twist here: 

even though this high level of expressiveness and composability has been instrumental to the success of Ethereum, it also caused a lot of problems.

You see, with Ethereum, this industry went from a system with zero or minimal expressiveness (Bitcoin) straight to a system with maximum expressiveness.

The expressiveness allowed by the EVM introduced a lot of security risks, and the composability of contracts allowed vulnerabilities to propagate across interconnected systems.

This is what caused, and still causes, the incredibly high number of hacks in the EVM ecosystem. Billions of dollars are lost every year to hacks.

(Chainalysis)

And the irony of Vitalik’s statement lies in the fact that Ethereum is now pushing a L2 rollup-centric approach that creates a fragmented state space. 

Rollups are ultimately mini blockchains with their own state space, and that restricts the amount of state any program in the Ethereum ecosystem can access atomically. Saying that atomic composability is valuable would be contradictory to the scalability path Ethereum is taking now.

So I see this as just Vitalik doubling down on the bad scaling choice they made.

On the other hand, on Solana, you don’t see many DeFi applications that leverage composability because it’s hard to implement such a system on SVM. Devs chew glass to code smart contracts on Solana. It’s not a dev-friendly platform.

This is what ultimately kept DeFi confined to a space with very high development costs (audits and developer costs), with a user base restricted to the small group of people with the risk tolerance to interact with such a system.

To change this trend, what you need is a system that allows maximum expressiveness and composability, while at the same time provides security guardrails to define the behavior of an asset at the granular level and is also dev-friendly.

And this is exactly what Sui’s object-oriented model provides. 

The object model enables granular state representation and allows for more precise access control, as permissions can be defined at the object level.

This way, each object in Sui can have specific ownership and access rules, which act as security guardrails at the object level. At the same time, the Move programming language offers powerful tools for expressing complex logic and creating composable assets.

Features like Programmable Transaction Blocks (PTBs) in Sui further boost this capability by allowing developers to combine up to 1,024 individual transactions into a single atomic operation.
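
The key property is all-or-nothing execution of the whole block of commands. Here is a conceptual sketch of that guarantee; it mimics the semantics only, not Sui's PTB implementation or SDK.

```python
# Conceptual sketch of all-or-nothing execution: a block of commands runs
# against a working copy of state and is only committed if every command
# succeeds. Not Sui's actual PTB implementation or SDK.
import copy

def execute_atomic(state: dict, commands) -> dict:
    working = copy.deepcopy(state)          # stage all changes off to the side
    for command in commands:
        command(working)                    # any exception aborts the whole block
    return working                          # commit only if every command succeeded

state = {"alice": 100, "bob": 0}

def pay(sender, receiver, amount):
    def run(s):
        if s[sender] < amount:
            raise RuntimeError("insufficient funds")
        s[sender] -= amount
        s[receiver] += amount
    return run

try:
    state = execute_atomic(state, [pay("alice", "bob", 60), pay("alice", "bob", 60)])
except RuntimeError:
    pass                                    # second command failed -> nothing committed
print(state)                                # {'alice': 100, 'bob': 0}
```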

Combine this with the ability to scale with demand and allocate computational resources efficiently, and you get a system that lets developers fully leverage the power of atomic composability on unlimited shared state without high latencies, high fees, security compromises, or scaling limitations.

This creates a powerful mix of features that could drive the next wave of innovation in Web3. 

That’s why, if you zoom out and view it from the evolutionary perspective of smart contract platforms, you can clearly differentiate these platforms into generations.

Gen 1: Ethereum

Ethereum is a gen 1 smart contract platform that features an execution environment with a high level of expressiveness and atomic composability, but without proper security guardrails. This is a system that also has a global fee market and no ability to scale its computational resources with demand.

Gen 2: Solana

Solana is a gen 2 smart contract platform that improved upon the existing model with features like parallel processing capabilities and local fee markets.

But even with the Firedancer client that is set to boost Solana's throughput, it's still a system that can't dynamically adjust its computational resources based on demand.

Just as you saw more than 70% of Solana transactions failing at the height of the meme coin frenzy, you will see this system struggle again the next time a similar event pushes demand beyond Solana's static scaling capacity.

Gen 3: Sui

Gen 3 is where we enter the age of blockchain systems that are designed to support apps that can scale to internet levels.

Sui has all the qualities that define a Gen 3 smart contract platform.

  1. Ability to scale computational resources based on demand.

  2. Local fee markets for efficient allocation of computational resources.

  3. Atomic composability, which maximizes the utility of a smart contract platform.

Not only that, this is a platform that allows maximum expressiveness and composability while providing security guardrails to define the behavior of an asset at a granular level, and it is also dev-friendly.

And if you look at it from an evolutionary perspective, this is how it looks👇

I believe this is not the end, but it's just the start. 

There is more to come, building upon the object-oriented model that Sui has now pioneered. Maybe that will be a model that also decentralizes its validator set as the system scales with demand.

Many people, including myself, have said things like "zk is the endgame" or "FHE is the endgame." These platform-agnostic technologies might or might not be the ultimate future of Web3, but I believe the true endgame begins with a blockchain that enables granular state representation and more precise access control of its state. Such a blockchain, in turn, allows for fine-grained control over the behavior of digital assets and smart contracts, as well as the interactions between all these components.

As everything goes digital, fine-grained control of the behavior of digital things becomes key, and the object-oriented model provides a viable way to implement that. And technologies like zero knowledge proofs (ZKPs) or Fully Homomorphic Encryption (FHE) are going to be leveraged by those platforms as a means to an end, and not the other way around.

This is why I believe Sui, as an infrastructure layer, will be a key indicator of the applications that will shape the next phase of Web3.

Btw, this is not a paid post or financial advice. It’s just my opinion based on my observations of this space over the past 6 years. If you find it helpful, feel free to share it, and if you are triggered by it, you’re welcome to do the same.







