Ouroboros Leios Achieves Near-optimal Throughput

Ouroboros Leios is designed to achieve near-optimal throughput: it aims to process a volume of transactions that approaches the network’s maximum capacity within existing constraints. In other words, Cardano should be able to handle transaction loads close to the theoretical limit. This article explains how that is achieved, and also covers Ouroboros Peras, which enables rapid transaction settlement.

Understanding Network Capacity and Settlement Time

Every blockchain network has a limit to how many transactions it can process. This limit, known as capacity, is the system’s maximum ability to manage workload or perform tasks under perfect conditions. Capacity is influenced by factors such as the network’s bandwidth, the computational power available, and the design of the system itself.

Throughput, on the other hand, is the actual number of transactions the system processes in a certain amount of time. It’s a real-world measure of performance, showing how many transactions are confirmed and recorded on the blockchain, often expressed as transactions per second (TPS).

While capacity is what the system could achieve in theory, throughput is what it actually achieves in practice.

The theoretical maximum is the highest level of activity a system can handle under perfect conditions, a limit set by the system’s design and the resources available to it. In practice, congestion usually sets in when demand exceeds what the protocol can process, for example because blocks are produced too slowly or because transaction validation is inefficient.

Throughput is the actual number of transactions the system processes successfully. It’s important to balance this with the time it takes to finalize transactions to keep the blockchain running smoothly and reliably.

Throughput and settlement time are closely related. When throughput keeps pace with demand, transactions are confirmed promptly; when it falls behind, transactions wait longer to be finalized.

As more transactions are sent through the network, it can start to slow down, particularly when the number of transactions approaches or surpasses the network’s maximum capacity. This can lead to increased confirmation latency, which is the delay between when a transaction is submitted and when it’s recorded on the blockchain.

If too many transactions flood the system, it can’t process them all quickly enough, leading to a backlog. Transactions might end up waiting in a queue or need to be sent again. On the other hand, if the network isn’t being used to its full potential, it means there’s unused capacity that could be handling more activity.

The maximum throughput a blockchain network can achieve is affected by the need to transmit several types of data: individual transactions, blocks (which bundle many transactions), and in some cases votes. Transmitting blocks can mean the same transaction data crosses the network more than once, and some networks also carry a significant volume of vote messages.

Since the network’s capacity to transfer data is limited, its available bandwidth must be shared among everything it needs to send: transactions, blocks, and any additional data such as votes. Throughput must be managed so that all of these elements are transmitted efficiently and in a timely manner within the network’s data capacity limits.

Settlement time refers to how long it takes for a block (and included transactions) to be considered final and irreversible. The Nakamoto consensus, used by Bitcoin and similar systems, offers a form of probabilistic finality. This means that the permanence of a block’s data is only confirmed after several subsequent blocks have been added to the blockchain.

In practice, a block (let’s call it Block N) is generally accepted as settled once a certain number of additional blocks have been appended after it. For Bitcoin, Block N is often treated as settled once Block N+2 has been added to the chain, that is, after two confirmations, although many services wait for more; the required number of confirmations may be higher still for networks like Cardano.

The figure illustrates that Block N is considered settled after the addition of Block N+2. It’s important to note that blockchain forks can happen, representing a split in the chain. These forks are a normal part of the blockchain’s operation (mainly for the Nakamoto consensus) and are usually resolved over time as one branch becomes longer and is accepted as the correct version by the network.

Byzantine Fault Tolerant (BFT) consensus mechanisms typically involve a voting process to validate new blocks, which leads to a much shorter settlement time. A block may be deemed settled at the moment it is added to the blockchain, or soon thereafter, such as when the subsequent block is appended. Ethereum, for instance, finalizes blocks through validator voting, after which they are considered part of the permanent ledger.

The Nakamoto consensus prioritizes liveness—the ability to keep the system operational—sometimes at the expense of correctness, which may lead to temporary inconsistencies in the ledger. On the other hand, BFT consensus models emphasize correctness, ensuring that only valid transactions are confirmed, even if it results in brief pauses in the consensus process.

Different blockchain networks exhibit varying levels of throughput and settlement times. The objective for developers is to maximize throughput while minimizing settlement time, striking a balance that maintains blockchain’s essential characteristics like decentralization, permissionless access, resilience, and inclusiveness.

Within a network, two nodes are connected by a communication channel. The time it takes to send information from Node A to Node B is referred to as Delay Δ.

A distributed network consists of many interconnected nodes. Data, be it a transaction or a block, is usually transmitted from one node and then propagated across the entire network to reach all other nodes. The data passes through intermediate nodes along the way; each node-to-node transfer is called a hop. Nodes that are geographically closer, such as those within the same country, experience shorter delays than nodes located on different continents.

The Total Delay is calculated as the cumulative sum of individual delays encountered along the path from the originating node (data producer) to the most distant node (data consumer) in the network.
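
To make this concrete, here is a minimal sketch of the idea, with hypothetical node names and delay values: the Total Delay along one path is simply the sum of its per-hop delays, and the network-wide figure is the worst case over all paths from the producer.

```python
# Illustrative model: Total Delay = sum of per-hop delays along the path from
# the data producer to a consumer; the network-wide figure is the worst case
# over all such paths. All node names and values below are hypothetical.

paths_ms = {
    "producer -> nearby EU node": [12, 35],
    "producer -> US node": [12, 35, 140],
    "producer -> APAC node (most distant)": [12, 35, 140, 90],
}

for name, hop_delays in paths_ms.items():
    print(f"{name}: {sum(hop_delays)} ms")

total_delay_ms = max(sum(hops) for hops in paths_ms.values())
print(f"Total Delay (most distant consumer): {total_delay_ms} ms")
```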

In a distributed network, data is generated concurrently at many nodes. Users continuously submit transactions through different nodes, and those transactions propagate across the network in parallel.

Provided that all connections between nodes have enough bandwidth, every transaction and block will be delivered to all nodes within a timeframe that matches the Total Delay.

What Typically Prevents the Maximum Capacity from Being Used?

For a blockchain network to function effectively, it’s essential not only to disseminate data throughout the network but also to achieve consensus among nodes regarding that data. Consensus is the process by which a node quickly determines whether the majority of the network (more precisely, the majority of the underlying resource, such as hash rate or stake) concurs with the information being shared. It’s crucial to consider how network capacity is used to facilitate this consensus.

In typical blockchain networks, the role of block producer is rotated among nodes, often selected through a random process.

When a new block is submitted to the network, nodes must decide on a course of action. With Byzantine Fault Tolerant (BFT) consensus mechanisms, nodes cast explicit votes on new blocks, enabling swift consensus and, consequently, rapid transaction settlement. However, this voting process uses up network capacity, which is a trade-off for enhancing user experience.
The frequency at which blocks can be produced is constrained by the Total Delay (the sum of all transmission times between nodes).

It’s imperative that every node in the network has access to the new block to maintain a consistent global state. If nodes have differing views of the blockchain, it could result in frequent forks—multiple potential continuations of the blockchain—which is problematic for consensus models that prioritize ongoing operation (liveness) and entirely unacceptable for those that prioritize correctness (accuracy and fast agreement).

Total Delay is essentially the shortest possible time for setting the block interval, which is the rate at which blocks are produced. However, it’s important to consider a buffer to account for occasional network issues or attacks, such as bursts of intentionally delayed transactions and message replays. This safety margin for block intervals is a key factor that complicates increasing network throughput. Setting the block interval too close to the total delay could compromise the network’s stability and security.

The size of a block directly influences the total delay. Larger blocks, which can contain more transactions, take longer to propagate through the network. Moreover, as the number of nodes in the network grows, so does the number of hops data must take, thereby increasing the total delay. Reducing the number of nodes to improve throughput would conflict with the principles of decentralization and open participation.

Optimizing bandwidth and resources to boost network throughput is challenging. Bitcoin, for example, is known for its inefficiency, generating blocks of 1 to 4 MB (including SegWit data) every 10 minutes, leaving bandwidth and computational resources underutilized most of the time—this refers to transaction validation, not mining. This inefficiency is not unique to Bitcoin; it’s a common issue across blockchain networks.
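
A back-of-the-envelope calculation makes the limitation concrete. Every number below is an assumption chosen for illustration: the block interval must exceed the Total Delay by a safety margin, and that interval together with the block size caps transactions per second while leaving the link idle most of the time.

```python
# Illustrative arithmetic only; all numbers are assumptions, not protocol
# parameters of any real network.

total_delay_s = 5.0      # assumed worst-case time to propagate a block
safety_factor = 4.0      # margin against delays, bursts, and replays
block_interval_s = total_delay_s * safety_factor   # minimum safe block interval

block_size_bytes = 1_000_000   # assumed 1 MB block
avg_tx_size_bytes = 500        # assumed average transaction size

txs_per_block = block_size_bytes // avg_tx_size_bytes
tps = txs_per_block / block_interval_s
print(f"Block interval {block_interval_s:.0f} s -> ~{txs_per_block} txs/block, ~{tps:.0f} TPS")

# The link is only busy for roughly total_delay_s out of every block interval,
# so most of the bandwidth sits idle between blocks.
print(f"Rough link utilization for block diffusion: {total_delay_s / block_interval_s:.0%}")
```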

In BFT consensus mechanisms, network capacity is used for the voting process, which contributes to quicker settlement times. However, this doesn’t necessarily result in significantly higher throughput, as bandwidth and resources may still not be used to their fullest potential.

Ouroboros Leios

Ouroboros Leios aims to address the challenge of optimizing capacity in a blockchain network. It focuses on achieving near-optimal throughput while considering both data throughput and communication dependencies. By balancing computational resources, bandwidth, and consensus efficiency, Leios strives to substantially increase the system’s overall throughput while maintaining security properties.

Payload decoupling is a key innovation that maximizes the use of network bandwidth and computational resources, leading to optimal capacity utilization.

Ouroboros Leios, also called Input Endorsers, decouples the diffusion of transactions and computation from the ordering of transactions in the top layer blockchain.

Leios defines three block types: Ranking Blocks (RB), Endorsement Blocks (EB), and Input Blocks (IB). The transactions themselves are contained within the Input Blocks. These Input Blocks act as the transaction carriers. In contrast, Ranking Blocks only contain references to these transactions. The blockchain will consist solely of Ranking Blocks, meaning it will only include references to the transaction data.

While the Input Blocks carrying the actual transaction data can be quite large (for example, a 1MB block), the references to these blocks within the Ranking Blocks are much smaller, often just the size of a hash—a short string of characters. This means that Ranking Blocks can refer to a large number of Input Blocks, representing a vast quantity of transactions.

Please note that the image illustrates the simplified structure of this system, excluding the Endorsement Blocks for clarity.

The structure of the various block types in Ouroboros Leios is organized as follows: Input Blocks, which contain the actual transaction data, are referenced by Endorsement Blocks. In turn, Ranking Blocks reference the Endorsement Blocks. The specific role and significance of Endorsement Blocks within the structure will be detailed at a later stage.
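
The reference structure can be pictured with a few simple data types. This is only a conceptual sketch with invented field names, not the actual on-chain format: Ranking Blocks point to Endorsement Blocks, Endorsement Blocks point to Input Blocks, and only Input Blocks carry transaction data.

```python
from dataclasses import dataclass, field
from typing import List

# Conceptual sketch of the Leios block relationships (invented field names,
# not the real serialization format).

@dataclass
class InputBlock:
    ib_hash: str
    transactions: List[bytes]          # the actual transaction payload

@dataclass
class EndorsementBlock:
    eb_hash: str
    ib_references: List[str]           # hashes of Input Blocks
    certificate: bytes = b""           # compact proof of stake-based endorsement

@dataclass
class RankingBlock:
    rb_hash: str
    prev_rb_hash: str                  # the chain itself consists only of Ranking Blocks
    eb_references: List[str] = field(default_factory=list)   # hashes of certified EBs
```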

The Nakamoto consensus can be applied to the ordering of Ranking Blocks in a manner akin to its current use in Cardano. The ordering process for Ranking Blocks can maintain the same security principles as those currently in place.

Input Blocks can be assembled independently from the blockchain, implying that a substantial number of Input Blocks can be theoretically prepared, with the only limitations being bandwidth and computational power. It is feasible to fine-tune the minting rate of Input Blocks.

Consequently, the consensus process for Ranking Blocks does not constrain resource utilization, because Input Block production is decoupled from it.

Decoupling of processes is the central idea behind Ouroboros Leios.

Input Blocks can be minted much more frequently than Ranking Blocks. For instance, a new Input Block could be minted every second, while a new Ranking Block might be minted every 15 seconds. The benefit of this approach is that Input Blocks can be processed simultaneously by multiple nodes.
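
Using these purely illustrative rates, a quick calculation shows how much payload a single Ranking Block can indirectly reference:

```python
# Illustrative only: payload indirectly referenced per Ranking Block under the
# example rates from the text (1 IB per second, 1 RB per 15 seconds, 1 MB IBs).

ib_interval_s = 1
rb_interval_s = 15
ib_size_mb = 1.0

ibs_per_rb = rb_interval_s // ib_interval_s
print(f"~{ibs_per_rb} IBs per RB interval, ~{ibs_per_rb * ib_size_mb:.0f} MB of referenced payload")
```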

Nodes within the network can independently mint block candidates for the blockchain without needing to coordinate with each other.

Decentralized computation refers to the protocol’s capability to allow different nodes to perform computations. The outcomes of these computations can be shared reliably across the network, meaning there’s no need for nodes to repeat the same computations. This process will utilize a method known as stake-based endorsing.

This strategy arises from the realization that computations in traditional blockchain networks are carried out inefficiently. To illustrate, consider transactions, although the same principle applies to the more computationally intensive execution of smart contracts or scripts.

When a block producer node mints a new block and publishes it into the network, all nodes validate every transaction. Consequently, every full node ends up performing identical validations. As the number of full nodes grows, the cumulative computing resources used also increase proportionally.

Repeating these computations is a wasteful use of network resources.

Stake-based endorsing enables a select group of randomly chosen nodes to process information, verify it, and then endorse it by providing a signature. It’s feasible to gather all the validating nodes’ signatures and compile them into a concise certificate. This certificate can then be attached to the Input Block.
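
A minimal sketch of the idea follows, assuming a hypothetical committee and a simple stake threshold; real certificates rely on compact cryptographic aggregation rather than a plain list of signatures.

```python
# Simplified sketch of stake-based endorsing with hypothetical data: randomly
# chosen nodes validate an Input Block and sign it; if the signers' combined
# stake exceeds a threshold, the signatures are bundled into a certificate.

stake = {"nodeA": 0.25, "nodeB": 0.20, "nodeC": 0.15, "nodeD": 0.10}  # fractions of total stake
THRESHOLD = 0.60  # assumed endorsement threshold

def make_certificate(ib_hash, endorsements):
    """endorsements maps node id -> signature for nodes that validated the IB."""
    endorsing_stake = sum(stake[node] for node in endorsements)
    if endorsing_stake < THRESHOLD:
        return None  # not enough stake behind the block yet
    return {"ib": ib_hash, "stake": endorsing_stake, "signatures": endorsements}

cert = make_certificate("ib-42", {"nodeA": "sigA", "nodeB": "sigB", "nodeC": "sigC"})
print(cert)  # other nodes verify the certificate instead of re-validating every transaction
```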

Through stake-based endorsing, nodes must verify that the ledger only contains Input Blocks with ample proof that their content is accessible and properly organized. Every node should be capable of downloading the block body corresponding to each block header intended for the ledger. This process essentially acts as a consensus on the availability of data.

To ensure nodes receive information about new blocks promptly, block headers will be transmitted separately from the block bodies. The block header includes basic details about the block, whereas the block body is significantly larger as it encompasses the transactions.

A potential issue arises if a malicious entity generates valid block headers without releasing the associated block bodies. This could cause discrepancies and lead to splits in the blockchain, known as forks. Therefore, it’s crucial to ensure the existence of block bodies for the blocks mentioned.

Several other attack strategies are possible.

The protocol must also guard against burst attacks, in which an attacker accumulates valid protocol messages and suddenly floods the network with them, temporarily crippling it. An attacker could likewise disseminate contradictory information (equivocation) or resend identical messages (replay attacks).

These risks are reduced by a ‘newest first’ approach to message delivery. Each header carries a cryptographically verifiable timestamp. Nodes diffuse all headers but prioritize downloading the message bodies with the most recent timestamps.
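
A small sketch of this delivery rule, with hypothetical structures: headers are relayed eagerly, while bodies are downloaded starting from the most recent timestamp, so a flood of old, hoarded messages cannot crowd out fresh ones.

```python
import heapq

# Sketch of 'newest first' body download (hypothetical data): relay all headers,
# but fetch block bodies in order of the most recent verifiable timestamp.

pending_headers = [
    {"hash": "h1", "timestamp": 1_700_000_010},
    {"hash": "h2", "timestamp": 1_700_000_300},  # freshest
    {"hash": "h3", "timestamp": 1_699_999_900},  # stale, e.g. part of a hoarded burst
]

queue = [(-h["timestamp"], h["hash"]) for h in pending_headers]  # max-heap by timestamp
heapq.heapify(queue)

while queue:
    _, block_hash = heapq.heappop(queue)
    print(f"download body for {block_hash}")  # newest bodies first
```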

Let’s go back to stake-based endorsing.

It’s important to note that individual nodes won’t fully validate every transaction in the network. Instead, they’ll verify the proof that a sufficient amount of stake (associated with nodes) supports the validation. This enhances network efficiency, allowing more processing with the same resources.

When a group of nodes whose combined stake meets the required threshold endorses an Input Block, there is no need for other nodes to re-validate it. The Input Block, along with its certificate, is then ready to be referenced by a subsequent Ranking Block.

Please note that the following figure has been simplified for ease of understanding, with Endorsement Blocks excluded.

This architecture facilitates decentralized computation and parallel processing.

The blockchain system organizes Ranking Blocks without being encumbered by the size of Input Blocks or the computations involved, as it only handles references and certificates. These certificates serve as verification that the required amount of stake validated the blocks.

Parallel processing of transactions might cause discrepancies. However, the UTxO (Unspent Transaction Output) model simplifies conflict resolution, as UTxOs are independent objects. Combining various computations that have already been completed doesn’t necessitate another intricate computation. Identifying and resolving these discrepancies is a straightforward process.

As depicted in the illustration, the inputs that modify the global state (S) include two separate computations (represented by two Input Blocks) and the existing global state, which encompasses the active set of UTxOs. Validating and merging these computations yields a new global state (S’). The active set of UTxOs is updated accordingly: as transactions are appended to the blockchain, the UTxOs they consume as inputs are removed from the set and the new UTxOs they create are inserted.

In the illustration below, it is evident that TX 2 from computation 1 and TX 3 from computation 2 both attempt to use the same input UTxO. Such an action is prohibited, as it would result in the same UTxO being spent more than once.

Addressing this issue is straightforward. The algorithm opts to exclude TX 3 from computation 2 in the updated state S’.

There is no need to re-validate the remaining transactions; that is, there’s no requirement to recompute all other transactions due to this conflict. Since each transaction is autonomous and its sequence in the context of the global state is not important, omitting a transaction while merging computations has no bearing on the outcomes of other transaction computations.
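
A minimal sketch of this merge, assuming each transaction is just a named set of input and output UTxOs (the names and the first-come tie-breaking rule are illustrative): a transaction whose input has already been spent is dropped, and nothing else needs to be recomputed.

```python
# Illustrative merge of independently validated computations in the UTxO model:
# a transaction that tries to re-spend an already consumed input is dropped;
# all other transactions stand without re-validation.

def merge(utxo_set, computations):
    """computations: lists of txs; each tx = (name, input_utxos, output_utxos)."""
    state = set(utxo_set)
    for computation in computations:
        for name, inputs, outputs in computation:
            if not set(inputs) <= state:
                print(f"dropping {name}: input already spent or missing")
                continue  # conflict: skip this tx only
            state -= set(inputs)
            state |= set(outputs)
    return state

utxos = {"u1", "u2", "u3"}
comp1 = [("TX1", ["u1"], ["u4"]), ("TX2", ["u2"], ["u5"])]
comp2 = [("TX3", ["u2"], ["u6"]), ("TX4", ["u3"], ["u7"])]  # TX3 conflicts with TX2

print(merge(utxos, [comp1, comp2]))  # TX3 is excluded; TX1, TX2, and TX4 remain valid
```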

Ouroboros Leios takes maximum advantage of the UTxO model. With an account-based model, the merging of computations would be more demanding on computing power, since discarding a transaction essentially means revalidating all subsequent transactions in the given sequence.

Now let’s explain Endorsement Blocks.

If Input Blocks were generated rapidly, the cumulative delay would prevent them from reaching all nodes in time for validation and voting.

It’s impractical for nodes to cast votes on Input Blocks due to the frequency and volume of blocks. Thus, Endorsement Blocks are utilized, which can reference multiple Input Blocks. The stake-based endorsement process is applied to Endorsement Blocks rather than Input Blocks. Endorsement Blocks are also the medium through which votes on data availability are conducted.

Several of the previous images have been simplified. For example, they show that Input Blocks are being voted on.

Endorsing proceeds in two stages to ensure the production of at least one certified EB that covers all honestly generated IBs, and hence the rate of IBs referenced by the underlying ledger matches the proportion of honest parties that are active in the network. The process will be explained later.

Nodes will cast votes on an Endorsement Block only when the references within are verified as valid. This implies that both the block header and block body are accessible, and the transactions contained are valid.

The process of voting on Endorsement Blocks is largely unaffected by how frequently Input Blocks are minted or their size. This aspect is advantageous for enhancing overall network throughput.

To ensure no Input Blocks are overlooked, Endorsement Blocks will have the capability to reference not just recent Input Blocks but also previously certified Endorsement Blocks, which in turn reference older Input Blocks. This mechanism safeguards against potential attacks where a malicious entity might create Endorsement Blocks that exclusively include its own Input Blocks.

Pipelined Protocol Architecture

Leios will employ a pipeline architecture.

Pipelining refers to a process design that allows for the overlapping of various stages of transaction processing to increase throughput and efficiency.

The pipeline is the actual sequence of these stages through which data or transactions pass from one end to the other.

The pipeline architecture allows for different stages of transaction processing, such as validation, execution, and recording, to be handled in a more parallel and efficient manner. This design can lead to faster transaction processing times and improved scalability, as it reduces the bottlenecks associated with sequential processing.

Transactions must undergo multiple phases before being incorporated into the ledger via Ranking Block references. Initially, they are included in newly minted Input Blocks, which are then referenced by an Endorsement Block. This Endorsement Block gathers votes and, upon certification, becomes referable by Ranking Blocks. This explanation is an oversimplification.

Furthermore, concurrent block generation must continue uninterrupted, and no bandwidth should be wasted while protocol participants collect votes and endorsements.

A Leios pipeline instance comprises seven distinct stages, each with a duration defined by a set number of slots (measured in seconds).

Pipeline instances operate in parallel, with each new instance starting every L slots. Input Blocks and Endorsement Blocks are produced at a consistent rate. Each Input Block assumes the role of a proposer in the latest pipeline instance. Additionally, each Endorsement Block fulfills a linking function and an endorsing function across two separate pipeline instances.
Here’s an overview of the stages (a small scheduling sketch follows the list):

  • PROPOSE stage: Multiple IBs can be minted concurrently. The pipeline’s objective is to incorporate valid IBs into the ledger, meaning they should be included in upcoming RBs (Ranking Blocks).
  • DELIVER 1 stage: IBs are distributed while adhering to the total delay constraint.
  • LINK stage: Several EBs can be minted concurrently, each referencing IBs from the PROPOSE stage.
  • DELIVER 2 stage: This stage is dedicated to the distribution of newly minted EBs.
  • VOTE 1 stage: Nodes vote on EBs from the LINK stage. These EBs must reference IBs that are available and valid. An EB that receives a sufficient number of VOTE1 votes surpassing a specific threshold becomes VOTE1-certified.
  • ENDORSE stage: EBs reference one another to reach a consensus on the existence and correctness of IBs. This creates a chain of endorsements that verify the transaction sequence. The aim is to group multiple IBs, allowing them to be presented as a single entity. This process facilitates consensus on the blockchain’s state and ensures a uniform ledger view across all nodes.
  • VOTE 2 stage: Nodes vote on EBs from the ENDORSE stage. EBs should only reference VOTE1-certified EBs from the LINK stage. An EB that garners enough VOTE2 votes, exceeding a set threshold, attains VOTE2 certification.
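
The scheduling sketch below shows how overlapping instances might line up, with a new instance starting every L slots and each instance walking through the seven stages in order. Stage lengths and L are made-up parameters for illustration, not the protocol’s actual values.

```python
# Illustrative scheduling of overlapping Leios pipeline instances; L and the
# stage length are assumed values, not protocol parameters.

STAGES = ["PROPOSE", "DELIVER 1", "LINK", "DELIVER 2", "VOTE 1", "ENDORSE", "VOTE 2"]
L = 5             # slots between the start of consecutive pipeline instances (assumed)
STAGE_LEN = 5     # slots per stage (assumed)

def active_stages(slot, instances=4):
    """Which stage each of the first few pipeline instances is in at a given slot."""
    view = {}
    for i in range(instances):
        start = i * L
        if slot < start:
            continue
        stage_idx = (slot - start) // STAGE_LEN
        if stage_idx < len(STAGES):
            view[f"pipeline {i}"] = STAGES[stage_idx]
    return view

print(active_stages(slot=12))  # several instances run concurrently, each in a different stage
```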

Pipelines interleave to enhance the throughput and efficiency of the blockchain. This interleaving means that while one pipeline is initiated, others are already in progress at different stages. The purpose of this design is to allow continuous block production without waiting for one pipeline to complete before starting another. This parallelism ensures that the network can handle a high volume of transactions and maintain a steady flow of block production.

Blocks are referenced between pipelines to maintain the integrity and continuity of the blockchain. When a new block is minted, it references the previous block in the chain, ensuring that there is a verifiable link between them. This referencing is crucial for the security of the blockchain, as it prevents tampering and ensures that all nodes have a consistent view of the transaction history.

The referencing between pipelines allows for a more efficient consensus mechanism, as blocks can be endorsed and finalized more quickly. It also enables the network to better handle concurrent transactions and blocks, leading to improved scalability and performance.

Ouroboros Peras

Accelerating the settlement process requires that a majority of the stake in the network agrees on newly incorporated transactions, that is, on the updated global state. A voting mechanism must be in place to expedite the finalization of proposed modifications to the global state.

This expedited settlement is the role of Ouroboros Peras. It is founded on a stake-based voting principle, akin to the stake-based endorsing mechanism employed by Ouroboros Leios.
Nodes can vote on recently appended Ranking Blocks, thereby facilitating swift agreement on the new state.

In the event of a blockchain fork, the subsequent block-producing node will be able to determine in advance which chain is favored by the other nodes. The rule for selecting a chain will be adapted to incorporate the consideration of votes.
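
The following sketch illustrates one way such a vote-aware chain-selection rule could look, assuming (purely for illustration) that a chain’s weight is its length plus a fixed boost for every block on it that has gathered a vote certificate. With no certificates available, the rule degrades to ordinary longest-chain selection, which matches the Nakamoto-style fallback described below.

```python
# Sketch of vote-aware chain selection with an assumed weighting rule: a
# chain's weight is its length plus a boost for every certified block it
# contains. Without certificates this is plain longest-chain selection.

BOOST = 10  # assumed extra weight per vote-certified block

def chain_weight(chain, certified):
    """chain: list of block hashes from genesis to tip; certified: set of hashes."""
    return len(chain) + BOOST * sum(1 for block in chain if block in certified)

def select_chain(chains, certified):
    return max(chains, key=lambda chain: chain_weight(chain, certified))

fork_a = ["b1", "b2", "b3a", "b4a"]
fork_b = ["b1", "b2", "b3b"]          # shorter, but b3b carries a vote certificate
print(select_chain([fork_a, fork_b], certified={"b3b"}))  # fork_b wins thanks to the boost
```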

When participation levels are high, blocks will be finalized swiftly. Conversely, if participation drops for any reason—such as a sudden network outage affecting numerous nodes, possibly due to an attack—the protocol can revert to a Nakamoto-style block finalization approach.
The network will continue to operate, and as soon as node participation (stake) increases, it will automatically resume the rapid finalization of blocks.

Ouroboros with Combined Leios and Peras Overlays

Ouroboros Leios is engineered to reach near-optimal throughput levels, whereas Ouroboros Peras is focused on enabling rapid transaction finalization. Stake-based voting is a common feature in both protocols.

These overlay protocols operate independently, allowing for their separate implementation. It is anticipated that Cardano will integrate Ouroboros Peras before Ouroboros Leios.
Please note that the subsequent image has been simplified for clarity and does not depict Endorsement Blocks.

In the design, each type of block serves a distinct purpose. Input Blocks are crucial for managing throughput, with their minting rate being solely dependent on available bandwidth and computational resources. This allows for the network’s capacity to be used optimally, free from the constraints of network consensus, which prioritizes security and liveness (operational continuity).

Endorsement Blocks facilitate efficient voting for a group of Input Blocks produced within a specific timeframe, guaranteeing the availability of data.

Ranking Blocks reference the payload irrespective of block size or computational intensity. The voting process ensures the swift resolution of transactions.

The two bottom layers (using Input and Endorsement Blocks) are dedicated to achieving optimal throughput, while the uppermost layer (using Ranking Blocks) ensures rapid finalization, security, and liveness.

Ouroboros Leios aims to maximize network capacity utilization efficiently. It ensures that processes with predictable bandwidth and resource requirements are accommodated while optimizing data flow. This optimization is achieved by managing the minting rate of Input Blocks and Endorsement Blocks, along with the associated overhead from endorsing. The result is the theoretical utilization of all available capacity through simultaneous block generation driven by available bandwidth and CPU.

Now, let’s delve into how capacity is utilized under the Nakamoto consensus, without overlay protocols:

Assume a constant network load with a buffer reserved for sudden spikes in user activity. Because blocks are minted at a low rate, network resources remain underutilized and see only sporadic bursts of usage.

Interestingly, even during network congestion, the available capacity is not utilized. Bandwidth and CPU demands increase as more transactions require validation and dissemination. However, the Nakamoto consensus fails to utilize network resources to their maximum potential. The bottleneck lies in consensus mechanisms, not resource scarcity.

Ouroboros Leios aims for near-optimal throughput.

Ranking Blocks (RBs) will have adjustable size and minting rates. Resource demands, including stake-based voting, are expected to remain relatively constant. These processes occupy a smaller portion of available capacity.

Input Blocks (IBs), serving as data carriers, will be minted most frequently. They impose the highest CPU and bandwidth requirements due to transaction validation and script execution.

Endorsement Blocks (EBs) will reflect the settings of IBs in terms of minting rate, size, and certification requirements. Part of CPU and bandwidth resources will be allocated to stake-based endorsement and certificate handling.

While the resource demands of EBs are similar to IBs, the total number of EBs will be lower.

Transaction diffusion, RB generation, and stake-based voting will exhibit relatively constant behavior, isolated from parallel transaction processing. This predictability is crucial for capacity planning.

The remaining capacity can be dedicated to validating transactions and endorsing blocks.

By adjusting the minting rates of IBs and EBs, along with IB size, the network can utilize available capacity to the fullest.
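
A back-of-the-envelope budget, with every figure assumed for illustration, shows the tuning knob: subtract the roughly constant overhead of Ranking Blocks, votes, and certificates from the bandwidth budget, and the remainder bounds how fast Input Blocks can be minted.

```python
# Back-of-the-envelope capacity budget; all figures are assumptions.

bandwidth_budget_mb_s = 10.0   # assumed per-node bandwidth reserved for the protocol
constant_overhead_mb_s = 1.5   # assumed: RB diffusion, stake-based votes, EB certificates
ib_size_mb = 1.0               # assumed Input Block size

available_mb_s = bandwidth_budget_mb_s - constant_overhead_mb_s
max_ib_rate_per_s = available_mb_s / ib_size_mb
print(f"Remaining budget {available_mb_s:.1f} MB/s -> up to ~{max_ib_rate_per_s:.1f} IBs per second")
```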

The limitation lies not in high-level consensus (the blockchain itself) but in available resources. Adding resources (e.g., increasing the number of nodes) can boost throughput.

Ouroboros Leios vs. Sharding

Ouroboros Leios is focused on scaling blockchain protocols to their physical limits, which is distinct from the approach taken by sharding. Sharding divides the ledger state into parts and assigns them to different committees, potentially leading to suboptimal capacity usage within individual shards. However, when combined, all shards aim to approach the maximum possible utilization. Ouroboros Leios, on the other hand, seeks to optimize throughput within a single, unified ledger without partitioning, thereby maintaining a fully replicated blockchain where each full node stores the entire ledger state. This approach is designed to ensure robust performance and enhance blockchain scalability and reliability.

It is theoretically possible to implement sharding in conjunction with the Ouroboros Leios protocol. Each shard could operate an instance of Leios, allowing the system to scale horizontally by distributing the load across multiple shards. This approach could potentially increase the overall capacity and throughput of the network.

However, implementing sharding would require more resources and bandwidth. Each shard would need its own set of nodes to maintain the ledger state, and there would be additional overhead for communication between shards to ensure consistency and integrity of the entire system. The increased complexity could also necessitate more sophisticated coordination and consensus mechanisms among shards.

While sharding can enhance scalability and performance, it also introduces additional demands on resources and bandwidth that must be carefully managed to maintain the efficiency and security of the blockchain network.
