Split prioritisation streams inside block creation - Categories of Activity, not Tiers of price alone


I’ve been thinking about what makes sense in the wake of the recent MELD and Charles discussion on the topic of a possible fee market or other mechanism to handle congestion on the network during busy times and rapid price movements, and the need for certain fast-acting mechanisms, such as liquidation events in DeFi, to go through quickly, where a delay of even a few minutes could put a MELD or other DeFi smart contract into the negative beyond its liquidation buffer.

As Charles noted, this has been discussed since the beginning of Cardano, and of blockchain in general, and many times since, including in a 2021 blog post from IOG.

I liked what Charles was saying and also wanted to put forth my view on the matter.

I think we need to uphold 4 values, which are outlined in my proposal below.

  1. Fairness, 2. Efficiency, 3. Rapid options, and 4. Equal treatment.

This is done through 4 categories of activity, or streams of transaction selection: A. randomness, B. age, C. density of transfers/size, and D. a fee market segment, each comprising 25% of the size within a given block. All of this comes with a contingency of randomness within the 3 non-random streams when ‘all else is equal’ under their selection rule sets.

  1. Is this a ‘real problem’? Is it a persistent problem? Context always matters, but there is value in doing this even if we’ve partially addressed these issues in other ways.

With Hydra and other layer 2 and similar solutions coming online, it is unclear to me whether this will be a persistent issue. Sometimes we can get too caught up in the way things are right now.

Projections into the future using old tradfi and econ ideas of linear or stagnant systems don’t make sense in crypto, which is rapidly evolving and responsive. While it makes sense in general to discuss solutions to high congestion, we need to keep our core values upheld and not sacrifice greater principles for lesser issues which only affect some users or categories of transaction needs.

It is my understanding that Hydra and similar layer 2s will give DeFi platforms such as MELD the options they need to get around this problem. So I wonder if there is a short-sighted aspect to tackling a problem which is already halfway solved, with the first Hydra head going live.

I could envision a special layer 2 which operates based on a fee model for its services. We could see a small ecosystem of layer 2s which add functional pathways outside of the random award of blocks we have on the main chain.

Side chains also offer these features, and if someone doesn’t like the rules on the main chain or needs different ones, they can already go and create their own side chain environment. So between Hydra, side chains, and other layer 2s, I’m not sure we have a problem which won’t go away over the next year.

But to be fair I think the idea of categories or rules makes some sense and could improve the main chain.

  2. Whatever the solution, it cannot be based on fees alone! We already know that doesn’t work: it is a race to the bottom for TPS metrics which doesn’t solve main chain issues, as with ETH.

Whether we make updates to our layer 1 with a tiered system in Ouroboros, or instead think about a mixed layer 2 solution with varied rules, I think a diversity of approaches will be vital.

A diversity of equally treated and weighted transaction categories on the main chain would offer flexibility without sacrificing stability or low fees for most people. I put forward one possible system and set of rules below.

I would put an equal weight on each category by block size, and I think it vital we create several categories instead of tiers. This is a semantic point about naming what we are doing: a tier implies a hierarchy, while a category of block creation implies a simple ‘difference’ and makes its meaning clearer.


A. Randomised selection - the current sortition model is great and avoids any bias, so it must remain, with a flat fee.

B. Age - it is essential that transactions do not get stuck or delayed for a long time by chance, so a continual sweep for the oldest transactions could be done.

I don’t know what we can determine, in an arbitrary/democratic sense, as ‘old’, but let’s call it for the sake of argument any transaction older than 1 hour / x number of blocks which has not been picked up by any block in that time. These fall into the ‘oldest transactions’ category and are selected in an intentional sequence, oldest first, again with the standard low flat fee. The contingency: when no transactions are old enough, or too many of a similar age/block age exist to fit into the current block, a random selection is applied among those similar transactions, or across the queue when there are no old ones at all.
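As a minimal sketch of how this age sweep could work, assuming a hypothetical mempool of `(tx_id, submitted_height)` pairs and an illustrative cutoff (all names and the threshold are my own assumptions, not a spec):

```python
import random

# Illustrative age threshold: roughly 1 hour expressed in blocks.
AGE_THRESHOLD_BLOCKS = 180

def select_by_age(queue, capacity, current_height, rng=random):
    """queue: list of (tx_id, submitted_height); returns chosen tx_ids.

    Oldest transactions first; ties on age are broken randomly, per the
    'one step away from randomness' contingency.
    """
    # Only transactions older than the agreed cutoff qualify for this stream.
    eligible = [tx for tx in queue
                if current_height - tx[1] >= AGE_THRESHOLD_BLOCKS]
    # Shuffle first, then stable-sort by age: equal-age ties end up in
    # random order while the overall ordering stays oldest-first.
    rng.shuffle(eligible)
    eligible.sort(key=lambda tx: tx[1])
    return [tx_id for tx_id, _ in eligible[:capacity]]
```

The shuffle-then-stable-sort idiom is one simple way to encode “oldest first, random when equal” in a single pass.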

C. Price / Fee Market - this can be an open market where transactions are put through based on how large a fee the initiator of the transaction is willing to offer. This model is fraught with problems, as we have seen with ETH, and I think the key factor here is to limit it within the system. If we end up with 4 categories, then the fee market can only comprise 25% of any given block; with 5 categories, 20%, in an equal weighting of categories.

This offers those who ‘neeeeed’ a fast transaction the option, but it avoids the problems of fee markets making regular transactions unaffordable for everyone else, or of those willing to pay more being able to consistently front run everyone else.

In this way, if no one is bidding higher than the standard flat fee, or there are not enough bidders to fill this portion of the block, then this category acts as things do now and reverts to a random selection to fill up its 25% block size allocation. If there is no congestion, most users and DeFi smart contract operators wouldn’t choose to pay more than the standard flat fee used in all the other categories. But they can if they want to do so.
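A minimal sketch of this fee-market slice with its random fallback, assuming a hypothetical queue of `(tx_id, offered_fee)` pairs and a placeholder flat fee (all names and values are illustrative):

```python
import random

STANDARD_FEE = 1  # placeholder flat fee in arbitrary units

def select_fee_market(queue, capacity, rng=random):
    """queue: list of (tx_id, offered_fee); returns chosen tx_ids.

    Highest bids above the flat fee win; identical fees are broken
    randomly; if bids don't fill the slice, revert to random selection.
    """
    bids = [tx for tx in queue if tx[1] > STANDARD_FEE]
    rng.shuffle(bids)                       # random tie-break for equal fees
    bids.sort(key=lambda tx: tx[1], reverse=True)
    chosen = bids[:capacity]
    if len(chosen) < capacity:              # not enough bidders: go random
        rest = [tx for tx in queue if tx not in chosen]
        chosen += rng.sample(rest, min(capacity - len(chosen), len(rest)))
    return [tx_id for tx_id, _ in chosen]
```

When nobody bids above the flat fee, `bids` is empty and the whole slice falls through to the random branch, which is exactly the “acts as things do now” behaviour described above.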

So we need to have a contingency of randomness for every category, fees, size, age, and any other category of activity or streams of transactions which are implemented.

D. Size of transaction / number of transfers of value per size of space in the block requested - I think this is also an important category and can help keep day-to-day smaller/simpler transactions from regular users moving. If there are 20 simple/small wallet-to-wallet transactions which could fit into the same block space as 1 very large smart contract transaction, why should those 20 users have to wait for that 1 other user? This supports the principles of efficiency and equality amongst all those who initiate transactions.

This category would be focused on efficiency and throughput for the most people. Obviously things like Hydra and other layer 2s, or ways of bundling those smaller transactions, make sense, and Cardano also has the eUTXO model to address this issue in part, where a transaction isn’t just one transfer of value, but I still think a size prioritisation could make sense.

The contingency of randomness also applies for size. We could have size baskets: say we are looking at the smallest size and there are 1,000 transactions of that size in the queue, but only room for 500 of them in the current block. In this case they would be randomly selected. This is fair and is not done by age, fee offering above standard rates, or other factors which are addressed by the other categories.
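A minimal sketch of the size-basket idea, assuming a hypothetical queue of `(tx_id, size_bytes)` pairs (the bucketing by exact size is a simplification; a real rule might use size ranges or transfer density):

```python
import random

def select_by_size(queue, capacity, rng=random):
    """queue: list of (tx_id, size_bytes); returns chosen tx_ids.

    Smallest size basket first; when a basket holds more transactions
    than the slice can fit, pick randomly within that basket.
    """
    baskets = {}
    for tx_id, size in queue:
        baskets.setdefault(size, []).append(tx_id)
    chosen = []
    for size in sorted(baskets):            # smallest basket first
        room = capacity - len(chosen)
        if room <= 0:
            break
        basket = baskets[size]
        if len(basket) <= room:
            chosen += basket                # whole basket fits
        else:
            chosen += rng.sample(basket, room)  # over-full: random pick
    return chosen
```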

Overall for these categories of activity or streams of transaction queue selection rules:
This means that if no one is offering a higher than standard fee, no transactions are old, and there are an abundance of small transactions, then the overall state of the system will tend towards the current random selection of transactions from the queue. This means we have augmented the network and added value without sacrificing anything.

Perhaps there are other meaningful categories I’m not thinking of, but the point would be to keep it relatively simple so that these selection rules can operate in a fairly transparent and easy to understand way which is technically simple to implement. Complexity and a lack of understanding begets onboarding problems for regular and new users and it can also lead to unintentional exploits or arbitrage possibilities for others.

How will this work? Do we rotate blocks where each set of rules gets its own block in a sequence? I don’t think that would work well, as part of the value is created by offering rapid settlement for the fee market category.

I’d think we could have a set of rules for all stake pool operators which streams the transaction queue, a sequential set of rules applied to each and every block created, allocating a portion of the block size to each category of activity.

The first rule would be ‘is there a higher fee offer than the standard flat fee based on transaction size?’. This splits off the fee market category first, and then all other transactions go into the other streams of sequential rules, or sortings of the transaction queue. If we have the 4 categories of random, age, size, and fee which I’ve proposed, then each gets 25% of the size within each block.

So step one splits off 25% of the block size for fee based transactions, then the other 75% go into a set of rules which carve out their own blocks based on how they operate.

This could all happen in milliseconds for the randomly selected SPO who won that block using a common set of rules as each block is created.

The sensible way to do this in my mind is to do the following sequence.

  1. Fees split off - they fight it out for their 25% of the block based on who pays the most and are easily selected.

  2. Age rules are applied and the oldest 25% of transactions are picked up for their proportion of the block.

  3. Size rules / density of transfers apply, and the smallest 25% of transactions by size go into the block. This could be done in a more intelligent way, based on metadata showing how many total transfers of value (the number of transfers, not the overall value of ADA) are contained within a transaction. I.e. a medium sized transaction carrying 1,000 users’ worth of transfers would get a higher priority than a simple single wallet-to-wallet transaction, based on the density of transfers contained. The principle here is efficiency, so perhaps size isn’t the right word. This could be a little tricky and somewhat arbitrary in the details, but a solution we can agree on could be found.

  4. Random selection rules apply to grab transactions from the queue for the final 25% of block size. They could be old, big, small, or even have a fee offering which wasn’t picked up in the fee market rules. Note this means you would still pay the higher fee you offered even though you were picked up in the random stream…you had your chance to win the fee market and lost in step 1, but that doesn’t mean you pay the standard flat fee.
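The four-step sequence above can be sketched end to end. This is a hypothetical toy model, assuming capacity counted in transactions rather than bytes and illustrative field names (`fee`, `height`, `size`); thresholds and structure are my own assumptions, not a concrete spec:

```python
import random

STANDARD_FEE = 1  # placeholder flat fee

def build_block(queue, block_capacity, rng=random):
    """queue: list of dicts with tx_id, fee, height, size; returns tx_ids.

    Applies the four streams in sequence: fee market, age, size, random,
    each taking an equal quarter of the block's capacity.
    """
    slice_cap = block_capacity // 4
    remaining = list(queue)
    block = []

    def take(selected):
        for tx in selected:
            remaining.remove(tx)
            block.append(tx["tx_id"])

    # 1. Fee market: highest offers above the flat fee.
    bids = sorted((tx for tx in remaining if tx["fee"] > STANDARD_FEE),
                  key=lambda tx: -tx["fee"])
    take(bids[:slice_cap])
    # 2. Age: oldest submissions first.
    take(sorted(remaining, key=lambda tx: tx["height"])[:slice_cap])
    # 3. Size: smallest transactions first.
    take(sorted(remaining, key=lambda tx: tx["size"])[:slice_cap])
    # 4. Random: fill the last slice from whatever is left.
    take(rng.sample(remaining, min(slice_cap, len(remaining))))
    return block
```

Because each step removes its picks from `remaining` before the next step runs, a losing fee bid can still be swept up by the age, size, or random streams, matching step 4’s note above.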

I think this would add a lot of value with a few categories to serve different needs.

We need to offer a fee market for those willing to pay more. We need to offer a timely experience by assuring old transactions will not be left behind due to chance. We need to offer an efficient system where small transactions, or the highest density of value transference, go through to serve more people faster. And finally we need the ultimate rule of fairness, which is random selection.

Anyhow I hope people see this and it is a worthwhile consideration and addition to the discussion.

I love you all, long live Cardano! Peace for humanity.

A minor addendum or additional thought.

There can be some wiggle room in the 25% allocation for each category. Say a transaction is 5% of the block size, and the fee market’s ordered selection leaves only 4% of its 25% share after the last selection by fee; then the extra room required to fit in that one last transaction would be taken from the purely random category, which drops to 24% for that block.

But this would be limited in scale, and there would be the possibility of skipping a large transaction which doesn’t fit, in favour of the next-highest fee below it, in order to fit the fee market’s selections into the block. It would be semi-arbitrary, but we can pick a tolerance size for when the wiggle room applies and when the item is skipped. I’d think 1% or less, which would prevent the purely random category from ever dropping below 22% even if all 3 of the other categories somehow used their full 1% size tolerance overflow.
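A minimal sketch of this overflow tolerance, assuming sizes expressed in the same arbitrary units as the block size and a hypothetical `TOLERANCE` of 1% (the single-borrow rule and all names are illustrative assumptions):

```python
# Suggested cap on how much a stream may borrow from the random slice.
TOLERANCE = 0.01  # 1% of block size

def fill_fee_slice(bids, slice_size, block_size):
    """bids: list of (tx_id, fee, size), already sorted by fee descending.

    Returns (chosen_ids, borrowed). At most one transaction may overflow
    the slice, by up to TOLERANCE * block_size borrowed from the random
    slice; larger misfits are skipped in favour of the next-highest fee.
    """
    chosen, used, borrowed = [], 0.0, 0.0
    limit = slice_size + TOLERANCE * block_size
    for tx_id, _fee, size in bids:
        if used + size <= slice_size:
            chosen.append(tx_id)            # fits inside the normal slice
            used += size
        elif used + size <= limit and borrowed == 0.0:
            chosen.append(tx_id)            # overflow within tolerance:
            borrowed = used + size - slice_size  # shrink the random slice
            used += size
        # Otherwise: skip this bid and try the next-highest fee.
    return chosen, borrowed
```

Once a borrow happens, `used` already exceeds `slice_size`, so no further transactions can be taken; that keeps the overflow to the single tail-end case described above.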

I’d also go for a direct and hard fee ordering with no choices for SPOs. This is not a maximal value game for the SPO. Say the SPO sees a series of slightly lower fees which they could pick to gain the most value for themselves: this is not allowed. They don’t see or choose anything; the common set of rules is simply applied, and if the SPO didn’t follow the rules then, as now, the block would fail, they’d get no reward, and they’d incur a penalty on future block selection for a time.

These are just a new set of automated rules which apply to block creation and SPOs passively apply them when they win a block.

No other consideration or logic would apply, such as size of transaction or total amount of fees within a block. This would avoid the problem of SPO optimisation of fee collection…it is a flat rule, with the highest fees being sequentially selected in this segment. When two fees are identical, the rule applied for selection is the randomness principle, without regard to any other factor - with the minor exception for size in the case of an overflow at the tail end of a block’s 25% fee market space.

The rules for selection would be fixed and automated to be applied to fill up the blocks with many contingencies and logic rules to ensure we never get stuck. When all else is equal or anything is unclear, randomness applies. To avoid issues, we only ever take one step away from randomness as the key value add for each category.

The only rule in the fee market segment is highest fee; no other criteria can be applied by the SPO for selection. This keeps things moving along with a hard set of rules and keeps each category pure and purposeful in its intention.

Each divergence from purely random selection comes at a cost to a core principle of the system, and we should only ever take 1 step away - be it age, density of transactions, or a higher fee being offered.

Doing it this way avoids the problem plaguing ETH and leverages values and principles to guide the technology and systems we use together.

Warm Regards

I believe this is the blog article you are thinking of? I had bookmarked only one on this subject:

FYI 1 year later we had a CIP submission formulating some of these ideas which has currently been stalled for a few months (its authorship & most recent criticism were both from IOG):