PCP_max_tx_ex_mem_PiLanningham

Here’s a write-up of the motivation and evidence for increasing either transaction or block memory units in anticipation of increased DeFi activity.

This is not mutually exclusive with increases to other parameters, but in my opinion, memory units have the best impact-to-risk ratio.

(Shared as a Google Drive document, since I don’t have permission to upload a PDF file. :sweat_smile:)

Regards,
Pi Lanningham

13 Likes

Here are some charts from December 19, 2023 data (a typical heavy-load day) to help with the discussion.

Note that only blocks and transactions containing Plutus scripts are included.

Overview

Block Load charts: Block Size, Block Memory Units, Block Execution Steps

Transaction Load charts: Transaction Size, Transaction Memory Units, Transaction Execution Steps

SQL Queries

Blocks

-- Per-block load over the last 24 hours (cardano-db-sync), normalised by the
-- current mainnet block limits, so 1.0 means "at the limit".
COPY
  (SELECT block.time,
          block.size/90112.0 AS size,            -- max block body size: 90,112 bytes
          sum(unit_mem)/62000000.0 AS mem,       -- max block execution memory: 62,000,000 units
          sum(unit_steps)/20000000000.0 AS steps -- max block execution steps: 20,000,000,000
   FROM tx
   JOIN redeemer ON tx.id=redeemer.tx_id
   JOIN block ON block.id=tx.block_id
   WHERE block.time > (NOW() AT TIME ZONE 'UTC' - INTERVAL '24 HOUR')
   GROUP BY block.id
   ORDER BY block.id) TO '/tmp/blocks.csv' csv header;

Transactions

-- Per-transaction load over the last 24 hours, normalised by the current
-- mainnet per-transaction limits.
COPY
   (SELECT max(block.time) AS time,
          tx.size/16384.0 AS size,               -- max transaction size: 16,384 bytes
          sum(unit_mem)/14000000.0 AS mem,       -- max tx execution memory: 14,000,000 units
          sum(unit_steps)/10000000000.0 AS steps -- max tx execution steps: 10,000,000,000
   FROM tx
   JOIN redeemer ON tx.id=redeemer.tx_id
   JOIN block ON block.id=tx.block_id
   WHERE block.time > (NOW() AT TIME ZONE 'UTC' - INTERVAL '24 HOUR')
   GROUP BY tx.id
   ORDER BY tx.id) TO '/tmp/txs.csv' csv header;
1 Like

Also, some additional charts comparing scripts by Plutus version, both by number of transactions and by size of transactions:

Cardano Scripts Plutus Version by number of transactions

Cardano Scripts Plutus Version by size of transactions

1 Like

Related as well:

1 Like

Here is one thing that concerns me about any proposed change that might increase propagation delays:

The effect of increased propagation delays is felt disproportionately by the more physically decentralised pools. This is because pools like mine in Australia take slightly longer to receive blocks, and the effect of this becomes quite massive at the 1-second demarcation.

Consider that most pools are located within the EU and USA, with many co-located in data centres owned by big tech companies. These pools all have propagation delays of less than 1 second between themselves. On the other hand, a pool in Australia, Africa, Papua New Guinea, or Indonesia might take just over 1 second to receive these blocks. This means that the PNG or Indonesian pool will get three times as many “fork battles” as the pools in the EU/USA data centres.

The pools with less than 1 second delays only get “fork battles” if another pool is awarded slot leader for the exact same slot, whereas the physically remote (physically decentralised) pool, suffering just over 1 second delays, will have fork battles with pools awarded the same slot, or the slot before, or the slot after. The physically more decentralised pool will have 3 times the number of “fork battles” and it will lose half of these battles.

The chance that any other pool is awarded a particular slot is 5%. Consequently, a pool in an EU/USA data centre can expect to have around 2.5% of its blocks orphaned, whereas a pool with just over 1 second of propagation delay can expect to have 7.5% of its blocks orphaned.

I.e., physically remote pools suffering 1-second delays are unfairly punished by the protocol, with three times as many of their blocks orphaned.
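A minimal sketch of this arithmetic, under the assumptions stated above (an active slot coefficient of 0.05, half of all fork battles lost, and each whole second of delay exposing a block to one extra competing slot on either side); this is an illustrative model, not a measurement:

import math

F = 0.05  # chance that some other pool is elected leader for any given slot

def expected_orphan_rate(delay_seconds: float) -> float:
    # Slots whose leaders this block can end up battling: its own slot, plus
    # floor(delay) slots on each side; half of those battles are assumed lost.
    competing_slots = 2 * math.floor(delay_seconds) + 1
    return 0.5 * competing_slots * F

print(expected_orphan_rate(0.8))  # -> 0.025 (2.5%) for a well-connected pool
print(expected_orphan_rate(1.2))  # -> 0.075 (7.5%) just over the 1-second mark
print(expected_orphan_rate(2.0))  # -> 0.125 (12.5%) at a 2-second delay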

I think people need to consider what decentralisation means for them. This unfairness should be fixed, because otherwise it creates a centralising force pushing everyone to move their block producer to an EU/USA data centre owned by Amazon.

1 Like

Yes, exactly; this is why the analysis of factors is so sensitive, and why I’m personally opposed to a block size increase.

I think that my CIP is the most effective way to “increase” the free block space, and that increasing the memory units will have very little impact on block propagation.

2 Likes

I am not necessarily opposed to a block size increase, but I think we should also consider the following:

  1. Increase the slot duration to 2, 3 or even 4 seconds (which is still less than the 5-second propagation limit assumed by the security analysis).
  2. Settle “fork battles” using the block VRF only when the slot number is identical.

The combination of both these suggested changes achieves the following:

  • Makes it impossible to game the VRF as a method to maliciously cause the orphaning of the previous pool’s block.
    Currently, a block producer can look at the previous pool’s block VRF and decide whether he should build upon this previous block or the one before it. Since using the VRF to settle fork battles is deterministic, he can easily decide which block to build upon, knowing whether he will win the “fork battle”. If he deliberately causes other pools’ blocks to be orphaned, then his stake pool will earn a higher yield for delegators relative to the average. Consider that a coalition of such malicious pools could work together to advantage their members by knocking out blocks made by pools that are not members of their group.
  • Removes the centralising force inducing stake pool operators to move their block producers into data centres owned by BigTech in order to reduce propagation delays (so they suffer fewer fork battles and thereby earn more rewards).
2 Likes

Changing the slot duration would likely break a dramatic amount of code (which makes assumptions about the slot length) and native scripts (which use slot numbers rather than POSIX times). In practice, the slot duration will likely never change again.

Furthermore, I’m not sure the suggested changes achieve the goal you outline, without compromising the security of Ouroboros. I know that the design there is dependent on some very subtle arguments, so we’d need a proper, formal analysis of the change.

Regardless, I think both are orthogonal to this parameter change request.

1 Like

Also, to be clear, this proposal is also not arguing for a block size increase. It’s arguing for an increase to transaction memory units, which I believe will have very little impact on block propagation.

3 Likes

@Quantumplation that is a good point. You are in a better position than me to propose technical solutions to this additional problem I have pointed out, and I am grateful to have you thinking about it.

I can see other ways to achieve the objective of ensuring that it is only possible to have a block every, say, 3 or 4 seconds. One possible solution is that slots could still be numbered every second, but only every 3rd slot, say, is a valid one that could have one or more leaders. Maybe all that needs to change is adding a mod 3 into the leadership calculation somewhere and then re-calibrating the parameters so that there is still, on average, 1 block every 20 seconds (a rough sketch of the idea is below). I understand that this might require a majority of pools to upgrade to a new version first before the change is activated.
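Purely to illustrate where such a gate could sit, here is a minimal sketch; this is not how cardano-node actually computes leadership, and the threshold formula is a simplified stand-in for the Praos check:

SLOT_SPACING = 3           # assumption: only slots divisible by 3 are eligible
ACTIVE_SLOT_COEFF = 0.05   # today's f; scaled up below to keep ~1 block per 20 s

def is_slot_leader(slot: int, vrf_value: float, stake_fraction: float) -> bool:
    """vrf_value is an abstract stand-in, in [0, 1), for the node's VRF output."""
    if slot % SLOT_SPACING != 0:
        return False  # ineligible slot: nobody can lead it
    # Simplified Praos-style threshold, with f tripled so the average block
    # rate stays at roughly one block every 20 seconds.
    threshold = 1 - (1 - ACTIVE_SLOT_COEFF * SLOT_SPACING) ** stake_fraction
    return vrf_value < threshold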

The problem statement is:

  1. It is currently possible for a coalition of block producers to game the system by deliberately creating forks where they know they will win the “fork battle” and cause the previous block to be orphaned. A malicious coalition is currently able to pick and choose which blocks they orphan.
  2. It is currently a disadvantage to geographically decentralise your stake pool if your propagation delay will be over 1 second.

If we are serious about decentralisation then we need to tackle such problems, particularly ones that are easy to technically solve. Does decentralisation mean having stake pools in the corners of the earth where it might take 1 second or more to send/receive a block? Do we want stake pools housed in Africa, Aus, NZ, PNG, or even in space? Or should every pool be in USA/Europe? What happens if data centre providers choose to selectively manipulate network delays depending on which pools they like?

1 second is a pretty small amount of time, yet it triples the orphan rate from 2.5% to 7.5%. A 2-second delay increases the orphan rate to 12.5%. Such a massive change in orphan rate completely dwarfs the variable fees set by nearly every single stake pool. That is one hell of a reason for a stake pool to give up on geographic decentralisation and move their block producer to an Amazon data centre.

1 Like

Regardless, this is off topic, and I would suggest you either create a CPS (Cardano Problem Statement), or a separate forum thread.

2 Likes

Hey, come on Pi. I don’t think my concern is off-topic.

Stake pools housed in Australia, New Zealand, and other countries are teetering on the edge of the 1-second chasm. Anything that tips them over that 1-second delay increases their orphan rate threefold, from 2.5% to 7.5%, which can make them uncompetitive and force them to shut down.

In one of your earlier responses you seemed to dismiss my worries by pointing out a technicality. I then responded with a possible simple solution which could address your technical objection.

I currently notice that my block delays can increase significantly simply when blocks become predominantly more full, i.e. without any increase in the memory unit budget or other limits, but simply with increased chain activity.

Even though I worry that an increase in the memory unit budget might further increase block delays by enabling more chain load, I am not opposed to your proposal. Nevertheless, I think Cardano needs to also address the problem I have outlined, because I don’t think just increasing the block memory units is going to be sufficient if DeFi activity really heats up. I think the next suggestion from yourself, or others in the community, is going to be a further increase in block size.

One day Cardano might be grateful to have some stake pools housed on the other side of the world in Australia and New Zealand, where it can take over 1 second to receive a block. Forcing these pools to relocate to an Amazon data centre to survive will make Cardano more centralised and fragile.

I don’t think your proposal is irrelevant, but it’s certainly off topic for this thread, which is about whether to change the transaction memory units.

I wasn’t dismissing your concerns, just pointing out that changing the slot length is likely impossible. Please don’t put words in my mouth or ascribe motives to me that aren’t there in the text.

If you open a dedicated thread on this topic, I’ll happily help you explore the ramifications of different solutions.

If you want to make an argument for why the memory unit limit will have an impact on propagation delay, that would be on topic for this thread, but I don’t think the data bears that out. It makes sense that block size would have an impact on block propagation times (it affects the download time of the block, which dominates the propagation time), but given nearly 2 years of data, it doesn’t seem like adoption times and memory units are strongly correlated.
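For what it’s worth, a rough sketch of how anyone could check this themselves, assuming a per-block CSV like the db-sync query above and a per-block propagation delay export; the file names and the delay_ms column are assumptions, not a real pooltool export format:

import pandas as pd

load = pd.read_csv("blocks.csv", parse_dates=["time"])         # time, size, mem, steps
delays = pd.read_csv("propagation.csv", parse_dates=["time"])  # assumed: time, delay_ms

df = pd.merge_asof(load.sort_values("time"), delays.sort_values("time"), on="time")

# If memory units drove propagation delay, 'mem' would correlate with delay_ms
# about as strongly as 'size' does.
print(df[["size", "mem", "steps", "delay_ms"]].corr()["delay_ms"])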

1 Like

I think I did.

1 Like

And, I made a forecast that your proposed solution may not be enough:

Which, if this forecast eventuates, would cause me more concern.

1 Like

Tone can be difficult to convey in text, so just know I’m not trying to be snarky. :sweat_smile: I appreciate the concern and I’m trying to engage with it, but I don’t think the concern is terribly warranted, based on over 2 years’ worth of data.

However, this concern alone isn’t really an argument; it’s a hypothesis. I’d be interested if you have any data to back this up.

My proposal includes links to quite a bit of analysis showing that there is a small, but not significant or rapidly growing, correlation between memory units and propagation time (certainly much less of a correlation than other parameters that could be tweaked).

The chain is already being pushed to its limit. It is my opinion that the data shows that raising this limit wouldn’t cause stake pool operators to experience significantly longer delays, but would let the chain get more useful economic work done with the blocks it does produce.

That is, I believe this would allow the existing blocks to “work smarter, not harder”, whereas raising the block size limit would be the opposite.

Again, please try not to put words in my mouth; it is really exhausting to correct. I’ve repeatedly said, here and elsewhere, that I don’t think raising the block size would be an effective solution, as it would just be immediately filled by more Plutus v1 scripts. That’s why I proposed this change:

Which would free up significant (existing) block space.

However, nearly every DeFi protocol right now is bottlenecked by memory units, by nearly half.

Increasing memory units lets us fit more useful work in the same space, potentially by up to a factor of two.

So, combined, these represent a 40% savings in our existing block size footprint, and a potential near doubling in throughput for existing DeFi applications.
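As a back-of-envelope illustration of how those two numbers combine (the figures below are the rounded claims from this post, treated as assumptions rather than measurements):

# Illustrative arithmetic only; both inputs are assumptions from the prose above.
space_saved = 0.40          # assumed block space freed by the reference-script change
work_per_tx_multiplier = 2  # assumed gain if txs are mem-capped at ~half their useful work

bytes_needed = 1 - space_saved  # ~60% of today's block bytes for the same transactions
print(f"~{bytes_needed:.0%} of today's block bytes, ~{work_per_tx_multiplier}x the useful DeFi work")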

You’re certainly correct that there is still an upward ceiling and even this increase could become strained, but in my opinion that doesn’t mean we shouldn’t pursue sensible changes that don’t undermine the security of the protocol. That boils down to a slippery slope argument that fails, because I’m already agreeing that we shouldn’t make large sacrifices in propagation delay or global inclusiveness.

If you have data, or conduct an experiment that falsifies any of the above, or a benchmark that identifies the critical inflection point at which increasing memory units would become dangerous to block propagation, please do share! That’s why I started the process to discuss the changes. But continuing to bring unsubstantiated worries alone isn’t really going to be as productive, unfortunately.

2 Likes

And again, I’m not trying to be combative; if you want to schedule a call to go over the data and talk about potential risks and solutions, I’d be happy to make some time available!

2 Likes

I do apologise for suggesting that you might later ask for a block size increase. I had forgotten reading that you had stated your opposition to that idea. Nevertheless, I remain concerned that once AXO and other DeFi initiatives launch on mainnet, some users will be asking for it.

I think your proposal is well researched and I am in favour of it.

Maybe if I had your technical proficiency I might be able to conjure up such a test to see what happens when memory units are increased. Unfortunately I don’t have such skill.

For one of my relays, and my current block producer, I use a couple of ARM machines which have low processing power, being only the equivalent of a couple of Raspberry Pis with more RAM. This, combined with being on the other side of the world, is the reason I am quite sensitive about this topic.

I do send my block delay data from my block producer to pooltool. Pooltool receives similar data from other providers about the blocks I produce. These are the graphs that pooltool displays for my pool. For some reason producer delays (762ms) are higher than receiver delays (717ms). You will see that there is quite a tail of blocks that are already over the 1 second limit in both directions.

But this is better than I remember it was in late 2022 and early 2023. I presume the improvement has resulted from:

  • Cardano-node improvements - pipelining etc.
  • More pool operators running their relays in P2P mode.
  • Less load on the chain.

I have a continual monitor running which outputs the block delay for each block when it is received. This delay information includes the time it takes the node (block producer) to verify the block, because my script measures the time until the log message “Chain extended, new tip”. This is the information I send to pooltool.
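A minimal sketch of that kind of measurement, using a “Chain extended, new tip” line like the ones quoted below; the regex and the mainnet slot-to-time offset are my own stand-ins, not the actual monitoring script:

import re
from datetime import datetime, timezone

# On mainnet, POSIX time of a Shelley-era slot = slot number + 1591566291
# (e.g. slot 114048952 -> 2024-01-18 22:00:43 UTC).
SHELLEY_OFFSET = 1_591_566_291

line = ('ChainDB:Notice:664] [2024-01-18 22:00:43.55 UTC] Chain extended, new tip: '
        '79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b at slot 114048952')

m = re.search(r'\[([\d-]+ [\d:.]+) UTC\] Chain extended, new tip: \S+ at slot (\d+)', line)
if m:
    seen = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S.%f').replace(tzinfo=timezone.utc)
    slot_start = datetime.fromtimestamp(SHELLEY_OFFSET + int(m.group(2)), tz=timezone.utc)
    delay_ms = (seen - slot_start).total_seconds() * 1000
    print(f'slot {m.group(2)}: chain extended {delay_ms:.0f} ms after slot start')  # -> 550 ms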

1 Like

I don’t understand enough about how an increase in the memory units affects block verification times. But I naturally assume that increasing the available memory units will allow more smart contract executions in each block, and therefore increase the amount of verification required per block.

With that in mind, here is some data comparing a couple of low-power ARM64 machines with a faster AMD64 machine and a Contabo VPS:

BP (ARM64)

These are the logs on my BP (in Australia) for a recent block:

Forge:Info:678] [2024-01-18 22:00:43.09 UTC] fromList [("credentials",String "Cardano"),("val",Object (fromList [("kind",String "TraceNodeIsLeader"),("slot",Number 1.14048952e8)]))]
Forge:Info:678] [2024-01-18 22:00:43.30 UTC] fromList [("credentials",String "Cardano"),("val",Object (fromList [("block",String "79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b"),("blockNo",Number 9821302.0),("b>
ChainDB:Info:664] [2024-01-18 22:00:43.55 UTC] Valid candidate 79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b at slot 114048952
ChainDB:Notice:664] [2024-01-18 22:00:43.55 UTC] Chain extended, new tip: 79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b at slot 114048952

The times from the slot start were:

  • 90 ms: log confirming it was leader
  • 300 ms: made its block
  • 550 ms: checked its block, determined it was a valid candidate, and extended its tip

I.e., the BP took 250 ms to check its own block body contents, which it didn’t need to download because it produced it.

relay1 (ARM64)

My relay1 is right next to the BP, with cardano-cli ping times of 1 ms and a 10 Gbit connection:

[2024-01-18 22:00:43.30 UTC] [TraceLabelPeer (ConnectionId {localAddress = 10.x.x.7:2700, remoteAddress = 10.x.x.9:2700}) (Right >
[2024-01-18 22:00:43.53 UTC] Valid candidate 79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b at slot 114048952
[2024-01-18 22:00:43.53 UTC] Chain extended, new tip: 79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b at slot 114048952

It received the block almost instantly and was able to extend its tip at the 530 ms mark after the slot start, which was even quicker than the BP extended its own tip. (Pipelining works well, and downloading the block body was almost instant.) So this relay took around 230 ms to check the contents of the block (assuming 0 ms body download time). I presume the BP was a bit slower at checking the block body due to some extra CPU load causing a context-switch delay?

Note that both this relay1 and the BP are running on ARM machines with raspberry pi 4 equivalent processors but with 24GB and 32GB RAM respectively.

hidden relay (AMD64)

My hidden relay is also right next to the BP, but it is running on an AMD64 machine with a faster processor than the 2 ARM machines. It has cardano-cli ping times of 1 ms and a 1 Gbit network connection. Its logs showed:

[2024-01-18 22:00:43.39 UTC] Chain extended, new tip: 79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b at slot 114048952

It extended its tip after only 390 ms. So it took only 90 ms to check the block body contents (assuming 0 ms body download time); if a 10 ms body download time is assumed, then it took 80 ms to check. Its processor is around 3-4 times faster than the ARM machines’, based on comparing cardano-node restart times. So 80-90 ms sounds about right.

relay3 (Contabo vps)

My relay3 is in the USA (a Contabo VPS), on the other side of the world from the BP, and has cardano-cli ping times of 219 ms:

[relay3:cardano.node.BlockFetchDecision:Info:324] [2024-01-18 22:00:43.41 UTC] [TraceLabelPeer (ConnectionId {localAddress = 144.126.157.46:2700, remoteAddress = 180.150.102.25:2700}) >
[relay3:cardano.node.ChainDB:Info:312] [2024-01-18 22:00:44.20 UTC] Valid candidate 79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b at slot 114048952
[relay3:cardano.node.ChainDB:Notice:312] [2024-01-18 22:00:44.20 UTC] Chain extended, new tip: 79f3486bfe292da9eb47cdc92f7b495ccb467a7357ee4b32dae1bd724979d59b at slot 114048952

This relay logged that it had received the block header and started downloading the block body directly from the BP 410 ms after the slot start, and it took until 1200 ms after the slot start to fully download the body, check its contents, and extend its tip. This is just slightly less than the propagation delay (1250 ms) that pooltool averaged for this block from reports by other Cardano nodes.

The logs don’t indicate what proportion was spent downloading the block body and what proportion was spent verifying its contents. Maybe 100-150 ms to verify and 650-700 ms to download the body would be my guess, because I think the Contabo VPS processor would be somewhat quicker than my ARM machines but maybe a bit slower than my dedicated AMD64 machine.

In other words, this agrees with @Quantumplation’s contention that the majority of the delay occurs in downloading the block body. I.e., increasing the block size would be the major contributor to propagation delays, but computation time is also important, particularly for lower-power machines. Specifically, my ARM64 machines download the block body just as fast as my AMD64 machine, as they can saturate their 10 Gbit network connections (they are not processor limited). But the ARM64 machines currently take 150-170 ms longer to verify the body contents. And I assume that verification times will increase if the memory units are increased.

Already beyond the 1 second cliff

As far as fork battles go, the important time for the next block producer is the time until it extends its tip, because it will build the next block upon its current tip. Other relays incorporated this block into their chains 1250 ms after the slot start, which is 250 ms after the start of the next slot. Therefore, if there had been a valid slot leader for this next slot, then my block would have suffered a “fork battle”.

1 Like

Here’s a reason why increasing execution memory limits might increase computation usage also. As it stands, computation and memory usage for most programs are almost perfectly correlated. It happens that programs tend to hit the memory limit first, but that means that if we raise the memory limit then their usage of both memory and computation is likely to go up.

This shouldn’t matter for security because the computation limits should be set to a safe level, so if we get closer to that it shouldn’t be a problem. But it might matter if the in-practice propagation times disadvantage certain geographies as is being asserted here (which is perfectly compatible with the protocol’s security guarantees being fine).

(I also believe it’s true that the script execution time allowance is small relative to the overall block processing time, so it shouldn’t make that much of a difference overall in any case.)

3 Likes