How long does it take to sync a new cardano-node server running v1.29?

Is it normal to wait over 48 hours for a new cardano node to sync the blockchain these days?
I have a 1 gig internet connection, static IP, 4 cpus, 10 gig ram, SSD storage.
CPU utilization is about 50% and memory utilization is about 75%, disk capacity consumption is only at about 30%, all of which appears nominal.

Internet connection is healthy (no packet loss, expected speed test results, etc…) and sync is at about 97% since starting the new node on Friday (about 48 hours now).

I’ve read elsewhere in posts about a year old that new cardano-node sync could take up to 4 hours.
I get that there are more users/nodes on the network today and thus the chain is obviously larger since then. So I expect that it would take longer…but how much longer I wonder?

So I'm kind of looking to the community regarding what I should expect, based on your experience with syncing new nodes today. If it's normal that it can take a few days, that's fine; I just need an idea of what normal behavior is so I'm not assuming things are fine when maybe they technically are not.

There is much more data now. Mine ran about 12h… Btw 10GB RAM is just on the edge.

Thanks for the insight. Were you running v1.29?

I forgot to add that this is running on mainnet.

I assume if memory was inadequate it would start getting into swap.
Since I started the nodes there has been no swap utilization.
Seems like memory has been holding at around 6.5 gig.
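For anyone who wants to check the same thing, a quick way to confirm swap is untouched (standard Linux tools, nothing cardano-specific):

```shell
# Show memory and swap usage in human-readable units.
free -h

# List active swap devices and how much of each is in use
# (prints nothing if no swap is configured).
swapon --show
```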

Maybe v1.29 is better at managing memory than previous releases?
Or are you suggesting it will run faster if the server had more memory available?

1.29.0 actually introduced a memory usage increase.

There are a few related topics on the forum; try searching for them.

1.29.0 is by default configured to use 2 CPUs. There are ways to configure it to use more CPUs with the RTS and -N flags. I think the bottleneck is the processing power while syncing. IIRC using htop you should see some cores running 100%.
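For anyone unfamiliar with GHC's runtime options: if the binary was built with RTS options enabled, the flags go between +RTS and -RTS on the command line. A rough sketch, where all file paths are placeholders for your own node's files, not a working configuration:

```shell
# Sketch: ask the GHC runtime for 4 capabilities (OS threads) with -N4.
# All paths below are placeholders for your own node's files.
cardano-node run \
  --topology ./mainnet-topology.json \
  --config ./mainnet-config.json \
  --database-path ./db \
  --socket-path ./db/node.socket \
  --host-addr 0.0.0.0 \
  --port 3001 \
  +RTS -N4 -RTS
```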

Usually when RAM becomes the problem, the process will be killed by the OOM killer, which you can observe in the logs.
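One way to check for that, assuming a typical Linux box with journald (the commands are standard, but the exact log wording varies by kernel):

```shell
# Search the kernel log for OOM-killer activity.
journalctl -k | grep -iE "out of memory|oom-kill"

# Alternative without journald (may need root on some systems):
dmesg | grep -i "killed process"
```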

My nodes run at about 8.2GB RAM usage currently.


The server is doing nothing but running cardano-node.
The following is what I normally see on average, but sometimes a core will bump up to around 98% for a few seconds and then drop back down to around 50%; it can be any of the four cores, and usually only one at a time. I did notice that after stopping and starting the service at one point, a single core went to 100% until the service was fully up and running. My guess is that a cardano-node process is opening the DB (hence the single 100% core) before starting network services, given that this happens for the whole period before the node.socket file is created. After node.socket is created, processing appears to distribute across CPU cores. At run time all cores undulate in similar ranges (30–60% utilization), so they seem to be processing the same type of load.

I haven’t seen any evidence of unreasonably high memory utilization.
What I see in htop would be considered good, healthy memory utilization in my experience with servers. Work is done in memory; what's important for workload performance is that memory utilization never spills into swap.

Sounds like other folks have had issues with high memory utilization, which is not my issue in this case.
Or are you suggesting that maybe it should be more aggressive than it is?

I think I'll look into the RTS and -N flag angle to see if dedicating more than 2 CPUs is a better way to go. For the record, I'm running the pre-compiled v1.29 binaries, so I'm not sure what they were compiled with. I wonder if there is an option to use all available CPUs?

This is basically how it’s looked since cardano-node started syncing.

This is how it looks when cardano-node is not running.

So this kind of leads me to believe/assume the pre-compiled cardano-node binary is using all CPUs.

I’m now at 98% synchronization.

In general it seems like sync is just kind of slowing down as it gets towards the end. Guessing it's related to the volume of data on the blockchain at the given era/block it's processing. Hours ago it was at 96%. Maybe there are resource utilization limits placed on the pre-compiled cardano-node binary to better ensure service uptime? I'm OK with that if it's true, but it would be nice to know if there is a post-compile way to tune the service.

Maybe the pre-compiled binaries are actually optimal?

I'm running Intel(R) Xeon(R) E5-2670 0 @ 2.60GHz CPUs.

It’s an older CPU type but I believe it’s still relevant for this type of simple work load.

I ran cardano-node +RTS --info and it appears as though it's compiled to use 2 CPUs. So maybe I'll try overriding it to 4 CPUs and see what happens.


Looks like I can try overriding cardano-node to tell it to use all available CPUs by not specifying a number on the -N parameter.

export GHCRTS=-N

I'm going to let my node finish its sync as configured.

I may spin up another VM at some point and try doing a new sync with override parameters to see if I can improve sync performance without causing swapping or creating runtime stability issues.

Maybe I should learn a bit more about the Haskell environment.
https://downloads.haskell.org/ghc/latest/docs/html/users_guide/using.html

So I set Environment="GHCRTS=-N" in my cardano-node systemd service file and started it up.
My expectation is that it overrides the -N2 default and causes the service to use all available CPU cores.
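For reference, this is roughly what that looks like as a systemd drop-in, which keeps the packaged unit file itself untouched (the unit name cardano-node.service is an assumption here):

```shell
# Sketch: set GHCRTS via a systemd drop-in instead of editing the unit file.
sudo mkdir -p /etc/systemd/system/cardano-node.service.d
sudo tee /etc/systemd/system/cardano-node.service.d/rts.conf <<'EOF'
[Service]
# -N with no number lets the GHC runtime use all available cores.
Environment="GHCRTS=-N"
EOF
sudo systemctl daemon-reload
sudo systemctl restart cardano-node
```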

After cardano-node did its normal startup peg of a single CPU core, it flipped over and definitely appeared to be running with higher CPU utilization across all CPU cores.

I'm going to run this one for a bit to see if sync finishes up quicker than I've been seeing.
I may also spin up another instance with the same hardware config and start a new node sync to see if there is an improvement in sync performance with the new CPU setting. The higher utilization held at first, but after a while it seemed to drop down to about a 60% average, though with more uniform utilization across cores than I was seeing with the default -N2 parameter. However, I'm seeing higher memory utilization than previously.

I guess to actually know for sure whether there is any benefit to running -N over -N2, I'll need to set up a new cardano-node and see how fast it syncs with -N. I'll post back here if I end up carrying out that experiment.

Thanks for sharing all the findings. It’ll be helpful for others for sure.
