ERROR: ledger dump failed/timed out


Ok, something is killing your node… how much memory do you have?
Can you restart the server?

It’s late here, I should sleep. I will check and try tomorrow.
Appreciate your help and dedication


I have the exact same issue. sudo reboot causes the server to re-sync, but selecting Pool -> Show -> _ARM__Tech Donations triggers the error message again.

Hosted at DigitalOcean, 4 GB RAM / 80 GB SSD

Hello,

Also, are you using cncli? Try stopping it, or try starting it with: nice -n 19 cncli


Andrew Westberg, BCSH (@amw7), Feb 11:

Pro Tip: If you’re running this on the same machine as a node, be nice to your CPU and run it with: nice -n 19 cncli …
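The tip above is easy to sanity-check anywhere: nice -n 19 launches a command at the lowest scheduling priority, and nice with no arguments prints the niceness the child actually inherited.

```shell
# Run a child shell at niceness 19 and have it report its own niceness.
# (`nice` with no arguments prints the current nice value.)
nice -n 19 sh -c 'nice'   # prints 19 when the parent shell runs at niceness 0
```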

Hi there,

I have the exact same issue. If I run ‘Show’ under CNTools, it gives the error
‘ERROR: ledger dump failed/timed out
increase timeout value in cntools.config’ and restarts the node. I increased the timeout from 300 to 3600; it didn’t change a thing.

I am also running on DigitalOcean with 4 GB RAM / 80 GB SSD.

My educated guess would be insufficient hardware, as I have had many difficulties with this tier of servers… most likely memory.

Might be a noob question but is the pool still running if we can’t execute Show under Cntools? Do we need to launch that command or can it be ignored?

@Alexd1985 - I’m not running cncli, just cardano-cli.

Of course it should work! That command only checks the status of your pool

I see other people have the same issue with Pool -> Show not working.

Oh fantastic - is there any issue with never launching that command?

Also, my pool is registered, but it’s not appearing on adapools.org - is there anything in particular that needs to be done, or are we just waiting for a refresh on their end?

You searched by ticker; can you search by pool ID?
Cheers,

Gotcha - yep, searching by pool ID pulls up the page, but none of the other info has populated (name, website, etc.) - is there any way to update those fields?

Could be a problem with the metadata… inside the metadata file you added info like the pool name, ticker, pool description, relay, etc.

Check your metadata
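For anyone checking: the pool metadata file is a small JSON document served at the URL registered in your pool certificate, and explorers read their display fields from it. A minimal sketch with placeholder values:

```shell
# Write a minimal pool metadata file (all values below are placeholders;
# use your own pool's details and host the file at your registered URL).
cat <<'EOF' > poolmeta.json
{
  "name": "Example Pool",
  "description": "An example stake pool",
  "ticker": "EXMPL",
  "homepage": "https://example.com"
}
EOF
cat poolmeta.json
```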

I’m getting exactly the same error, after the pool modification.

The change I made was just switching the owner/pledge address to a mnemonic wallet (to allow me to vote with my pledge). After this change I noticed a big increase in memory/CPU consumption every time the cnode-cncli-leaderlog.service script runs.

After some investigation, I noticed that the memory/CPU increase results in the process shutting down (probably killed by the system), and it takes the node down with it. Since cnode.service has auto-restart, the node starts again, the leaderlog runs again, and it goes down again. Basically, this happens over and over. (image in attachment)
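If the process really is being killed by the kernel OOM killer, as suspected above, the kernel log will say so explicitly. A quick check (may need root, and the exact log wording varies by kernel version):

```shell
# Look for OOM-killer entries in the kernel log
sudo dmesg -T | grep -i 'killed process' | tail -n 5

# or, on systemd machines:
# journalctl -k --since "1 hour ago" | grep -i oom
```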

My temporary solution was to disable cnode-cncli-leaderlog.service; now the pool runs smoothly like it did before the pool modification mentioned above.
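For anyone wanting to replicate this workaround: assuming the guild-operators default unit name (check yours with systemctl list-units 'cnode*'), the disable/re-enable commands would look like this:

```shell
# Stop and disable the leaderlog service (unit name assumed from guild-operators)
sudo systemctl disable --now cnode-cncli-leaderlog.service

# To turn it back on later:
# sudo systemctl enable --now cnode-cncli-leaderlog.service
```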

It seems the leaderlog script gets stuck in some process that causes the huge memory/CPU increase, probably related to the modification we made to the pool.

I hope that helps to find out what’s going on. Thanks in advance.

I have 2 cores and 4 GB of memory on this node; my pool is the HYPE pool.


Check this tweet

Didn’t work for me; I already updated cncli to version 1.3.1 and the issue is exactly the same.
Regarding the nice command, I don’t understand how to use it with the systemd services provided by CNTools. Actually, I already checked the NI column in htop, and I think it is set to 19 by default for cncli.
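On the NI question: you can confirm the niceness of any running process directly with ps, no htop needed. A sketch using a throwaway sleep in place of the real cncli PID:

```shell
# Start a throwaway low-priority process and read back its niceness with ps.
nice -n 19 sleep 30 &
pid=$!
ni=$(ps -o nice= -p "$pid" | tr -d ' ')
echo "$ni"   # 19 for a process started with nice -n 19 from a niceness-0 shell
kill "$pid"
```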


Marco - I have the same issue… I had to disable the automated leaderlog service as it was crashing my producer node every 60 minutes.

When I attempt to run ./cncli.sh leaderlog manually, it runs for a long time but always errors out with ### ERROR: ledger dump failed/timed out. The outcome is the same even when I run: nice -n 19 ./cncli.sh leaderlog

I also increased my swap file size to 10 GB today to see if that would help, but it did not… I am not sure how to resolve this, but please let us know if you come across a solution.

Update! I got it to work… Marco, hopefully this works for you. I have not tried re-starting the automated leaderlog service, but for now I am happy with this result:

  1. Utilize a swap file if you are not already. I increased mine from 4 GB to 10 GB today.
  2. Edit cncli.sh and remove the “#” for the timeout line and replace the 300 with 3600
  3. At command line, run: ./cncli.sh leaderlog
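The three steps above roughly translate to the following commands; the paths and the timeout variable name are the guild-operators defaults as I understand them, so verify them against your own install before running anything:

```shell
# 1. Grow the swap file to 10 GB (assumes swap lives in /swapfile)
sudo swapoff /swapfile
sudo fallocate -l 10G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile && sudo swapon /swapfile

# 2. Uncomment the timeout line in cncli.sh and raise 300 -> 3600
#    (variable name assumed; check the commented line in your cncli.sh)
sed -i 's/^#TIMEOUT_LEDGER_STATE=300/TIMEOUT_LEDGER_STATE=3600/' "$CNODE_HOME/scripts/cncli.sh"

# 3. Run the leaderlog step manually
cd "$CNODE_HOME/scripts" && ./cncli.sh leaderlog
```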

Hope that helps!

Yes, for me it’s clear that it is a memory issue.

2 cores and 4 GB of memory are not enough anymore (to run the leaderlog service). It seems that since the last epoch the memory consumption of that process has increased.

My solution was to run the leaderlogs on another machine with 4 cores / 8 GB, and it works properly without any change to the cncli parameters.
