Anyone familiar with db_sync? Having issues since node update to 1.35.5

Hello -

My db_sync stopped syncing after I updated my node to 1.35.5. Here is the error I am seeing in the logs:

tail -10f $CNODE_HOME/logs/dbsync.json
{"app":,"at":"2023-02-02T14:32:49.86Z","data":{"kind":"LogMessage","message":"Checked 5000000 Datum"},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:34:46.56Z","data":{"kind":"LogMessage","message":"Checked 6000000 Datum"},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:36:47.51Z","data":{"kind":"LogMessage","message":"Checked 7000000 Datum"},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:37:51.96Z","data":{"kind":"LogMessage","message":"Found 688238 Datum with mismatch between bytes and hash."},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:37:51.96Z","data":{"kind":"LogMessage","message":"Trying to find RedeemerData with wrong bytes"},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:37:52.35Z","data":{"kind":"LogMessage","message":"There are 82315 RedeemerData. Need to scan them all."},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:37:56.00Z","data":{"kind":"LogMessage","message":"Found some wrong values already. The oldest ones are (hash, bytes): [("b3d5894a64d4b5718c2a1b0c32ba302319b983647f9c2928c115d94c6e63ab72","d87b80"),("883f093b3639a82c7a6aefbc79aedcb7d8a2d561f772d5ab2fe9431a34c5f7a3","d87c80"),("f7f2f57c58b5e4872201ab678928b0d63935e82d022d385e1bad5bfe347e89d8","d87980"),("bade4f39af999eb49aca23e1c12f5c5bf1a16ebf63a85f7b9e17b4ff3d82886a","d87a80"),("ccb127aef877cfc0abeb3633875fec21dfd357a2319808a634e9c7cd1164b4a2","00")]"},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:37:56.00Z","data":{"kind":"LogMessage","message":"Found 238 RedeemerData with mismatch between bytes and hash."},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:37:56.00Z","data":{"kind":"LogMessage","message":"Starting chainsync to fix Plutus Data. This will update database values in tables datum and redeemer_data."},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}
{"app":,"at":"2023-02-02T14:37:56.00Z","data":{"kind":"LogMessage","message":"Starting fixing Plutus Data At (Block {blockPointSlot = SlotNo 54274449, blockPointHash = 7ba32047219ac6c216c907c6be942e7558a6e19db1304d089db974872f29faed})"},"env":":00000","host":"","loc":null,"msg":"","ns":["db-sync-node"],"pid":"540151","sev":"Info","thread":"54"}

Any ideas?

Hi,

Did you read this?

https://cardano-community.github.io/guild-operators/upgrade/

Cheers,


Aha! I did not… :frowning: Thank you as always, @Alexd1985


@Alexd1985 Same issue here, seems like cardano-db-sync is taking forever to upgrade :frowning:

I stopped dbsync,
recreated the database with the PostgreSQL script from the cardano-db-sync GitHub repo,
downloaded the latest snapshot,
and restored it,
but now dbsync isn't syncing…
According to the logs it seems to be busy with this: "Starting chainsync to fix Plutus Data. This will update database values in tables datum and redeemer_data."
Now, more than 6 hours later, it says it has fixed 400003 Plutus data entries. I'm not sure how much longer it will take. Is this normal behavior? Should I just wait? What would you do? Thanks in advance!

Ohh, yeah… I waited a week for db-sync to sync from 0…
If the services are running, then have patience… dbsync is so… sensitive :smiley:

Were you stuck in the chainsync to fix Plutus Data as well?

I don’t know… I did not monitor the process… :smiley:

Patience is definitely a virtue I will have to work on as I deal with these tools :sweat_smile: Thank you for your answer! :grin:


UPDATE: SO IT IS FINALLY SYNCING!
IIRC, what I did: I made sure to restart the dbsync service, got the snapshot and restored it, and I also made sure everything was tuned in the PostgreSQL config file as suggested in the guild guide. The database-recreation step, i.e. using --recreatedb instead of --createdb when you get to that point in the guild guide, is what I believe saved the day. And THEN being patient.
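For anyone else going through this, here is a rough sketch of that sequence. It assumes guild-style systemd unit names (dbsync.service) and the stock scripts/postgresql-setup.sh from the cardano-db-sync repo; the repo path, snapshot file name, and restore directory are only placeholders for your own setup, and the exact restore options can differ between db-sync versions:

# Sketch only: unit names, paths, and the snapshot file name are examples.
# 1. db-sync must NOT be running while the database is rebuilt
sudo systemctl stop dbsync.service

# 2. Drop and recreate the cexplorer database with the repo script
cd ~/git/cardano-db-sync                # wherever you cloned cardano-db-sync
PGPASSFILE=config/pgpass-mainnet scripts/postgresql-setup.sh --recreatedb

# 3. Restore the downloaded snapshot (file name and temp dir are placeholders)
PGPASSFILE=config/pgpass-mainnet scripts/postgresql-setup.sh \
    --restore-snapshot db-sync-snapshot-schema-13-blockXXXXXXX-x86_64.tgz /tmp/dbsync-restore

# 4. Start db-sync again and let it work (patience required)
sudo systemctl start dbsync.service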

I was able to reassure myself that good stuff was happening and dbsync was actually making progress by checking the logs with this command:

tail -100f $CNODE_HOME/logs/dbsync.json
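If you only care about the human-readable part of each JSON line rather than the whole record, a plain grep filter works as well (just an example, no extra tools required):

# Print only the "message" field from each JSON log line
tail -100f "$CNODE_HOME/logs/dbsync.json" | grep -o '"message":"[^"]*"'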

I was also able to tell that things were happening by checking, in the psql cexplorer terminal, the result of

cexplorer=# SELECT pid, datname, usename, query FROM pg_stat_activity;

which showed db-sync alternating between checking datums, updating them, and committing to the database.
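The same check can be run as a one-shot command from the shell, without opening an interactive psql session (assuming your user can connect to the cexplorer database locally):

# One-shot query against the cexplorer database
psql cexplorer -c "SELECT pid, datname, usename, query FROM pg_stat_activity;"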

A few other important details to keep in mind: make sure the cnode service is running with systemctl status cnode.service, do the same for dbsync with systemctl status dbsync.service, and of course make sure the dbsync service is stopped while you do the recreatedb and snapshot-restore steps. Good luck to anyone going through this hell.
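As a quick checklist (same guild-style unit names assumed as above):

# Both services should be active once the restore is done and dbsync is started again
systemctl status cnode.service
systemctl status dbsync.service

# ...but dbsync must stay stopped while you recreate the DB and restore the snapshot
sudo systemctl stop dbsync.service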


To check the Sync progress of db-sync

To get a rough estimate of how close to fully synced the database is, we can use the time stamps on the blocks as follows:

select
   100 * (extract (epoch from (max (time) at time zone 'UTC')) - extract (epoch from (min (time) at time zone 'UTC')))
      / (extract (epoch from (now () at time zone 'UTC')) - extract (epoch from (min (time) at time zone 'UTC')))
  as sync_percent
  from block ;
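If you prefer to run it from the shell instead of from inside psql, something like this works too (assuming local access to the cexplorer database):

# Run the sync-progress estimate without opening an interactive psql session
psql cexplorer <<'SQL'
select
   100 * (extract (epoch from (max (time) at time zone 'UTC')) - extract (epoch from (min (time) at time zone 'UTC')))
      / (extract (epoch from (now () at time zone 'UTC')) - extract (epoch from (min (time) at time zone 'UTC')))
  as sync_percent
  from block;
SQL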

Yessir, I also use this one to do something similar:

select now () - max (time) as behind_by from block ;
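Same idea as a one-shot shell command (again assuming local access to the cexplorer database):

# How far behind the chain tip the database currently is
psql cexplorer -c "select now () - max (time) as behind_by from block;"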