My node has been stuck at status “syncing 100%” for 12 hours. My other node is working fine. I have also already tried restarting the service, but the result is always the same.
Your server time might be out of sync. Just a wild guess. I had the same issue until I fixed the server time so it updates correctly.
I have checked the time and it’s synced.
It’s the same value I have on the BP and on the second relay, which are both working without any problems. I’m running the system on Raspberry Pi 4s with 8 GB RAM.
ok, then… check the time synchronization as @Johann_ADAholycs suggested
I have checked it with `timedatectl status`… and they all show the current time.
It’s always about 400 slots behind the other node.
I don’t get the problem yet, but maybe old-school chrony helps.
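For reference, a minimal chrony setup sketch (assuming a Debian-based system like Raspberry Pi OS; the NTP pool server shown is just an example, swap in whatever your setup uses):

```
# install chrony (it replaces systemd-timesyncd as the time sync daemon)
sudo apt install chrony

# /etc/chrony/chrony.conf – minimal example entry
pool 2.debian.pool.ntp.org iburst

# then verify the clock is actually tracking an NTP source
chronyc tracking
```

`chronyc tracking` should report a small “System time” offset; if it shows the node drifting by seconds, that alone can make a relay lag behind.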
I have now found that I’m getting a lot of this error:

Mar 08 14:04:03 raspberrypi bash: Error: Switchboard’s queue full, dropping log items!
I have set TraceMempool = false in the config file. After that, it was syncing normally again. What I don’t understand is why that happened.
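For anyone else hitting this, the change goes in the node’s JSON config file (e.g. mainnet-config.json; the exact path depends on your setup), then the node needs a restart:

```
{
  "TraceMempool": false
}
```

That key sits alongside the other `Trace*` options already in the config, so it’s usually just flipping an existing `true` to `false` rather than adding a new entry.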
Maybe the node was getting overloaded? I saw that TraceMempool = true could lead to many copies of the same transactions… overloading less resourceful nodes.
Interesting coincidence but seems unlikely - came here with the same issue. 3 relay nodes, 2 stuck syncing at 100% after having been running fine for weeks. No changes to the host but see the same switchboard messages and time checks are ok (using UTC for sanity). The good one is the newest, added over the weekend.
It could possibly be a more fundamental issue then, the fact that there is more than one copy of the same transaction.
I disabled TraceMempool (set to false) and restarted the node. gLiveView shows it’s synced, but some status information is lost.
It’s a clone, also a Pi 4 with 8 GB, running with good free overhead on CPU and memory. Lower: TraceMempool disabled; upper: enabled.
So now I’m wondering what setting TraceMempool to false actually disables.
I did find that someone opened an issue on GitHub asking for more clarification and pointing out these duplicate transactions.
The problem: we don’t have nearly enough documentation to judge the implications of doing this. I was unhappy about turning off Tx tracing and decided to do it on the most affected of our two relays (showing the most duplicates and the highest load average). Then last epoch our second relay also began showing the floods of duplicate transactions, with the same increase in CPU load. From watching this for a few months (see forum), I can see that these duplicates are becoming more prevalent in the Cardano network.
So I’m not sure if I should change that yet. I did see it stay synced today for about 20 minutes, and bouncing one of my relays seemed to get things going again. I didn’t like that I didn’t understand exactly what was happening.
Since deactivating TraceMempool, the node has been running without problems.
I see what you mean now, from the GitHub issue and your screenshot. After doing it myself, the “Processed TX” and “Mempool TX/Bytes” numbers go away. I can see why this would not be desired, but it helps performance. I think I can live with having one relay with it off. Thanks, all.