Processed TX is 0

Hi,
This has been raised before in other posts; I’ve checked those, but I don’t think my issue is the same. I’ve tried to check my configuration against the solutions provided, with no luck.

I’ve recently set up my pool and can see incoming and outgoing connections from my relay, plus one incoming/outgoing connection to the relay from the producer. However, so far I haven’t seen the Processed Tx number increase on either the relay or the producer. Here is what I’ve checked up until now (a scripted version of the basic checks is sketched after the list):

  • The relay port is open to the public.
  • The producer node is open to the relay (I’ve even disabled the firewall to rule out a firewall misconfiguration).
  • The relay IP and port are listed in the topology file at https://explorer.mainnet.cardano.org/relays/topology.json
  • The custom peers variable of my topologyUpdater has both my BP and relays-new.cardano-mainnet.iohk.io:3001
  • The topologyUpdater response is { "resultcode": "204", "datetime":"2020-12-21 14:25:06", "clientIp": "138.197.162.119", "iptype": 4, "msg": "glad you're staying with us" }
  • Both producer and relay are started with a host address of 0.0.0.0 and port 3001
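
For reference, the reachability and topology checks above can be scripted roughly like this (the IP and port are placeholders for your relay’s actual values):

    RELAY_IP=203.0.113.10   # placeholder: your relay's public IP
    RELAY_PORT=3001
    # is the relay port reachable from outside?
    nc -zv "$RELAY_IP" "$RELAY_PORT"
    # is the relay listed in the public topology?
    curl -s https://explorer.mainnet.cardano.org/relays/topology.json | grep "$RELAY_IP"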

Is there anything else that might cause the processed transactions to stay at zero? I’ve noticed the following in the relay logs, but I’m not sure if it’s a problem or how to go about fixing it:

[String "Failed to start all required subscriptions",String "[52.2.141.66:6001,35.184.132.164:3001,52.37.4.48:3001,73.92.136.158:3003,18.133.104.22:6000,51.195.91.118:3001,139.59.146.37:7777,85.214.165.139:6000,78.46.181.123:3011,167.86.73.62:6001,86.126.80.36:3001,34.80.179.174:6000,40.82.212.203:6000]",String "WithIPList SubscriptionTrace",String "LocalAddresses {laIpv4 = Just 0.0.0.0:0, laIpv6 = Nothing, laUnix = Nothing}"]

I’ve checked a few of those IPs and found that the associated port is open.
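
For example, taking the first address from the failed-subscriptions log above:

    nc -zv 52.2.141.66 6001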

What else should I check?

Thanks!!


Can you start ./gLiveView on your relay and show me the output?

What is your pool ticker?

thx

[gLiveView screenshot]
The pool ticker is TASP

Thx, also can you post the contents of your configuration.json file?

Sure:

{
  "ApplicationName": "cardano-sl",
  "ApplicationVersion": 1,
  "ByronGenesisFile": "mainnet-byron-genesis.json",
  "ByronGenesisHash": "5f20df933584822601f9e3f8c024eb5eb252fe8cefb24d1317dc3d432e940ebb",
  "LastKnownBlockVersion-Alt": 0,
  "LastKnownBlockVersion-Major": 3,
  "LastKnownBlockVersion-Minor": 0,
  "MaxKnownMajorProtocolVersion": 2,
  "Protocol": "Cardano",
  "RequiresNetworkMagic": "RequiresNoMagic",
  "ShelleyGenesisFile": "mainnet-shelley-genesis.json",
  "ShelleyGenesisHash": "1a3be38bcbb7911969283716ad7aa550250226b76a61fc51cc9a9a35d9276d81",
  "TraceBlockFetchClient": false,
  "TraceBlockFetchDecisions": true,
  "TraceBlockFetchProtocol": false,
  "TraceBlockFetchProtocolSerialised": false,
  "TraceBlockFetchServer": false,
  "TraceChainDb": true,
  "TraceChainSyncBlockServer": false,
  "TraceChainSyncClient": false,
  "TraceChainSyncHeaderServer": false,
  "TraceChainSyncProtocol": false,
  "TraceDNSResolver": true,
  "TraceDNSSubscription": true,
  "TraceErrorPolicy": false,
  "TraceForge": true,
  "TraceHandshake": false,
  "TraceIpSubscription": true,
  "TraceLocalChainSyncProtocol": false,
  "TraceLocalErrorPolicy": true,
  "TraceLocalHandshake": false,
  "TraceLocalTxSubmissionProtocol": false,
  "TraceLocalTxSubmissionServer": false,
  "TraceMempool": true,
  "TraceMux": false,
  "TraceTxInbound": false,
  "TraceTxOutbound": false,
  "TraceTxSubmissionProtocol": false,
  "TracingVerbosity": "NormalVerbosity",
  "TurnOnLogMetrics": true,
  "TurnOnLogging": true,
  "defaultBackends": [
    "KatipBK"
  ],
  "defaultScribes": [
    [
      "StdoutSK",
      "stdout"
    ]
  ],
  "hasEKG": 12788,
  "hasPrometheus": [
    "127.0.0.1",
    12798
  ],
  "minSeverity": "Warning",
  "options": {
    "mapBackends": {
      "cardano.node-metrics": [
        "EKGViewBK"
      ],
      "cardano.node.BlockFetchDecision.peers": [
        "EKGViewBK"
      ],
      "cardano.node.ChainDB.metrics": [
        "EKGViewBK"
      ],
      "cardano.node.Forge.metrics": [
        "EKGViewBK"
      ],
      "cardano.node.metrics": [
        "EKGViewBK"
      ],
      "cardano.node.resources": [
        "EKGViewBK"
      ]
    },
    "mapSubtrace": {
      "#ekgview": {
        "contents": [
          [
            {
              "contents": "cardano.epoch-validation.benchmark",
              "tag": "Contains"
            },
            [
              {
                "contents": ".monoclock.basic.",
                "tag": "Contains"
              }
            ]
          ],
          [
            {
              "contents": "cardano.epoch-validation.benchmark",
              "tag": "Contains"
            },
            [
              {
                "contents": "diff.RTS.cpuNs.timed.",
                "tag": "Contains"
              }
            ]
          ],
          [
            {
              "contents": "#ekgview.#aggregation.cardano.epoch-validation.benchmark",
              "tag": "StartsWith"
            },
            [
              {
                "contents": "diff.RTS.gcNum.timed.",
                "tag": "Contains"
              }
            ]
          ]
        ],
        "subtrace": "FilterTrace"
      },
      "benchmark": {
        "contents": [
          "GhcRtsStats",
          "MonotonicClock"
        ],
        "subtrace": "ObservableTrace"
      },
      "cardano.epoch-validation.utxo-stats": {
        "subtrace": "NoTrace"
      },
      "cardano.node-metrics": {
        "subtrace": "Neutral"
      },
      "cardano.node.metrics": {
        "subtrace": "Neutral"
      }
    }
  },
  "rotation": {
    "rpKeepFilesNum": 10,
    "rpLogLimitBytes": 5000000,
    "rpMaxAgeHours": 24
  },
  "setupBackends": [
    "KatipBK"
  ],
  "setupScribes": [
    {
      "scFormat": "ScText",
      "scKind": "StdoutSK",
      "scName": "stdout",
      "scRotation": null
    }
  ]
}

Try this config…

https://charity-pool.ro/config.json

Back up your config file first and then copy this one from my source… then restart your node.

"MaxConcurrencyDeadline": 2,

  • 2 is recommended for the BP and 4 for a relay if you have sufficient resources (if not, leave it at 2, the default)

In your config file you have set:

"TraceLocalTxSubmissionProtocol": false,
"TraceLocalTxSubmissionServer": false,

  • These need to be true (see the jq sketch below)

Be careful with these 2 lines; if you have different file paths/names you will need to edit them:

"ByronGenesisFile": "/opt/cardano/cnode/files/byron-genesis.json",
"ShelleyGenesisFile": "/opt/cardano/cnode/files/genesis.json",
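
If you prefer not to hand-edit, the two trace flags (and MaxConcurrencyDeadline) can be flipped with a jq one-liner along these lines; the file name config.json is an assumption, so adjust the path and back up first:

    # enable the two Tx submission traces; add/set MaxConcurrencyDeadline
    # (2 is the default; 4 for a relay with spare resources, per the advice above)
    jq '.TraceLocalTxSubmissionProtocol = true
        | .TraceLocalTxSubmissionServer = true
        | .MaxConcurrencyDeadline = 2' config.json > config.json.new \
      && mv config.json.new config.json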


woot woot!

Thanks man, much appreciated. I’ve spent way too much time trying to figure this out. Now, I do have to copy the same config to my BP, correct? It’s currently at zero processed Tx while my relay seems to be kicking.


Yes… you need to copy/edit it on all nodes, including the BP

Folks,

I also don’t see processed/mempool Tx on the Relay

[gLiveView screenshot]

gLiveView uses these metrics

    tx_processed=$(jq '.cardano.node.metrics.txsProcessedNum.int.val //0' <<< "${data}")
    mempool_tx=$(jq '.cardano.node.metrics.txsInMempool.int.val //0' <<< "${data}")
    mempool_bytes=$(jq '.cardano.node.metrics.mempoolBytes.int.val //0' <<< "${data}")

which I don’t have …

core@cardano02:~$ PROM_HOST=127.0.0.1
core@cardano02:~$ PROM_PORT=12798
core@cardano02:~$ curl -s http://${PROM_HOST}:${PROM_PORT}/metrics | sort
cardano_node_ChainDB_metrics_blockNum_int 5127061
cardano_node_ChainDB_metrics_density_real 5.037783375314862e-2
cardano_node_ChainDB_metrics_epoch_int 237
cardano_node_ChainDB_metrics_slotInEpoch_int 395187
cardano_node_ChainDB_metrics_slotNum_int 17415987
cardano_node_metrics_Mem_resident_int 862674944
cardano_node_metrics_RTS_gcLiveBytes_int 580702416
cardano_node_metrics_RTS_gcMajorNum_int 19
cardano_node_metrics_RTS_gcMinorNum_int 467
cardano_node_metrics_RTS_gcticks_int 750
cardano_node_metrics_RTS_mutticks_int 1691
cardano_node_metrics_Stat_cputicks_int 2441
cardano_node_metrics_Stat_threads_int 11
cardano_node_metrics_nodeStartTime_int 1608981868
...

nor do they show up in the EKG

core@cardano02:~$ EKG_HOST=127.0.0.1
core@cardano02:~$ EKG_PORT=12788
core@cardano02:~$ curl -s -H 'Accept: application/json' http://${EKG_HOST}:${EKG_PORT}
{"cardano":{"node":{"metrics":{"Mem":{"resident":{"int":{"type":"g","val":862674944}}},"nodeStartTime":{"int":{"type":"g","val":1608981868}},"Stat":{"cputicks":{"int":{"type":"g","val":2469}},"threads":{"int":{"type":"g","val":11}}},"RTS":{"gcticks":{"int":{"type":"g","val":750}},"gcMinorNum":{"int":{"type":"g","val":468}},"gcMajorNum":{"int":{"type":"g","val":19}},"mutticks":{"int":{"type":"g","val":1719}},"gcLiveBytes":{"int":{"type":"g","val":581219920}}}},"ChainDB":{"metrics":{"blockNum":{"int":{"type":"g","val":5127065}},"slotInEpoch":{"int":{"type":"g","val":395250}},"slotNum":{"int":{"type":"g","val":17416050}},"density":{"real":{"type":"l","val":"5.037900874635569e-2"}},"epoch":{"int":{"type":"g","val":237}}}}}},"iohk-monitoring version":{"type":"l","val":"0.1.10.1"},"ekg":{"server_timestamp_ms":{"type":"c","val":1608982368645}},"rts":{"gc":{"gc_cpu_ms":{"type":"c","val":7506},"mutator_wall_ms":{"type":"c","val":499061},"mutator_cpu_ms":{"type":"c","val":17195},"gc_wall_ms":{"type":"c","val":3830},"wall_ms":{"type":"c","val":502890},"bytes_copied":{"type":"c","val":1924948120},"init_wall_ms":{"type":"c","val":1},"init_cpu_ms":{"type":"c","val":2},"max_bytes_used":{"type":"g","val":378445016},"max_bytes_slop":{"type":"g","val":5645200},"num_bytes_usage_samples":{"type":"c","val":19},"peak_megabytes_allocated":{"type":"g","val":778},"cpu_ms":{"type":"c","val":24701},"current_bytes_used":{"type":"g","val":581219920},"bytes_allocated":{"type":"c","val":13819042136},"par_max_bytes_copied":{"type":"g","val":1578579024},"current_bytes_slop":{"type":"g","val":8616368},"cumulative_bytes_used":{"type":"c","val":853551384},"num_gcs":{"type":"c","val":487},"par_tot_bytes_copied":{"type":"g","val":1924948120},"par_avg_bytes_copied":{"type":"g","val":1924948120}}}}

I thought I had the relevant traces enabled …

  "TraceBlockFetchClient": false,
  "TraceBlockFetchDecisions": false,
  "TraceBlockFetchProtocol": false,
  "TraceBlockFetchProtocolSerialised": false,
  "TraceBlockFetchServer": false,
  "TraceChainDb": true,
  "TraceChainSyncBlockServer": false,
  "TraceChainSyncClient": false,
  "TraceChainSyncHeaderServer": false,
  "TraceChainSyncProtocol": false,
  "TraceDNSResolver": true,
  "TraceDNSSubscription": true,
  "TraceErrorPolicy": true,
  "TraceForge": true,
  "TraceHandshake": false,
  "TraceIpSubscription": true,
  "TraceLocalChainSyncProtocol": false,
  "TraceLocalErrorPolicy": true,
  "TraceLocalHandshake": false,
  "TraceLocalTxSubmissionProtocol": true,
  "TraceLocalTxSubmissionServer": true,
  "TraceMempool": true,
  "TraceMux": false,
  "TraceTxInbound": true,
  "TraceTxOutbound": true,
  "TraceTxSubmissionProtocol": true,
  "TracingVerbosity": "NormalVerbosity",
  "TurnOnLogMetrics": true,
  "TurnOnLogging": true,

Also strange that I never see incoming connections from peers. The node is generally accessible, however …

$ nc -zv 35.239.77.33 3001
Connection to 35.239.77.33 port 3001 [tcp/redwood-broker] succeeded!

I assume that when the Relay can’t see Txs, the BlockProducer won’t be able to produce blocks either.
Is the Tx issue perhaps related to the zero In peers?

So, are you running the topologyUpdater script on your relays?

Do you have peers in your topology.json file?

Are your relays’ IPs present here?
https://explorer.mainnet.cardano.org/relays/topology.json

Yes, my relay is there

$ curl -s https://explorer.mainnet.cardano.org/relays/topology.json | grep 35.239.77.33
      "addr": "35.239.77.33",

I’m not running the topologyupdater script because I thought it was outdated.

I’m wondering whether I missed something in the config to the extent that the Tx metrics don’t show up. Here is the diff to the default config that I actually got from your node.

You will need to run topologyUpdater once per hour, otherwise your nodes will be declared offline (this is only needed on relay nodes)

I checked your diff, you should set them to true…

Your problem is that you haven’t run the topologyUpdater script… set it up in crontab or as a systemd service
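
For example, a crontab entry along these lines runs it hourly (the script path shown is the CNTools default and is an assumption; adjust to your install):

    # run topologyUpdater once per hour, at minute 33
    33 * * * * bash /opt/cardano/cnode/scripts/topologyUpdater.sh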

After it has run 4 times (3 hours), restart your relay and you should see in peers and also processed Tx

I looked at that config and copied the most pertinent entries. Everything related to Txs should be enabled

Is this the updater script you are talking about?


Yes, that is the script; if you are using CNTools, you should have it in the scripts folder…

#CUSTOM_PEERS="None"

should be edited like this

CUSTOM_PEERS="Your_BP_IP:port"
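
For example, with a placeholder BP IP (the CNTools script takes pipe-separated host:port entries):

    CUSTOM_PEERS="10.0.0.1:3001|relays-new.cardano-mainnet.iohk.io:3001"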

Ok, I got the "nice to meet you" response:

{ "resultcode": "201", "datetime":"2020-12-26 12:17:32", "clientIp": "35.239.77.33", "iptype": 4, "msg": "nice to meet you" }

… and we assume that after a while we’ll be seeing incoming peers?

What about those missing Tx metrics? Would you have an idea of what’s going wrong?

Something must be preventing these metrics - under the hood it may all be working fine. I’m still worried that it is not. My pool ASTOR is supposed to be participating in the next epoch.

When a fresh Relay node is started, is that supposed to show numbers for these Tx metrics?

3 times you will see "nice to meet you"… the 4th time you should see "glad you're staying with us" (this is the desired message; you must NOT stop the topologyUpdater script after that)…

Your issue will be fixed in 3 hours…

When a fresh Relay node is started, is that supposed to show numbers for these Tx metrics?
First the node should be 100% synced; after that the topologyUpdater should run 4 times, and then you should see the Tx count incrementing…
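
To check that the node is fully synced, you can query the tip and compare it against a public explorer; with cardano-node 1.24.x something like this should work (the socket path is a CNTools-style assumption, adjust to your setup):

    # cardano-cli needs to know where the node's socket lives
    export CARDANO_NODE_SOCKET_PATH=/opt/cardano/cnode/sockets/node0.socket
    cardano-cli query tip --mainnet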

Also review your BP

… and the updater should run in a cron I suppose. How often should it run? Only on the relay, right?

Unfortunately this is not documented here

Yes, only on the relay, and set it up in crontab to run once per hour

You can find a how-to here:

This doc talks about the testnet not having a P2P network module

Since the test network has to get along without the P2P network module for the time being, it needs static topology files.

and if I understand correctly, you’re saying this is also true for the 1.24.2 mainnet?

Yes… it’s still valid for mainnet as well