Active Stake 0

Hello everyone,

I hope you’re all doing well. I recently noticed that my Active Stake is at 0 on Cexplorer, and I don’t understand why; the value looks correct on pooltool and adastat.
Has anyone else experienced this issue, or does anyone have insight into what might be causing it? I’m open to any information or suggestions that could help me understand and resolve this situation.

Ticker: ISH
Name: IndieStakeHub

Thanks a lot!

For information:

cardano-cli query leadership-schedule \
--mainnet \
--genesis $NODE_HOME/shelley-genesis.json \
--stake-pool-id $(cat $NODE_HOME/stakepoolid.txt) \
--vrf-signing-key-file $NODE_HOME/vrf.skey

returns no error.

I saw in the post below that, without any stake, the command should instead fail with a “has no stake” error:
Command failed: query leadership-schedule Error: The stake pool: “…pool_id…” has no stake - Staking & Delegation / Setup a Stake Pool - Cardano Forum

However, no leadership slots are scheduled:

SlotNo                          UTC Time

I also audited my node configuration using the CoinCashew guide “Auditing Your nodes configuration”.
No problems were detected.


It takes 2 snapshots for live stake to become active.
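In epoch terms (using the delegation epoch mentioned in this thread), the two-snapshot delay works out as follows:

```shell
# Stake delegated during epoch N is snapshotted at the N/N+1 boundary ("mark"),
# becomes "set" in N+1, and counts as active stake ("go") from epoch N+2.
delegation_epoch=445
active_epoch=$((delegation_epoch + 2))
echo "Delegated in epoch ${delegation_epoch} -> active stake from epoch ${active_epoch}"
# → Delegated in epoch 445 -> active stake from epoch 447
```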


Hi Alex,

The first delegators came in Epoch 445, so active stake should not have been 0 since Epoch 447.
Furthermore, Cexplorer is the only explorer showing 0 active stake.
I just want to be sure that it’s a Cexplorer problem and not a configuration issue on my side.
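One way to bypass the explorers entirely is to ask the node itself with cardano-cli query stake-snapshot --stake-pool-id <pool_id> --mainnet and look at the mark/set/go stake it reports. Here is a minimal sketch of reading such output with jq; the JSON below is made-up sample data, and the exact field names vary between cardano-cli versions (newer ones nest them under "pools"):

```shell
# Made-up sample of a stake-snapshot result, saved for parsing:
cat > /tmp/snapshot.json <<'EOF'
{ "poolStakeMark": 55000000000, "poolStakeSet": 55000000000, "poolStakeGo": 0 }
EOF

# "go" is the stake active in the current epoch; "set" activates next epoch and
# "mark" the epoch after. Non-zero mark/set with a zero go means the stake is
# still waiting out the two-snapshot delay.
jq '{mark: .poolStakeMark, set: .poolStakeSet, go: .poolStakeGo}' /tmp/snapshot.json
```

If the node itself reports non-zero go stake while an explorer shows 0, the problem is on the explorer’s side.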

CExplorer: (screenshot)

pooltool: (screenshot)

adastat: (screenshot)

Thanks for your time.

It’s as if I had redelegated to myself…


Check next epoch

It’s just CExplorer, and even CExplorer shows the same live stake and that you have fulfilled your pledge. Quite surely nothing to worry about.

OK, thanks! I’ll see next epoch whether the problem is solved.
However, I have no TX processed. I’m looking into what’s happening, but I guess there is no link between the two. Any idea?

TraceMempool is set to true, and TraceMux too.

I already ran topologyUpdater and I have peers in the topology.json file: 25 peers on the relay node and 2 on the BP.
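To double-check those counts, jq (which this setup already relies on) can count the Producers entries; the miniature topology file below is a made-up stand-in for the real one:

```shell
# Made-up miniature topology.json for illustration; point jq at your real file.
cat > /tmp/topology.json <<'EOF'
{ "Producers": [
    { "addr": "relay1.example.com", "port": 3001, "valency": 1 },
    { "addr": "relay2.example.com", "port": 3001, "valency": 1 }
] }
EOF

# Number of out-peers the node will try to connect to:
jq '.Producers | length' /tmp/topology.json
# → 2
```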

I got 407 total TX, then Pending TX returned to 0.
Is that normal?

It’s normal… as long as TX is incrementing.

(Screenshot, 2023-11-14 at 06:05)

Is it possible that this is related to the following error?
cardano.node.IpSubscription:Error:208 *********** Failed to start all required subscriptions
Restarting Subscription after 1.001310173s desired valency 14 current valency 13

I have errors for some relays: exception: Network.Socket.connect: <socket: 81>: timeout (Connection timed out)

FYI: CUSTOM_PEERS is not defined in topologyUpdater.

Anyone?
Always the same. Sometimes TX increases for a few minutes and then returns to 0 for hours…


It’s fine; you have only 5 IN peers (probably 1 of them is the BP). When you have more IN peers (in time), you will process more TXs.


FYI, there was a problem.
The topology updater script used the server’s IPv6 address and not the IPv4 one.
I modified it to force the use of IPv4:

curl -4 -s "${CNODE_PORT}&blockNo=${blockNo}&valency=${CNODE_VALENCY}&magic=${NWMAGIC}${T_HOSTNAME}" | tee -a $CNODE_LOG_DIR/topologyUpdater_lastresult.json

After 4 hours (4 executions of the new topology registration), everything is OK.
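To check up front which address family a hostname resolves to (and therefore whether curl might pick IPv6), getent is handy; localhost is used here only as a stand-in for the updater’s hostname:

```shell
# IPv4 addresses the resolver returns for a host (replace localhost with the
# topology updater's hostname):
getent ahostsv4 localhost | head -n 1

# IPv6 side; if this resolves and curl prefers it, forcing curl -4 (as above)
# keeps the registration on IPv4. The guard avoids a failure on hosts
# without IPv6.
getent ahostsv6 localhost | head -n 1 || true
```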

I’ll take this opportunity to share a script I created during my troubleshooting to check that I can reach all the relays in the topology:

Make sure you have the jq program installed on your system, as it is used here to extract IP addresses and ports from the JSON file.

sudo nano

Add the lines below:



#!/bin/bash
# Topology file to test (adjust the path to your setup)
filename="topology.json"

# Parse JSON and iterate through Producers
ip_addresses=($(jq -r '.Producers[].addr' "$filename"))
ports=($(jq -r '.Producers[].port' "$filename"))

for ((i=0;i<${#ip_addresses[@]};++i)); do
  echo "Testing connection to ${ip_addresses[i]}:${ports[i]}..."
  timeout 3 bash -c "</dev/tcp/${ip_addresses[i]}/${ports[i]}" && echo "Connection successful" || echo "Connection failed"
  echo "-----------------------------"
done
Save this script in a file, give it execution permissions and then run it:

chmod +x

This will give you information on the success or failure of the connection to each IP address and port specified in the topology.json file.