Running dual block producers to produce forks

With the current implementation you have to do the following:

  • Point your BP-Node to your Relay-Node by adding the relay ip:port in the topology of the BP-Node. This connection will stay active all the time; the BP-Node stays on tip via it.

  • Point your Relay-Node to your BP-Node by adding the bp ip:port in the topology of the Relay-Node. Let's say your BP-Node is listening on port 4001. Now you set up a firewall rule that disallows incoming connections to port 4001. You will see that your Relay-Node tries to connect to your BP-Node but can't establish a connection. Your BP-Node is now in “hot-standby” mode. As soon as you remove/disable that firewall rule and allow incoming connections to port 4001 on your BP-Node, the Relay-Node will connect within a few seconds, and your backup BP-Node is live on the chain and can distribute its blocks. With the firewall active, the BP-Node just stays on tip and always synced. It will generate blocks, but no one will listen to them, so they don't land on-chain.
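The firewall toggle described above might look like this with iptables (a sketch only: iptables, the INPUT chain, and port 4001 are assumptions based on the example above, and the commands need root; adjust for your firewall of choice):

```shell
# Standby mode: block incoming connections to the backup BP's port 4001.
# The BP still dials out to its relay, so it stays on tip, but the relay
# cannot connect in, so the BP's blocks never reach the chain.
iptables -A INPUT -p tcp --dport 4001 -j DROP

# Failover: delete the rule; the relay reconnects within seconds and the
# backup BP's blocks start propagating.
iptables -D INPUT -p tcp --dport 4001 -j DROP
```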

With future nodes that also support P2P bidirectional connections, there will be other methods to enable/disable block production on a node. Most likely via reloading config files on the fly, triggered by signals, without the need to restart the node.


As this cannot be handled at the protocol level right now (there are discussions about introducing penalties for such behaviour), the past has shown that handling it at the social media level works pretty well. SPOs get called out very quickly on the socials; we have had such situations in the past. :slight_smile:

@ATADA @brouwerQ @7.4d4 @Neo_Spank @HeptaSean @Triton-pool
thank you for the help and sorry for causing some trouble.
It was done on purpose: since I knew that it is allowed in Cardano to run several block producers to increase pool uptime and not lose slots, I decided, during the migration to new infrastructure, to keep the old pools running until the migration of all pools was completed.

I was also told that the network doesn't accept blocks from unregistered relays, but that seems wrong, as the registered relays are now connected to only one BP.


Running two active block producers isn’t best practice.

Who told you that, or where did you find that info? And what would be the purpose of a non-registered relay if blocks from it weren't accepted? Also, people or organizations can run relay nodes without running a pool, just to support the network. E.g. IOG now runs some relays to support the network. All Daedalus instances only connect to those relays at this moment, by the way (unless you change its config to explicitly point to another node).


you’re welcome


We are all learning. Especially me. Sometimes there are things that can be done better another way. I appreciate @vitaly-p2p rectifying things quickly. I also respect him/her for creating an account on this forum and responding. Thanks.

I guess the main point is that causing chain forks is a type of Sybil attack. Other staking protocols implement slashing, but Cardano does not. Instead, Cardano relies on its community shifting stake around in order to mitigate such attacks.

Large pool operators get the privilege to make lots of blocks. Small pool operators absolutely cherish every block they get and it hurts if they get an orphan. They will analyse the reasons if they get an orphaned block.

Nevertheless, Cardano's non-custodial staking with no slashing is a winning feature because it unlocks many other advantages.


Actually, it looks as though the problem is not fixed as advertised @vitaly-p2p

Of the blocks produced by “P2P Validator #3” this epoch, 4 out of 6 resulted in forks because your dual block producers produced different blocks. The last fork your pool produced was with your last block 2 hrs ago. So it is still happening.

It is not OK to do this. Please stop.

Wilful disregard that results in deliberate forks is a form of Sybil attack. You now understand the problem, so please stop doing it. The punishment method is community awareness and encouraging people to move their stake. This is not the proper way to run a stake pool in the best interests of the community.


For anyone running cncli-sync it is easy to see which stake pools have produced chain forks by running dual block producers. Here is a SQL query you can run to get a list:

First use sqlite3 to connect to your fully synced cncli database.

sqlite3 cncli.db

Issue the following SQL command:

select c1.block_number, c1.slot_number, c1.pool_id, c1.hash, c2.hash
from chain as c1
join chain as c2
  on c1.block_number = c2.block_number
 and c1.slot_number = c2.slot_number
 and c1.pool_id = c2.pool_id
 and c1.hash < c2.hash
where c1.slot_number >= 59356801
group by c1.block_number, c1.slot_number, c1.pool_id, c1.hash, c2.hash;

If you just cut and paste the SQL command above, once connected to cncli.db, you will get output like:

BlockNo|SlotNo  | Pool ID                                                | First Block Hash                                               | Second Block Hash


  1. The query pulls data from the beginning of epoch 335, since the where clause has "c1.slot_number >= 59356801". Just change this value to whatever time frame you are interested in. You can calculate the slot number for the start of an epoch on the command line with:
    epoch=335; firstslot="$(echo "(($epoch - 208) * 432000) + 4492800 + 1" | bc)"; echo "firstslot=$firstslot"
    (Change the 335 value to the epoch you want.)
  2. The query will only return chain forks that have been received by your node. Not every fork propagates to every node, because when another pool adds a block the protocol will try to select the longest chain. Your individual node may receive the extended fork first and so reject the second block, if its fork has not been extended yet. (Longest chain wins rule.)
  3. Cut and paste the Pool ID values into Pooltool to look up the pool, and click on the block number to see what it received. Pooltool is likely to receive evidence of more forks than your individual node. I have not yet seen a fork that my node received but pooltool did not.
  4. The group by clause in the SQL query just removes some duplication where the cncli database stores 2 copies of the exact same block (with same hash) for some reason. (These are not forks.)

The 4 pools listed in the query output above have all produced forks by running dual block producers this epoch (335). (Only the ones received by my node.) They are:

P2P Validator #3 (P2P3) (4ADA) (4ADA)

If you are the operator of one of these pools, then please stop running with dual block producers and act more professionally as a good pool operator would.

Update Following @HeptaSean comments below:

This post originally used a formula I calculated by working backwards from figures I already had, which ended up with constants for the epoch 209 transition. However, the Shelley hard fork transition was actually at the start of epoch 208. The original formula did produce correct results with the different constants, but it is better to have the proper constants for the Shelley transition instant.

The formula above has already been amended to use the actual Shelley hard fork transition values, as @HeptaSean pointed out. So if you are reading this post in the future, the formula and SQL query above have been amended, and you can just cut and paste them and they will be correct.


Just, because I have looked at how to do that calculation recently for other reasons:

Both calculations are obviously equivalent (except for the +1 in my use case). But I do like in my formula that the constants 208 and 4492800 actually are the beginning of Shelley.
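As a quick sanity check, both anchorings can be evaluated with plain shell arithmetic (epoch 335 is just an example; 4924800 = 4492800 + 432000 is the first slot of epoch 209, which is where the alternative constants come from):

```shell
# First slot of an epoch, computed two equivalent ways:
# anchored at epoch 208 (Shelley start, absolute slot 4492800)
# and anchored one epoch later at epoch 209 (absolute slot 4924800).
epoch=335
a=$(( (epoch - 208) * 432000 + 4492800 + 1 ))
b=$(( (epoch - 209) * 432000 + 4924800 + 1 ))
echo "first slot of epoch $epoch: $a (cross-check: $b)"
```

Both expressions evaluate to 59356801 for epoch 335, matching the constant used in the SQL query above.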


yes, it makes more sense mathematically… since these are the slope & intercept of a line… to choose the beginning of Shelley as the origin (since the slope of any line before that would be different). :nerd_face:



YOYOGI POOL (YYG) seems to be a regular offender this epoch. Out of its 6 blocks thus far, 3 have produced forks received at my node. (4ADA) is still producing forks too.

Note that the only way two blocks with different hash values can be produced by the same pool for the same slot is by running dual block producers. It is not supposed to happen at all.

I think that any other pool operator who gets an orphaned block caused by one of these forks has a right to contact these pools and ask to be compensated for their lost block. I say this because what these pools are doing is wrong and it is deliberate. It is a form of Sybil attack on the network and it is anti-Cardano.

They are attempting to increase their rewards while simultaneously, and deliberately, reducing rewards for other pools.

Please bring attention to this thread so that the community becomes more alert to bad operators.


One thing that’s worked somewhat in the past is to get the attention of the IOG Marketing people who put together the SPO Digest that used to be emailed out monthly. I haven’t seen anything but the “Dev Digest” emailed so far this year… the last SPO newsletter I got was apparently combined with the Dev newsletter:

There also used to be “SPO Calls”, which it says here have been suspended: I haven't heard an announcement of any of these in the last 4 months. I hope I'm wrong about this SPO newsletter & regular Zoom call not having taken place at all in 2022. But even if so, maybe issues like this would provide a justification for opening that forum again.

Just be aware they won’t bring up any issue that runs counter to the IOG’s commercial goals and marketing agenda… e.g. issues of stake centralisation that have been killing small pool operators since the beginning. It may also be that this chain vulnerability to malicious behaviour is not something IOG is willing to admit publicly, especially with the generally bad publicity surrounding “malicious forks.” :face_with_monocle:


Could these pools be banned / penalised in wallet apps and Cardano tools such as pooltool, adapools, etc.?


They can’t be banned from the Cardano protocol but they can be penalised.

pooltool and adapools can “de-rank” them based on this sort of bad behaviour.

One option would be to rank them right at the bottom for an extended period if they produce a duplicate block. Such an action would be no risk to any other pool operator because a normally run pool will never produce a duplicate block.

Pooltool already records when duplicate blocks are produced. Adapools could just do that simple SQL query above every epoch and implement the de-ranking with their current metrics. They don’t even need to give the pool operators an opportunity to provide an excuse because duplicate blocks can only happen by running dual block producers.
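A per-pool fork count along those lines could look like this (a sketch: the `chain` table and columns follow the cncli query above, but the few inserted rows are made-up demo data so the example is self-contained rather than needing a synced cncli.db):

```shell
# Create a throwaway database with the cncli "chain" schema and demo rows.
# Rows 1 and 2 simulate a fork: same pool, same slot, different hashes.
db=$(mktemp)
sqlite3 "$db" "create table chain (block_number int, slot_number int, pool_id text, hash text);
insert into chain values
  (100, 59356900, 'poolA', 'aaa'),
  (100, 59356900, 'poolA', 'bbb'),
  (101, 59357000, 'poolB', 'ccc');"

# Same fork-detecting self-join as above, condensed to forks per pool.
forks=$(sqlite3 "$db" "select c1.pool_id, count(*)
  from chain as c1 join chain as c2
    on c1.block_number = c2.block_number
   and c1.slot_number = c2.slot_number
   and c1.pool_id = c2.pool_id
   and c1.hash < c2.hash
  where c1.slot_number >= 59356801
  group by c1.pool_id;")
echo "$forks"
rm -f "$db"
```

With this demo data it prints `poolA|1`: poolA forged two different blocks for the same slot, while poolB did not.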

What I find interesting about the metrics is that pools lose ranking points if they don't provide any social media contact links, whereas there can be good reasons why a pool operator may wish to remain private. For example, they may not want to advertise their pledge, for fear of a “wrench attack”. If the rankings were solely based on pool metrics, including uptime statistics and lifetime luck (as a surrogate measure for missed blocks), then such pools would rank higher. Yet 4ADA is ranked 53 and YYG is ranked 295, and they are clearly not implementing the protocol properly.


The last call on Zoom was 2 September. Now, these take place on Discord. Three have been held so far (Feb, Mar, Apr). The next one is Thursday: Discord.

The SPO Digest mail is also sent each month; maybe your subscription got removed somehow. You could try to subscribe again: SPO Community of Communities #002.


There is a small chance of this happening even in a correct failover setup. When your main BP goes down, your failover takes over; then your main BP comes back online just before your elected slot, before your failover sees that your main BP is back online.

You can’t know the uptime of the BP, only of the relays.

OK, but that is still a failure of the failover setup and running dual block producers. It shouldn’t happen. If it does then the failover setup needs a redesign.

It doesn't matter what the uptime of the bad operator's BP is. If their BP goes down, that only affects them. What matters to the rest of the network is that their relays do their job well in helping to shuffle blocks around the Cardano network. Their relays are what is listed for other nodes to connect to and rely upon.

I don’t think this is a failure of the failover setup… If your main BP comes back online, your failover can’t know this instantly…

You connect to multiple relays, and as long as there are enough well-working relays, there shouldn't be a problem for the overall network.

thanks, I can see that it’s been coming out now (though maybe not every month, from the URL below)… I guess I just hadn’t been keeping it on file :crazy_face:

They have put items in highlighted sections before (as the result of back-channel work with IOG Marketing, as in our discussion of key CIPs), and I think the issue here would be a good candidate for that. Perhaps pushing a request on Discord, where @benohanlon's team is active, would help achieve that. :face_with_monocle:


Obviously YOYOGI POOL (YYG) does not care about running the protocol correctly.

It is still running dual block producers:

2 out of 3 blocks this epoch (so far) have produced forks.
