Too many missed slots happening, please help

Dear experts,
I set up the pool with CNTOOLS two epochs ago and got my first blocks in epoch 291. My system consists of 1 BP and 2 relays, all with 6 cores and 16 GB RAM, all running on the Condora VPS service. After only one epoch of operation I ran into the “Missed slot leader checks” problem shown in the attached screenshot, which I noticed while checking Guild LiveView.
You can see far too many missed slots in the status image. I am not sure why…
Would you experts help me and explain what the problem is and what I should do, please?
[screenshot: missed slots]


Do you see the missed slots incrementing during the epoch, or only at the epoch change?
If you only see it at the epoch change, then it’s fine (it will be fixed in the future).

Well, I just came here checking on this myself. Up to epoch 290 I had every assigned block successfully minted, but in 291 I missed the only block I had, and in 292 I have 0 blocks assigned. Still, checking gLiveView I see missed slot leader checks (screenshot below): the counter went from 549 to 550 at 11.7% into the current epoch…
[screenshot]
I’m on Contabo; I upgraded the BP to 16 GB/6 cores, plus 2 other relays with the same specs…
I failed to debug the missed slot last epoch since I can’t get Prometheus/Grafana working yet.

Check inside the logs why you missed the block assigned for 291. Did you rotate the keys, or perform any actions which could lead to losing the block?
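In case it helps, here is a minimal sketch of how the logs can be checked, assuming a guild-operators/CNTOOLS install where the node runs as the systemd service `cnode` and writes its JSON log to `${CNODE_HOME}/logs/node0.json` (the service name and log path are assumptions; adjust them to your setup):

```bash
# Follow the block producer's journal live (assumes systemd unit "cnode"):
journalctl -f -u cnode

# Or search the JSON log (default CNTOOLS path is an assumption here)
# for leadership/forging messages around the slot you were scheduled for:
grep -i "leader" "${CNODE_HOME}/logs/node0.json" | tail -n 50
```

If there is no trace of a leadership check around the scheduled slot, the node was probably too busy to run it (for example under memory pressure), which would fit the RAM suspicion below.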

I’m embarrassed to say I don’t know how to check the logs in the console; I couldn’t find a tutorial and I’m at a loss… What I can tell you is that in 291 the BP had 8 GB (which it also had in 290, when I successfully minted 3 blocks), and since then I upgraded to 16 GB. The keys have 30+ days left… Other than a possible low-memory problem, nothing has changed since the first 75 blocks.

Then yes, it can be related to RAM, but now it has 16 GB, right?

Correct. I assumed I was “punished” for missing a block in 291 with 0 blocks this epoch; I can live with that. But seeing those “missed slot leader checks” not being at 0% makes me worry a lot…

But is it still incrementing now?

Not since the first post. I recall it being higher right after the missed slot event, then dropping to 0.17%; it was at 0.2170% today when I posted above.

Honestly, I don’t recall that “Missed slot leader” section in gLiveView before, but then again I had never missed a block before…

How many relays do you have?

2 in total: 1 Contabo VPS with 16 GB RAM in a different location than the BP, and 1 bare-metal machine with 12 GB/8 cores right here in my office.

Great. Try removing the relays one by one from the BP topology and restarting the BP… Monitor for a few hours and check with which relay the missed slots are incrementing.
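For example, a rough sketch of what that looks like with the legacy (non-P2P) topology format, assuming a CNTOOLS-style layout where the topology lives at `${CNODE_HOME}/files/topology.json` and the node runs as the `cnode` service; the hostname and port are placeholders, not your real values:

```bash
# Keep only one relay in the BP topology at a time (placeholder host/port):
cat > "${CNODE_HOME}/files/topology.json" <<'EOF'
{
  "Producers": [
    { "addr": "relay1.example.com", "port": 6000, "valency": 1 }
  ]
}
EOF

# Restart the BP so the reduced topology is picked up, then watch
# gLiveView for a few hours to see whether missed slots keep growing.
sudo systemctl restart cnode
```

Then swap in the other relay and repeat the comparison.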

If the missed slots are incrementing with only the bare-metal relay… then keep only the remote relay and stop the bare-metal one. It can use a lot of bandwidth, and perhaps you have an issue with the home/office network…


Hang on, the BP is not showing an outbound connection to the Contabo relay, only an incoming one… So it’s only going out through the bare-metal relay that’s here in Argentina, which has humongous latency… Maybe that’s where it’s coming from… I’ll recheck topology.json and the firewall rules.
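A quick way to see which direction the connections actually go (the port numbers you compare against are your own node ports, not anything shown here):

```bash
# List the cardano-node process's established TCP sessions. A session whose
# remote (right-hand) address:port is a relay's node port means the BP
# dialed OUT to that relay; a session whose local port is the BP's own
# node port is an INBOUND connection from a peer.
sudo ss -tnp | grep cardano-node
```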


It’s all there… Both the firewall and the topology are set correctly, yet there’s no outbound connection to the relay, only inbound.
Where else could it be that I’m missing the BP’s IP address in the node? Any other config file?

Scratch that, I had a DNS name mismatch in the topology.json on the BP; let’s see after a restart.
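Two quick sanity checks after fixing the name, with `relay1.example.com` and port 6000 as placeholders (the second one only if you have cncli installed):

```bash
# 1) Does the hostname in topology.json resolve to the relay's public IP?
dig +short relay1.example.com

# 2) Is the relay's node port reachable from the BP?
cncli ping --host relay1.example.com --port 6000
```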


[screenshot]
2/2, let’s see how it goes from here on out…

Could 0 leader slots be a punishment for missing a block, or does that not count at all?

Nope. If you had a block and missed it, it was missed for a reason… What is the ticker, and what was the slot number from the cncli leaderlog?

Argie is the ticker. And how do I check past leaderlog positions?
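(For reference, and only a sketch: if you run the guild `cncli.sh` blocklog scripts, past leader schedules and their outcomes are stored in a local SQLite database and are also visible from the CNTOOLS Blocks menu. The path and the table/column names below are assumptions based on a default install, so check them against your own setup.)

```bash
# Query past leaderlog entries for epoch 291 from the guild blocklog DB.
# Path, table and column names are assumptions; adjust to your install.
sqlite3 "${CNODE_HOME}/guild-db/blocklog/blocklog.db" \
  "SELECT epoch, slot, status FROM blocks WHERE epoch = 291 ORDER BY slot;"
```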

I had an epoch with 0 blocks in the past, back when I had IOG delegation, so epoch 292 should give you more blocks :wink:

Let’s wait for the next epoch; in the meantime, keep monitoring the missed slots.
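One rough way to keep an eye on it outside gLiveView, assuming the node exposes its default Prometheus endpoint on 127.0.0.1:12798 and that the missed-slot counter is among the published metrics (both are assumptions; check the `hasPrometheus` setting in your node config first):

```bash
# Print the missed-slot-leader-check counter once a minute; the endpoint,
# path and metric name are assumptions based on the default node config.
watch -n 60 'curl -s 127.0.0.1:12798/metrics | grep -i slotsMissed'
```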


@Alexd1985 You’re the best. As always, thanks!
@SouthKoreaLee Sorry for the thread hijacking; let’s see if we can help you too.