Please Share Your Grafana Dashboards For Cardano-Node

Hi,

I’m interested in trying out some cool Grafana dashboards made especially for monitoring Cardano-Node via Prometheus. If you have made some dashboards already and are willing to share, please reply to this message and post a link to where we can get the source code for it. Thanks!

4 Likes

Here is one I found in the cardano-ops GitHub repo:

This file uses “prometheus” (lowercase p) as the datasource name. If you have a problem, first check whether your Prometheus datasource is named with a capital “P”.
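For reference, the datasource is referenced inside the dashboard JSON roughly like this (a hypothetical fragment for illustration; the exact field layout and metric names vary with your Grafana version and dashboard), so replacing every occurrence of “prometheus” in the file with your own datasource name also works:

{
  "panels": [
    {
      "title": "Block Height",
      "datasource": "prometheus",
      "targets": [
        { "expr": "cardano_node_metrics_blockNum_int" }
      ]
    }
  ]
}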

2 Likes

This includes two dashboards: one from IOHK and the other from @Umed_SKY.

4 Likes

I’ve just shared mine [0] on grafana.com :slight_smile:

It’s k8s-oriented, so it expects to filter metrics by namespace and pod. I’m using Loki/promtail for log collection, so it also includes some log panels filtered by log message type.

[0] https://grafana.com/grafana/dashboards/12469
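For anyone wondering what “filter by namespace and pod” looks like in practice, the panel targets in a dashboard like this take roughly the following shape (a sketch only, not the actual panels from [0]; the metric name, label names, and log filter string are assumptions that depend on your scrape config and node version):

{
  "datasource": "Prometheus",
  "expr": "cardano_node_metrics_blockNum_int{namespace=\"$namespace\", pod=~\"$pod\"}"
}

{
  "datasource": "Loki",
  "expr": "{namespace=\"$namespace\", pod=~\"$pod\"} |= \"AddedToCurrentChain\""
}

The $namespace and $pod parts are Grafana template variables, so the same dashboard can be pointed at different deployments.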

4 Likes

Hi.

Just a quick shoutout: if you would like to test or see the dashboard we use for our pools JAWS & JAWSX, it was just released and you can download it from GitHub (there are some screenshots in the repo as well). It is a derivative of the original by Umed [SKY].

4 Likes

That’s a nice dashboard!

2 Likes

Thank you for sharing your dashboard @ada_jaws! Based on it, I was able to create our own, which is for a less complex setup of only 3 separate servers: 1 core and 2 relays.

Sharing it here for anyone who has the same setup as ours at [PHRCK]. You can download it from [here].

3 Likes

Great work @nimrod. Good to see that you got it working, and thanks for sharing your work with the community.

2 Likes

Hi @nimrod,

Very nice dashboard that you created here. I wanted to try it out but I am stuck on the “alias” part of your JSON file. Where do I have to define alias=“relay1” etc.?

Thanks and best regards
Tom

@Tom_ADADE, you’ll have to specify that in the Prometheus YAML config file. Here’s how it appears in ours:

# global configs
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

scrape_configs:
  - job_name: 'cardano' # To scrape data from the running cardano-node
    scrape_interval: 15s
    static_configs:
      - targets: ['123.123.12.12:9999']
        labels:
          alias: 'relay1'
          type:  'cardano-node'
      - targets: ['123.123.12.13:9999']
        labels:
          alias: 'relay2'
          type:  'cardano-node'
      - targets: ['123.123.12.14:9999']
        labels:
          alias: 'core'
          type:  'cardano-node'

  - job_name: 'node' # To scrape data from a node exporter to monitor the linux host metrics.
    scrape_interval: 15s
    static_configs:
      - targets: ['123.123.12.12:9988']
        labels:
          alias: 'relay1'
          type:  'host-system'
      - targets: ['123.123.12.13:9988']
        labels:
          alias: 'relay2'
          type:  'host-system'
      - targets: ['123.123.12.14:9988']
        labels:
          alias: 'core'
          type:  'host-system'

That’s the content of the config file you pass as the --config.file argument when running Prometheus.
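Once Prometheus attaches those labels, the dashboard can pick out a specific node by its alias. As a rough illustration (a hypothetical target, not copied from the dashboard; the metric name depends on your cardano-node version), a panel query in the dashboard JSON would look something like:

{
  "datasource": "Prometheus",
  "expr": "cardano_node_metrics_blockNum_int{alias=\"relay1\"}",
  "legendFormat": "{{alias}}"
}

Just make sure the alias values in your prometheus.yml match the ones the dashboard JSON expects (relay1, relay2 and core in this case).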

Thanks for the tip, @nimrod, about pulling pool data from adapools.

Here is what I came up with after tweaking the above JSON.

:+1: Glad you got it working!