Relay node in Docker

Hi there,
I started moving my nodes into Docker. I'm using the image from:

I encountered two problems during the setup process:

  1. The first one is related to the topology file. My docker-compose file looks like:

version: '3.7'

services:
  cardano-relay-2:
    image: cardanocommunity/cardano-node:latest
    container_name: cardano-relay-2
    restart: unless-stopped
    ports:
      - "6067:6067"
    volumes:
      - /opt/cardano/cnode/scripts/env:/opt/cardano/cnode/scripts/env
      - /opt/cardano/cnode/sockets:/opt/cardano/cnode/sockets
      - /opt/cardano/cnode/db:/opt/cardano/cnode/db
      - /opt/cardano/cnode/files:/opt/cardano/cnode/files
      - /opt/cardano/cnode/scripts/
    environment:
      - NETWORK=mainnet

My topologyUpdater configuration looks like:

# User Variables - Change as desired #

CNODE_HOSTNAME=""  # (Optional) Must resolve to the IP you are requesting from
CNODE_VALENCY=1             # (Optional) for multi-IP hostnames
MAX_PEERS=10                # Maximum number of peers to return on successful fetch (note that a single peer may include valency of up to 3)
CUSTOM_PEERS=",6000"        # *Additional* custom peers to (IP,port[,valency]) to add to your target topology.json
                            # eg: ",3001|,3002|,3003,3"
#BATCH_AUTO_UPDATE=N        # Set to Y to automatically update the script if a new version is available without user interaction

When I enter the container by running:

docker exec -it node bash

and then run the script, the new topology file is created, but it doesn't contain the parameters for my BP node. How can I connect the relay node to the BP node? Is it alright if I add the BP node details into the CUSTOM_PEERS variable as a custom peer?
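Per the comment in the configuration above, each CUSTOM_PEERS entry has the form host,port[,valency], with multiple entries separated by |. A minimal sketch of how such a string splits into peers, using made-up example addresses (the IPs and hostnames here are placeholders, not real node details):

```shell
# Hypothetical example values; replace with your BP's real IP and port.
CUSTOM_PEERS="10.0.0.5,6000,1|relay.example.com,3001"

# Split on "|" into entries, then on "," into fields (valency defaults to 1).
IFS='|' read -ra entries <<< "${CUSTOM_PEERS}"
for e in "${entries[@]}"; do
  IFS=',' read -r host port valency <<< "$e"
  echo "peer host=${host} port=${port} valency=${valency:-1}"
done
```

Prints one line per peer, e.g. `peer host=10.0.0.5 port=6000 valency=1`.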

  2. After the topology file is updated, I have to restart the node to reload it. But then the script backs up the topology file, and my topology file again looks like:

  "Producers": [
    {
      "addr": "",
      "port": 3001,
      "valency": 2
    }
  ]

On my BP node I managed to solve this problem by adding extra volume into my docker-compose file:

      - /opt/cardano/cnode/files/topology.json:/opt/cardano/cnode/files/topology.json

When I tried the same on the relay node, this error popped up:

‘topology.json’: Device or resource busy

How can I restart the node and load the topology file that was created before?
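For what it's worth, "Device or resource busy" is the typical symptom of bind-mounting a single file: when a script renames or replaces topology.json, the mount still points at the old inode held by the container. A sketch of a workaround (untested against this particular image) is to bind-mount only the parent directory and drop the single-file mount, so renames happen freely inside the mount:

```
    volumes:
      # Mount the whole directory instead of topology.json itself;
      # the node then sees whatever file currently sits at this path.
      - /opt/cardano/cnode/files:/opt/cardano/cnode/files
```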

Thanks for any help :slight_smile:


I use the official cardano-node Docker image; I personally think it is better than the unofficial images.

I think it is better to go with the Docker host network than with the default Docker network.

I'm using P2P now, but when I was using the topology updater I ran the script from the Docker host; I think that makes some things much easier. Have you considered running topologyUpdater from the host instead of from the container?

I think you need to point your volumes at directories, not at individual files as you have; I think it is a cleaner way to do it.

This was my docker-compose file, which I used some time ago:

version: "3.5"

services:
  test-relay1:
    container_name: test-relay1
    network_mode: "host"
    build: ./
    restart: always
    volumes:
      - ./node-db:/data/db
      - ./node-config:/config
      - ./node-ipc:/ipc
    logging:
      driver: "json-file"
      options:
        max-size: 50m
        max-file: "100"

The topologyUpdater script lived in the ./node-config directory together with all the configuration files. I triggered it from the host; when needed, it wrote the new config into the same directory, and then I just had to restart the test-relay container for it to start using the new config.
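The host-side trigger can also be put on a schedule; a hypothetical crontab entry on the Docker host (the path is an example, and the updater is normally run hourly):

```
# Hypothetical crontab on the docker host; adjust the path and minute to taste.
# The container restart is intentionally NOT automated here: restart manually
# once you are happy with the fetched topology.
33 * * * * /home/user/node-config/topologyUpdater.sh
```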


I don't think your script is correct. I personally used the ones from CoinCashew:


Thanks for your reply! I did what you said and used the official Docker image. I also tweaked the script a little bit, and now I can run it from the host without any issues :smiley:

Can you tell me why you think it's better to run the container in host network mode? Also, do you use gLiveView?


version: "3.5"

services:
  cardano-relay-2:
    image: inputoutput/cardano-node:latest
    container_name: cardano-relay-2
    restart: unless-stopped
    ports:
      - PORT:PORT
    volumes:
      - ./db:/data/db
      - ./node-config:/opt/cardano/config
      - ./node-ipc:/ipc
      - ./logs:/opt/cardano/logs
    logging:
      driver: "json-file"
      options:
        max-size: 50m
        max-file: "100"
    environment:
      - NETWORK=mainnet
      - CARDANO_NODE_SOCKET_PATH=/ipc/node.socket

Host env variables:

export NODE_CONFIG=mainnet
export CNODE_CONFIG_DIR=$HOME/node-config
export CCLI='docker exec -it cardano-relay-2 /bin/cardano-cli'

# shellcheck disable=SC2086,SC2034

CNODE_PORT=PORT # must match your relay node port as set in the startup command
CNODE_HOSTNAME=""  # optional. must resolve to the IP you are requesting from
NETWORKID=$(jq -r .networkId $GENESIS_JSON)
CNODE_VALENCY=1   # optional for multi-IP hostnames
NWMAGIC=$(jq -r .networkMagic < $GENESIS_JSON)
[[ "${NETWORKID}" = "Mainnet" ]] && HASH_IDENTIFIER="--mainnet" || HASH_IDENTIFIER="--testnet-magic ${NWMAGIC}"
[[ "${NWMAGIC}" = "764824073" ]] && NETWORK_IDENTIFIER="--mainnet" || NETWORK_IDENTIFIER="--testnet-magic ${NWMAGIC}"

export CARDANO_NODE_SOCKET_PATH="${CNODE_HOME}/node-ipc/node.socket"

blockNo=$(${CNODE_BIN} query tip ${NETWORK_IDENTIFIER} | jq -r .block )

# Note:
# if you run your node in IPv4/IPv6 dual stack network configuration and want announced the
# IPv4 address only please add the -4 parameter to the curl command below  (curl -4 -s ...)
if [ "${CNODE_HOSTNAME}" != "CHANGE ME" ]; then
  T_HOSTNAME="&hostname=${CNODE_HOSTNAME}"
else
  T_HOSTNAME=''
fi

if [ ! -d ${CNODE_LOG_DIR} ]; then
  mkdir -p ${CNODE_LOG_DIR};
fi

curl -4 -s "${CNODE_PORT}&blockNo=${blockNo}&valency=${CNODE_VALENCY}&magic=${NWMAGIC}${T_HOSTNAME}" | tee -a $CNODE_LOG_DIR/topologyUpdater_lastresult.json
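The two flag-selection lines in the script can be sanity-checked in isolation; a minimal sketch with the mainnet values hard-coded instead of being read from the genesis file with jq:

```shell
# Hard-coded example values (normally: jq -r .networkId/.networkMagic < genesis file).
NETWORKID="Mainnet"
NWMAGIC="764824073"

# Same selection logic as in the script above.
[[ "${NETWORKID}" = "Mainnet" ]] && HASH_IDENTIFIER="--mainnet" || HASH_IDENTIFIER="--testnet-magic ${NWMAGIC}"
[[ "${NWMAGIC}" = "764824073" ]] && NETWORK_IDENTIFIER="--mainnet" || NETWORK_IDENTIFIER="--testnet-magic ${NWMAGIC}"

echo "${HASH_IDENTIFIER} ${NETWORK_IDENTIFIER}"   # prints: --mainnet --mainnet
```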


I do not use gLiveView, because I was not able to make it work with the official cardano-node Docker image. :slight_smile:

Nevertheless, I got to the point where everything I need is in Grafana, and per my limited understanding most of the data in gLiveView comes from those same metrics. Additionally, gLiveView seems to have some performance impact on the system, which I do not like.

I usually run my containers in host mode. In that case your Docker container is attached directly to the host's network; basically it means your container shares the same IP as the host.

There are some advantages: you don't need to expose each port separately, and you don't have to rely on Docker to manage your firewall rules (I often disallow Docker from touching my iptables rules at all, which would not work well with default Docker networking). If you use the default Docker network, Docker inserts all the firewall rules itself for port binding. Sometimes that does not work well (for example, the userland proxy consuming all RAM when exposing a big range of ports: moby/moby issue #11185 on GitHub), and if you secure your box with iptables yourself, all of Docker's manipulation of iptables makes a lot of mess, at least for my taste.

The disadvantage is that all ports are exposed from your Docker container, so you must firewall your Docker host properly and understand which ports your container exposes. Additionally, in host mode you can't run two instances of the same container on the same port; with the default Docker network you can, by binding them to different host ports (80:8080 and 81:8080, for example). Generally I find it much easier to understand what is going on with networking when my container simply uses the same IP; otherwise you need to keep track of iptables rules, Docker's port-binding rules, any other firewall you have, etc.
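The two approaches side by side, as compose fragments (a sketch with made-up service names; assume an image that listens on 8080 inside the container):

```
# Default bridge network: two instances of the same image, each mapped
# to a different host port (as in the 80:8080 / 81:8080 example above).
services:
  app-a:
    image: some-image        # hypothetical image listening on 8080
    ports:
      - "80:8080"
  app-b:
    image: some-image
    ports:
      - "81:8080"

# Host network: no port mapping possible or needed; the container binds
# directly on the host's interfaces, so only one instance per port.
#  app:
#    image: some-image
#    network_mode: "host"
```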

For some of my hosts I do use the default Docker network (for example an OpenVPN server), because I do not restart or reconfigure it often, maybe only at start time, and it usually runs without stopping for years.


Thanks for your super comprehensive comment :smiley: I thought about it again and decided to follow your advice and run the image in host mode.

Do you use VPN tunneling to connect to your BP node?

No, I don't use a VPN for accessing my BPs; I think it is a bit too much.


One more question: do you know how to control the node's port in host mode? I found some env variables in the Docker image, but they are not working.



Hi @rere-dnaw

I do not use host variables at all. I just put the config file in the node-config directory and run Docker with the proper command line.

If you remember my docker-compose, to make it work you need a Dockerfile with something similar to this:

FROM inputoutput/cardano-node:1.34.1
ENTRYPOINT ["/usr/local/bin/cardano-node", "run", "+RTS", "-N", "-A16m", "-qg", "-qb", "-RTS", "--topology", "/config/testnet-topology.json", "--database-path", "/data/db", "--socket-path", "/ipc/node.socket", "--host-addr", "", "--port", "6001", "--config", "/config/testnet-config.json" ]

OK, I get it. Thanks!