For those who have been following my updates, you’ll know that recently I’ve been spending a lot of time designing, building, and testing the [CRAB] high-availability stake pool. As part of that, I’ve repackaged Cardano Node independently, as an opinionated alternative to the official container images, testing and running it in a highly-available infrastructure I originally designed for Isoxya. Today, I’m pleased to announce that I’m releasing the Docker image code open-source, as well as publishing pre-compiled container images to the Docker Hub registry. These images should be suitable both for development and production installs. Kindly note the disclaimers in the licence and documentation.
Cardano Node Docker open-source release (independent, unofficial)
This is a lot of work to make available to the community and I’m sure I’m not the only one who appreciates that. There is just one statement in the documentation I believe should be amended:
It is likely a matter of time before someone exploits a stake pool by offering malicious images.
Given the high-profile case here (which broke everyone’s hearts), already 3 weeks old, and which was itself based on an already-known means of injecting cryptocurrency-harvesting malware into a Docker image, maybe the threat could be characterised more broadly… as something that has happened before and will happen again, unless people are willing to follow strict protocols about the sources of Docker images & scripts?
Good suggestion. I’m happy to accept a suitably-worded PR.
One important clarification, though: from my understanding of that situation, the sad attack which happened wasn’t caused by injection into a cryptocurrency image, but by an exploit of an unsecured Docker Engine port. That’s a different attack vector with a different set of risks, but certainly worth warning about. It’s also worth noting that this precise style of attack wouldn’t have been possible using containers with an alternative orchestrator or host. I mentioned Docker because it’s what people are mostly familiar with, but I’m not actually using Docker at all, so that type of attack isn’t even possible in my setup.
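For anyone wanting to check their own host against that vector, here is a minimal sketch, assuming a Linux host with iproute2’s `ss` available; the conventional Docker Engine API ports are 2375 (plaintext) and 2376 (TLS), and the file paths checked are the common defaults rather than guarantees for every distribution:

```shell
# Warn if the Docker Engine API appears to be listening on a TCP port.
# By default the daemon listens only on a local Unix socket, which is safe;
# an unauthenticated tcp:// listener is the exposure behind the attack
# described above.
if ss -tln 2>/dev/null | grep -Eq '[:.](2375|2376)([^0-9]|$)'; then
    echo "WARNING: Docker Engine API may be exposed over TCP"
else
    echo "OK: no Docker API TCP listener found"
fi

# Also look for an explicit tcp:// host in common daemon configuration:
grep -Hs 'tcp://' /etc/docker/daemon.json /lib/systemd/system/docker.service || true
```

Passing this check doesn’t make a host secure, of course; it only rules out this one well-known misconfiguration.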
The main thing I had in mind when writing that documentation, however, was the risk of someone publishing a malicious Cardano Node container image, and someone installing it without realising it was malicious or had been compromised. This is a similar risk to accepting pre-compiled binaries from anyone else, and this style of attack has different risks and mitigations.
Happy to accept a suitably-worded PR, as I say, but it needs to describe accurately the attack which sadly happened to a member of our community. That is, as far as I’m aware (and someone please correct me if I’ve misunderstood or am not up-to-date), it was neither a malicious injection into a container image itself, nor a compromised or malicious image, but an unsecured port which allowed a root privilege takeover, given how Docker Engine operates out of the box.
I think it will be accurate if you insert the word “Cardano” in the disclaimer, maybe also broadening the term “stake pool” to cover anyone who might use it:
It is likely a matter of time before someone exploits a Cardano node by offering malicious images.
The third-party link above, which Jun also quoted in his own post, refers to an already-known exploit using pre-built container images to harvest cryptocurrencies in general (The Observed Attack in Detail > case #1, “A dedicated image built by a 3rd party”), so it is indeed applicable here.
I sympathise with this issue based on my work on a parallel problem: I found I couldn’t go ahead with my offline use of the Cardano node unless I built an OS environment from scratch, but I didn’t have a second system to do it on. I came up with a template & instructions for building a persistent Ubuntu USB with the Cardano software. The instructions are lengthy, which will leave many people wanting a customised OS image that they can just download.
But how, in general, would a newcomer know whether any of these images could be trusted, especially as the number of them out there increases? As a result, I’m planning to leave it as a DIY guide indefinitely, because it forces people to use known clean sources, in full consciousness of the risks of doing otherwise… not from us of course, but from the malevolent actors whose work would blend in more easily if there were too many pre-built images in the wild.
That’s the main reason I wanted to open up a discussion… about the use of images in general, rather than talking about any specific software vulnerability.
You raise excellent points, and I’m not sure there is any simple answer to your questions. At what point, and in whom, do you place trust? Do you trust random uploads from somebody online? How about somebody with a provable background you can research, but whom you still don’t know at all? How about somebody you do know, say you’ve worked with them before, but you are not close and you don’t know what’s going on in their life now? How about IOHK: do you trust them, and the images they publish?

Do you vet the code, which is vast, even if you compile from source? What about third-party libraries? Do you vet the Haskell Cabal dependencies? What about the OS or base sources? Do you trust them if they are an official Linux distribution, such as CentOS or Debian? Do you checksum all the downloads from those? Do you import and use GPG keys to verify the checksums? Do you check the owner of the package-signing GPG keys? Do you remember incidents like the Debian OpenSSL bug years ago, where a likely well-intentioned patch crippled key generation, making the resulting keys hugely less secure?

How do you download these images? Do you use a system you installed yourself? Do you share that system with anyone? Do you ever leave it unattended? Do you use an image on a USB stick? Do you wipe it and reinstall it every time, since a USB stick is mutable? Or do you keep it on you at all times? Do you check for hardware vulnerabilities in the firmware of the USB stick or hard drive?
And so on. These topics are hugely problematic, and I don’t think there are any clear answers, especially as many are not only technological but also philosophical and matters of personal behaviour, trust, threat models, etc. For what it’s worth, I considered not pushing actual images themselves to the registry for exactly the reason you describe. I’m open to reassessing that decision in future, and moving them back to my internal trusted registry, if there’s a clear signal from the community at large that it’s part of a coordinated approach to reduce risk within the community. But even that wouldn’t solve many potential issues.
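The checksum-and-signature questions above do at least have a concrete, mechanical shape. A minimal sketch of those two verification steps; the filenames and key identity here are illustrative placeholders, not any real project’s:

```shell
# 1. Verify a download against a published SHA-256 checksum list:
sha256sum -c SHA256SUMS --ignore-missing

# 2. Verify that the checksum list itself was signed by the key you expect.
# Import the signing key (filename is a placeholder) and check the
# detached signature on the checksum file:
gpg --import release-signing-key.asc
gpg --verify SHA256SUMS.asc SHA256SUMS

# 3. Compare the key's full fingerprint against one obtained out-of-band
# (project website over HTTPS, keyserver, in-person exchange):
gpg --fingerprint "release@example.org"
```

Even done perfectly, this only moves the trust question one layer down, to whoever controls the signing key, which is exactly the regress described above.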
I’ve updated the wording in the Git repository. What do you think: any better?