GHC 8.6.5 appears to have installed, but I'm hitting a wall with building cabal-install 3.2.0.0. The OS is Ubuntu 20.04, and since there is no prebuilt cabal-install 3.2.0.0 for aarch64, I am trying to compile cabal-install locally. The README.md says to use ./bootstrap.sh, but this fails to detect LLVM 6, even though I have both llvm-6.0 and llvm-6.0-dev installed from apt.
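For what it's worth, one common cause of this on Ubuntu is that the apt packages install only versioned tool names (llc-6.0, opt-6.0) into /usr/bin, while build scripts typically probe for unversioned llc/opt. A hedged sketch of a workaround, assuming apt's default install location:

```shell
# Assumption: the llvm-6.0 apt package keeps unversioned tools (llc, opt)
# in /usr/lib/llvm-6.0/bin. Putting that directory first on PATH lets
# detection scripts find them under their plain names.
export PATH=/usr/lib/llvm-6.0/bin:$PATH
llc --version     # should now report an LLVM 6.0.x version string
./bootstrap.sh    # retry the cabal-install bootstrap with LLVM visible
```

If your tools live elsewhere, `dpkg -L llvm-6.0` will list where the package actually put them.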
The preferred Nix installation also fails: the Nix installer appears to complete, but on trying to open a nix-shell, some expected files have not been generated.
I appreciate this is not Cardano-specific, but a working Haskell environment is a prerequisite for building the node, so I hope someone can help.
Hey, I also want to build a node on a Raspberry Pi 4. But today I came across some posts saying that there is a problem with GHC 8.6.5, which is not supported on ARM architectures. Some said cross-compiling could work. I found these links:
I will try it out as soon as my Raspberry Pi arrives. It should be here in a couple of days.
To compile the right versions of Cabal and GHC on a Raspberry Pi 4B, you can refer to the Dockerfiles I've put together on GitHub; they might help you get started.
Even if you do not use Docker, you can easily follow the steps described in them and try the same workaround on your system if you're running Ubuntu on your Raspberry Pi.
If your system's package manager differs from apt/apt-get, you'll need to adapt the commands to fit your system.
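For readers who skip Docker entirely, the rough shape of those steps looks like the sketch below. The package names are for Ubuntu 20.04 on aarch64, and the download URL follows the usual Haskell release layout; both are assumptions to verify against the actual Dockerfiles:

```shell
# Build prerequisites for bootstrapping cabal-install with the system GHC
# (package names as on Ubuntu 20.04; adapt for other distros):
sudo apt-get update
sudo apt-get install -y build-essential curl libgmp-dev libffi-dev \
    libncurses5-dev zlib1g-dev llvm-6.0 llvm-6.0-dev

# Fetch the cabal-install source tarball and run its bootstrap script:
curl -LO https://downloads.haskell.org/~cabal/cabal-install-3.2.0.0/cabal-install-3.2.0.0.tar.gz
tar xzf cabal-install-3.2.0.0.tar.gz
cd cabal-install-3.2.0.0
./bootstrap.sh                 # builds cabal using the GHC already on PATH
~/.cabal/bin/cabal --version   # the bootstrap installs under ~/.cabal/bin
```

Expect this to take a long while on a Pi; the build is CPU- and memory-hungry.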
Cross-compiling would not work at all (due to the Template Haskell requirements that I have already complained about a lot), but I think you can do the same kind of build that @Pascal_Lapointe did with Docker.
I have a Raspberry Pi 4 4GB which I intend to use for a node. I've read a lot about NixOS, but unfortunately it is a pain, if not impossible, to run it stably on a Pi. Have you managed to run a node on a Pi so far? Can you point me in the right direction? Any tips appreciated, and lots of kudos. Thanks.
I forgot to post back about my tests with cabal 3.2. No, it did not work out…! I tried different things, and building Cardano Node with the latest Cabal and GHC versions failed every time. Even worse, compiling on a Raspberry Pi takes time, and the process from zero took a whole day. I'll stick with 3.0, since it's faster and… well, it just works.
ON RPI SSD STORAGE:
An SSD over USB3 is effectively much faster on a Raspberry Pi than an SD card. The other reason for using an SSD instead of an SD card is long-term reliability.
Rough explanation: flash memory cells can only be written a finite number of times before wearing out. (Don't worry yet; the number is high.) Memory cards and SSDs come with a controller that cycles through free memory blocks to spread the wear evenly. SSDs usually come with better controllers and usually have larger capacity. Larger capacity means more space to spread the wear, leading to a longer lifespan.
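To make the capacity point concrete, here is a back-of-the-envelope calculation. The cycle count is an illustrative assumption, not a spec for any particular drive:

```shell
# Assume ~1000 program/erase cycles per cell. Because the controller spreads
# writes across all cells, total write endurance scales with capacity.
capacity_gb=250
pe_cycles=1000
echo "approx. lifetime writes: $(( capacity_gb * pe_cycles )) GB"
# prints: approx. lifetime writes: 250000 GB  (~250 TB)
```

Doubling the capacity to 500 GB doubles that write budget, which is the lifespan argument above.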
Like @Trigger, I would propose going for 250 GB. Upgrading later to 500 GB or more is trivial, and you'll leave time for the price per GB to drop.
IMPORTANT: You'll need a SATA-to-USB3 cable to build this setup. Do some research before buying: some cables, even if they seem to work at first, won't allow you to boot your RPi, even if you boot from the SD card and only use the SSD as your root "/". (e.g., Sabrent cables have a USB3 issue; see @alessandro's answer below.)
Glad I finally found this thread. I had been trying unsuccessfully to create an RPi 4B node for weeks before giving up and standing up nodes on my real server using VirtualBox.
This thread proves it wasn’t “just me”.
Anywho: wouldn't it be better to install everything to the MicroSD, but connect the SSD/flash drive over USB3 and mount it as the db folder? That way, the heavy database writes land on the SSD instead of the MicroSD, you improve performance, AND you could RAID1 the db. You would simply keep a clone of the MicroSD, should that fail.
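A minimal sketch of that layout, assuming the SSD shows up as /dev/sda and the node's database lives under /home/pi/cardano-node/db (both paths are assumptions; cardano-node can be pointed at whatever directory you mount):

```shell
# One-time setup: format the SSD and mount it where the node keeps its db.
# WARNING: mkfs erases the device; double-check the device name with lsblk.
sudo mkfs.ext4 /dev/sda1
mkdir -p /home/pi/cardano-node/db
sudo mount /dev/sda1 /home/pi/cardano-node/db

# Make the mount permanent; noatime trims a little extra write traffic:
echo '/dev/sda1 /home/pi/cardano-node/db ext4 defaults,noatime 0 2' \
  | sudo tee -a /etc/fstab
```

Everything else (OS, configs, binaries) stays on the MicroSD, so cloning the card gives you the recovery path described above.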
That was my plan, anyway.
I was about to give it another go, for kicks. Are you all still running on RPis? Did the Cabal problem ever get resolved, or are you still on 3.0?
I've had a Pi 4 8GB BP node and a Pi 4 4GB relay node running on the testnet since January using MicroSD cards. I use an AWS c4g instance to build the binaries (CBA to wait for them to build on the Pi), then transfer them to the Pi and launch. Started on 1.25, now on 1.27, using GHC 8.10.4.
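For anyone copying that build-elsewhere approach, the transfer step is just plain scp, since the cloud box and the Pi are both aarch64. Host names and paths below are placeholders, not a description of my actual setup:

```shell
# On the arm64 build instance, after `cabal build` finishes, collect the
# binaries out of the dist-newstyle tree:
cp "$(find dist-newstyle -name cardano-node -type f)" .
cp "$(find dist-newstyle -name cardano-cli -type f)" .

# Ship them to the Pi; same architecture, so no recompilation is needed:
scp cardano-node cardano-cli pi@my-relay.local:~/bin/
```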
I am inspired to give it another go, perhaps after Alonzo. My pool's mission is to stand up relays in neighboring "pioneer" countries. Pi relays would be ideal, although they may be inadequate for a post-Alonzo mainnet, based on what I have been reading lately. Charles mentioned optimizations will be forthcoming this year, though.
We shall see! Thanks again for the feedback. It is encouraging.