Has anyone else noticed that sync on Daedalus 4.0.4 was dramatically faster than on 4.0.5? Does anyone know why, and what to do about it?
On a few machines I installed a fresh copy of 4.0.4 and it was good to go in about an hour on hardwired gigabit. The laptop took somewhat longer over 5G, but that is to be expected of wireless. Updating from 4.0.4 to 4.0.5 went fine, but syncing even a few days of data seemed to take about as long as the initial sync, which baffled me.
Out of curiosity I did a fresh install of 4.0.5 and fired it up. That was yesterday. On the same gigabit connection it took over 20 hours to sync! When I monitored network traffic earlier today with nothing else running, I was seeing over 80% CPU and memory utilization but only about 300 kbps down?! That is ludicrous; I have seen PoW miners use fewer resources in the distant past. This is a wallet!
When I tried importing topology from known relay nodes for my region, the Daedalus node crashed. When I tried increasing the valency on the default relays-new.XXX entry, the node failed to connect and became unresponsive. Everything else I tried just made it more unstable.
I was about to try a fresh install with all the tracing disabled in the config to maybe reduce I/O, but that is a shot in the dark. Apparently performance is not a consideration at this stage, which would be fine if it did not alienate new users. Can you imagine being in a more rural area and having to wait weeks to use an application after installing and launching it?
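For anyone wanting to try the same experiment: the tracing switches live in the node's config.json inside the Daedalus data directory. The fragment below is a sketch only; key names have varied between cardano-node releases, so check them against the config file your install actually ships before editing.

```json
{
  "minSeverity": "Error",
  "TracingVerbosity": "MinimalVerbosity",
  "TraceMempool": false,
  "TraceBlockFetchClient": false,
  "TraceChainSyncClient": false
}
```

Raising minSeverity to Error alone should cut most of the log I/O; the individual Trace* flags are a finer-grained follow-up if that is not enough.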
Has anyone successfully optimized Daedalus to connect to reliable edge relay nodes for a geographic region and/or could point me to documentation of how to do so?
Yup, I found that earlier and it was the first thing I tried.
Adding only relay nodes from my continent just caused an infinite crash loop when starting the cardano node. Of course it is entirely possible I had a typo somewhere, as I used vi to filter/edit. I will give it another go … I did not try a node filter and valency > 1 together.
UPDATE: That did not work either. Most of the nodes in that explorer topology JSON seem unreliable at best, so even if some are working, the constant timeouts on IP subscribe and the errors connecting to most probably make it a wash. I set it back to the IOHK relays-new.XXX and relays.XXX entries, but with valency 10 each, and that produces significantly fewer errors when tailing node.log, even though the peak download rate is still under 2 Mbps and not at all consistent.
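For reference, the resulting topology file looks like the sketch below (using the full IOHK hostnames from the curl test later in this thread; adjust to taste). The Producers/addr/port/valency layout is the legacy topology.json format this node version reads.

```json
{
  "Producers": [
    {
      "addr": "relays-new.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 10
    },
    {
      "addr": "relays.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 10
    }
  ]
}
```

Valency here is how many distinct IPs behind each DNS name the node keeps subscriptions to, which is why bumping it from 1 to 10 changes the error rate.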
I give up for now … it’s better than dial-up at least!
That’s actually a great idea in addition to the filter! I think I will use nmap, however, so I can script it and produce a topology file containing only the filtered AND responsive nodes. If this works I will post the shell script here, with instructions for integrating it into Daedalus startup, for anyone else who wants the workaround.
I am about half done playing with Python. A wrapper that launches the script and then launches Daedalus would be ideal; I’ll consider adding that this afternoon. Currently debating whether I should back up the existing topology.yaml every time the script runs before writing out a new one, only when there are changes, or maybe only the first time?
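Of the three options, backing up only when the contents actually change seems the best trade-off (no pile of identical .bak files, but the last distinct topology is always recoverable). A minimal sketch of that middle option; the function name and .bak naming are my own, not from any Daedalus tooling:

```python
import hashlib
import shutil
from pathlib import Path


def backup_if_changed(topology: Path, new_text: str) -> bool:
    """Write new_text to the topology file, backing up the old copy
    first only if the contents actually differ. Returns True when a
    backup was written."""
    if topology.exists():
        old = topology.read_bytes()
        if hashlib.sha256(old).digest() == hashlib.sha256(new_text.encode()).digest():
            return False  # identical contents: leave file alone, no backup
        # contents differ: preserve the old copy before overwriting
        backup = topology.parent / (topology.name + ".bak")
        shutil.copy2(topology, backup)
        topology.write_text(new_text)
        return True
    # first run: nothing to back up yet
    topology.write_text(new_text)
    return False
```

Calling it on every run then gives "backup only if there are changes" for free, and the first run degenerates to a plain write.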
@Alexd1985 You seem extremely active on the Cardano forums. Any chance you are also a Python aficionado? I rarely use the language, so it is not my strongest coding tool; it just seemed the best fit for automating this particular workaround. While I know how to use nmap on the terminal, the script is raising a KeyError because the hostname is not a key in the scan result as expected. I could switch to a more generalized ping check, but I suspect most nodes block ICMP as a general firewall rule; at least I would if I were a stake pool operator setting up firewalls.
The scan method here invokes the command as nmap -p PORT -sV ADDR or equivalent. That part works fine, although it is a tad slow. It is the subsequent line, which checks the result to see whether the relay node's TCP port is in the open state, that raises the dictionary KeyError. Basically it cannot find the ADDR used in the scan among the result's keys …
I think I will trace what requests yougetsignal sends and adjust mine to match. I might be able to just piggyback off their web server with a simple POST request instead of using nmap, as a full port scan is overkill for this use case.
It’s 90% done but that last 10% is usually 90% of the work
I will come back to debug and simplify this later tonight!
<p><img src="/img/flag_green.gif" alt="Open" style="height: 1em; width: 1em;" /> Port <a href="https://en.wikipedia.org/wiki/Port_3001" target="_blank">3001</a> is open on relays-new.cardano-mainnet.iohk.io.</p>
Response body from yougetsignal when the port is not open:
<p><img src="/img/flag_red.gif" alt="Closed" style="height: 1em; width: 1em;" /> Port <a href="https://en.wikipedia.org/wiki/Port_3001" target="_blank">3001</a> is closed on relays.cardano-mainnet.iohk.io.</p>
The complete curl-style request to yougetsignal:
curl -X POST 'https://ports.yougetsignal.com/check-port.php' --compressed --data-raw 'remoteAddress=relays-new.cardano-mainnet.iohk.io&portNumber=3001'
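That curl call translates to the standard library without any third-party dependency. The flag_green/flag_red check below is a heuristic based on the response bodies pasted above, not a documented API contract, so verify it against a live response before relying on it:

```python
import urllib.parse
import urllib.request

CHECK_URL = "https://ports.yougetsignal.com/check-port.php"


def is_open(body: str) -> bool:
    """The open/closed state is signalled by which flag image the
    response embeds (flag_green.gif vs flag_red.gif)."""
    return "flag_green" in body


def check_port(host: str, port: int) -> bool:
    """Replicate the curl request above and report whether the port
    shows as open."""
    data = urllib.parse.urlencode(
        {"remoteAddress": host, "portNumber": port}
    ).encode()
    req = urllib.request.Request(CHECK_URL, data=data)  # POST because data is set
    with urllib.request.urlopen(req, timeout=10) as resp:
        return is_open(resp.read().decode("utf-8", "replace"))
```

Keeping the body parsing in its own small function makes it trivial to swap the heuristic if the site's markup changes.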
Yup, it was much better. Roughly 100x faster than using IOHK alone in my very unscientific test.
However, during testing I exceeded yougetsignal's daily request limit. Of course proxies, VPNs, and other methods could be used to circumvent this, but I am not interested in bypassing the reasonable usage expectations of a tool offered for educational purposes.
I am emailing the owner of the site to strike up a dialog, as I can understand why the limit exists: to deter people from using the tool for nefarious purposes rather than development and education. Perhaps he might be interested in officially hosting something for the interim that is limited to known Cardano relay nodes?
Of course if they don’t like this activity, as harmless as the intent may be, then there will need to be another workaround.
FYI: no reply from Kurt, but he has probably moved on to other, more interesting projects since college.
Also note that 100x faster than the original is still pathetically slow, bordering on unusable in the 21st century. I will start tracing memory and CPU activity, as there is no legitimate reason for them to be so high when network usage is basically less than 5% of available bandwidth.
I am actually becoming more concerned with the other resource usage. Only two kinds of application peg all CPUs and hit swap and virtual memory within a few minutes of launching. The question is becoming: is the Cardano implementation malicious, or atrocious?
The latter means the project is full of bugs and not ready for use yet. I don’t want to talk about what the former means.