Does open source + “community reviewed” really work?

Looking back at Ethereum smart contract hacks (the DAO, Parity), I can’t help but think that the notion of “community reviewed” bestows a false sense of security.

If I remember correctly, the Parity smart contract was also audited, which makes things even worse.

Now I understand that mistakes happen in software development. It’s inevitable.

But I wonder if there is a way to structurally protect investors from devastating outcomes.

I really like the idea of using a functional programming language, or at least a “correct by construction” language defined in the K framework. Functional programming alone will probably reduce the universe of possible exploits by an order of magnitude.

I also really like the idea of separating the settlement layer into its own chain. It is a simple and brilliant idea that provides some structural safety for Cardano overall.

But I wonder if there is anything else that could be done to improve the security of the computational layer.

Assuming all smart contract code will be written in a K-defined language, we have some degree of built-in safety, but that may not protect against choosing a bad language in the first place.

What else can be done to ensure that the code executed on the CL lives up to the highest security standards?

Answering these questions with a high degree of assurance is what will drive adoption of Cardano by the financial sector.

Security is a huge factor in the financial sector. Ethereum has a horrible record here, judging not only by the outcomes but also by the architecture that allows them to happen.

In the eyes of those who make important decisions in the financial sector (such as which technology to use), all these past mistakes point to potential risks.

Cardano attempts to solve these shortcomings. But I wonder if there is a way to make Cardano’s security features 10x better than anything else available in the market today.

If that can be achieved, that’s when you will have a superior product and mass adoption.

Make it cheap, secure, and fast, and people will simply run out of reasons to stay with their current solutions.


Great write-up @ZCryt0Knight. You’ve hit the nail on the head as to why Cardano’s approach will result in code that is correct and mostly defect-free. As to what else… maybe you’ve heard of a piece of software called SQLite, which is used in thousands of projects and boasts incredible stability, almost no bugs, and rock-solid performance. In addition to being written and maintained by solid engineers like Cardano’s, the SQLite team is known for putting it through rigorous testing. Read more about it here: How SQLite Is Tested.

I haven’t looked at the codebase closely, but tests that cover 95-100% of the code execution paths should be a standard part of developing code, in addition to all of the things you mentioned above.

Since Cardano is pulling out all the stops, or nearly all of them, to produce a high-quality base/framework upon which to build for the next 4-5 decades, I would imagine comprehensive unit, system, functional, and performance testing to be part and parcel of their development approach.

Ultimately it’s impossible to guarantee 100% secure and functional code: it is flawed humans who write this stuff, and bugs will creep in eventually. So in addition to robust code and testing, what is also needed is the establishment of processes that stay in place for as long as Cardano exists, to address any concerns in a timely manner as soon as they arise.

I read somewhere that ‘perfection is achieved at the point of collapse’, meaning it is not achievable. But we can certainly aspire toward it, and IOHK is exactly that: a well-oiled attempt and aspiration toward a great product/future. Notice I didn’t say perfect :slight_smile:.


Thanks for posting a link to SQLite. I will take a look.

Yes, this is the direction I would like to explore. One way to think about this is to assume that all smart contract code is compromised and put structural roadblocks in place to minimize damage.

Another way, I’d think, is to constantly scan the codebase for vulnerabilities and roll out preemptive patches for affected smart contracts in the ecosystem.

Some might say smart contract security should be the responsibility of the parties that create them, and they would be right, except that in the case of crypto platforms, mishaps tend to contaminate the image of the whole ecosystem just by way of association.

Blockchain is not well understood by the financial sector, and the types of mistakes that happened in the Ethereum ecosystem make the path to adoption very hard, even though, technically, the Ethereum platform itself had nothing to do with those smart contract mistakes.

That’s why going the extra mile and beyond on security matters a lot.


In regard to smart contracts, Charles has talked about having LiquidPlutus (based on LiquidHaskell), which uses refinement types to enforce specifications at the type level so that programs are verified/guaranteed to do what is written in the specification. Whether IOHK takes this direction or another (possibly hs-to-coq, for verification with the Coq theorem prover), I’m confident they will end up deciding on a solution that enables formal verification of the smart contracts themselves and not just the languages via K. This is something I’m most curious about and am going to ask about at PlutusFest, so there should be more answers soon on this question.
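
To illustrate the flavor of refinement types, here is a minimal, hypothetical LiquidHaskell-style sketch (the `withdraw` function and its refinements are my own example, not IOHK code). The `{-@ ... @-}` annotations are the specification; the LiquidHaskell checker proves at compile time that the implementation satisfies them.

```haskell
module Withdraw where

-- Specification as refinement types (hypothetical example):
--   * the starting balance is non-negative,
--   * the amount withdrawn is between 0 and the balance,
--   * the result equals the old balance minus the amount, and is still non-negative.
{-@ withdraw :: bal:{v:Int | v >= 0}
             -> amt:{v:Int | 0 <= v && v <= bal}
             -> {v:Int | v >= 0 && v == bal - amt} @-}
withdraw :: Int -> Int -> Int
withdraw bal amt = bal - amt
```

A call site that cannot prove its amount is within the balance is rejected at compile time, rather than failing (or being exploited) on-chain.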


It would be awesome to get some answers directly from IOHK. Thank you!

I have a good degree of confidence in code written in a functional programming language.

I think they also want to integrate QuickCheck into Plutus. QuickCheck tests the code’s output against the desired properties by generating many randomized test cases.

This kind of property-based testing significantly boosts security without the need for formal verification, which normally involves mathematical proofs. Formal verification could be too time- and resource-intensive to do for each smart contract.
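
As a concrete sketch of what property-based testing looks like, here is a small, hypothetical QuickCheck example in Haskell (the `transfer` function and the conservation property are my own illustration, not Plutus code):

```haskell
import Test.QuickCheck

-- Move funds between two balances, rejecting overdrafts and negative amounts.
transfer :: Int -> Int -> Int -> (Int, Int)
transfer amount from to
  | amount < 0 || amount > from = (from, to)            -- invalid: leave balances unchanged
  | otherwise                   = (from - amount, to + amount)

-- Property: no transfer ever creates or destroys money.
prop_conservesTotal :: NonNegative Int -> NonNegative Int -> Int -> Bool
prop_conservesTotal (NonNegative from) (NonNegative to) amount =
  let (from', to') = transfer amount from to
  in  from' + to' == from + to

main :: IO ()
main = quickCheck prop_conservesTotal   -- runs 100 randomly generated cases by default
```

QuickCheck generates the inputs itself, so the property gets exercised against many cases a hand-written unit test would miss. It is not a mathematical proof, but it catches a lot for very little effort.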

I wonder if something like QuickCheck could be rolled out for other languages on Cardano.

That would be the best evidence of active, empirical code review.

There are already numerous property-based testing suites for various languages, and some that support several languages, such as: https://github.com/HypothesisWorks/hypothesis

It’s very common for people to say it’s too time-intensive to formally verify anything, but frankly I believe smart contracts are the #1 area of production software where it is going to make sense to use formal verification, at least in the near future.

Other software tends to be way too large to verify realistically, and you can easily edit it anyway. Smart contracts must be correct because money will be on the line (possibly billions eventually, once the ecosystem is well developed and used globally by millions) and you can’t update them (there could possibly be a way to code an updating feature into contracts that lets the author publish a new one and enables users to forward their funds from the initial contract to the new one, but there are a lot of hurdles and concerns that would need to be addressed for that). Not to mention that smart contracts typically aren’t all that large, and once one contract is formally verified it can be used as a trusted template for others to work from.

I think it’ll take time, but at the very least I’ll be pushing to build the desire to formally verify contracts within the community. My hope is that one day a project won’t be taken seriously unless they formally verify their dApp, but we’ll see what happens.


Thanks for the link. Glad to see someone has thought about that already. It would be great to see these tests integrated into the CL as a requirement before publishing a smart contract.

All else being equal, I’d rather have every piece of code formally verified, of course…

I was under the impression that formal verification required proofs, papers, and conferences. If so, it would be impractical (from a business perspective) for some smart contracts to go through the process.

This is a really good point. Maybe for some really important smart contracts (the ones that rank high on breadth of possible use cases + amount of money involved) it makes sense to go through the formal verification process.

Ah, you seem to be confusing formal verification with peer review. Formal verification essentially means using some sort of logic (FOL, dependent type theory, refinement types) to write a specification, which can then be checked to guarantee that the code matches the spec. This is generally done through the type systems of programming languages; however, the vast majority of languages don’t have an expressive enough type theory underlying their type system to facilitate it. One point of formal verification is that there is no need for experts to debate whether the code matches the spec (though they can argue about whether the spec itself properly captures the intended behaviour); the match is formally proven and thus correct.

Peer review and other checks and balances are still important, because if the system you design is inherently flawed, sure, you can formally verify that it follows what you intended, but that doesn’t mean it is actually a good system.
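
A tiny, hypothetical LiquidHaskell-style sketch of that point (my own example): the code below provably satisfies its specification, yet the specification is too weak to capture the intent, so verification alone doesn’t make it a good system.

```haskell
-- Spec (too weak): "withdraw returns a non-negative balance."
{-@ badWithdraw :: bal:{v:Int | v >= 0} -> amt:Int -> {v:Int | v >= 0} @-}
badWithdraw :: Int -> Int -> Int
badWithdraw _bal _amt = 0   -- verifies against the spec, yet silently destroys the funds
```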
