This is a full report of the node diversity workshop we had last week. Posting it on the forum allows us to continue the discussion on the identified next steps (see below). As multiple CIPs are part of the outcomes, this is posted to the CIPs sub-category, but ideally there would be a Node development sub-category for topics around developing core Cardano components? Feel free to use this thread to provide feedback on that idea and/or the workshop outcomes. Thanks for reading and sharing the love of node diversity!
Introduction
As Cardano enters the age of Voltaire with fully decentralized governance, few things in the Cardano ecosystem remain centralized. However, all blocks are still produced by a single node implementation, whose development is funded by a single company. Naturally, quite a few people have set out to change that, and individual aspects of the Cardano protocol have already been successfully ported to other languages and technologies.
However, what makes Cardano ... Cardano? As discussions start across the multiple teams working on Cardano components - current and future - in initiatives like the CIPs and cardano-blueprint, we thought it would be a good time to meet up in person, discuss, and work together on the things that maintain the integrity of the Cardano network while also supporting node diversity.
Summary
At the start of Paris Blockchain Week, a group of Cardano node developers gathered at the Tweag offices in the city for a node diversity workshop. Over three days, from April 7 to 9, participants from the Cardano Foundation, Harmonic Labs, IO Engineering, Sundae Labs, Tweag, TxPipe, and other affiliations shared recent successes, exchanged new insights, and collaborated to address ongoing challenges. Using the Open Space Technology workshop format, the 25 participants set their agenda after an introductory session focused on sharing expectations. In total, around 25 sessions took place across up to three concurrent tracks.
Topics ranged from demonstrations, status updates, and experience reports to Q&A sessions on new Cardano node implementations and tools, including a look at testing practices in Ethereum. Other sessions focused on upcoming features, such as Leios, and on building consensus around client interfaces, file formats, and test vectors. A recurring theme was the importance of sharing knowledge through CIPs and the new Cardano Blueprint initiative.
Attendee list
- Adam Dean: creator, SPO, CIP editor
- Alexander Esgen: involved on ouroboros-consensus, Tweag
- Alexander Nemish: author of scalus, Lantr
- Andre Knispel: formal methods on ledger specs, IOE
- Arnaud Bailly: involved on amaru, keen on conformance testing nodes, CF
- Charles Hoskinson: sponsor
- Chris Gianelloni: involved on amaru and dingo, Blink Labs, joined virtually
- Christos Palaskas: test engineer, lead partner chains
- Damien Czapla: developer relations on amaru and pragma
- Josh Marchard: involved on amaru, Sundae Labs
- JJ Siler: sponsor
- JP Raynaud: leads on mithril, PaloIT
- Kris Kowalsky: sponsor, Modus Create / Tweag
- Juergen Nicklisch-Franken: benchmarking & tracing of cardano-node, IOE
- Marcin Szamotulski: leads on ouroboros-network, IOE
- Martin Kourim: leads testing of cardano-node, IOE
- Matthias Benkort: leads on amaru, developed aiken, CF
- Michael Smolenski: product owner, IOE
- Michele Nuzzi: typescript node, Harmonic Labs
- Nicholas Clarke: involved on cardano-ledger and ouroboros-consensus, Tweag
- Ori Pomerantz: contributor to ethereum/tests, only Wednesday
- Paul Clark: architect to modularize things, IOE
- Pi Lanningham: works on amaru and Leios, Sundae Labs
- Roland Kuhn: works on Leios and amaru, CF
- Ricky Rand: sponsor, IOE
- Santiago Carmuega: leads on amaru, TxPipe, built many Rust tools
- Sam Leathers: product owner of cardano-node, IOE
- Sebastian Nagel: knows people, IOE
- Stevan Andjelkovic: involved on amaru, especially (conformance) testing, CF
- Vladimir Volek: works on blockfrost, uses nodes a lot, BlockFrost
Sessions
General update on node implementations
Also includes: Modular architecture session / Acropolis update (30min)
Facilitator: Matthias Benkort / Paul Clark
Location: Conference room
Start: Monday, 14:15
End: 15:30
Result
- Matthias started by showing a quick demo of https://github.com/pragma-org/amaru/
- Damien showed the Amaru roadmap
- Presentation of Acropolis status and behaviour
- Potential action item out of the meeting: Find a way to explicitly map out the behaviours of a node in a logical sequence of activities; reference it against the current specifications; cross-reference the gaps; document what's missing
Conformance testing against the ledger specification
Facilitator: Andre Knispel
Location: Meeting room
Start: Monday, 14:50
Result
- Conway used more generated test cases than e.g. Alonzo -> however, more regressions?
- Framework was (too) early still and likely the generators were not up to the task
- Will we ever be sure to not need manually written test cases?
- Conformance tests need not imply that all tests are generated
- A canonical ledger state format is going to be needed / important for this
- Related to this issue: https://github.com/IntersectMBO/cardano-ledger/issues/4892
- We identified that the hand-written test cases are useful to create interesting test vectors (a sketch of consuming such vectors follows after this list)
- We saw from plutus that test vector data is easily picked up by multiple implementations
- If we have generators where testing would be "in-the-loop", we ought to have better shrinking (not up to par for the Haskell conformance test suite right now)
- Shrinking on the test vector with a validating binary in-the-loop could be possible too
- A re-usable ledger state format sounds easier than it may be
- Hashed data structures inside it?
- Ledger state not intended as an interface
- Can be described by a CDDL, but it could be bad
- It just happened to be = whatever the Haskell types are
- Mithril is also interested / interfaces the ledger state snapshot
- Testing not only the top-level interface, but also of sub-systems
- Usually states of those sub-systems are subsets
- Problem that implementations would need to agree on which and how the ledger validation is broken down into sub-problems
- Already not 100% consistent between spec and Haskell implementation
- Most prone to differences are the epoch-boundary things (e.g. things in TICK)
- However, it's probably still a good place to start on the interface any ledger implementation needs to agree on = block validation
- Equivalence check a reshuffled / alternative model of the specification possible too
- Could work if an implementation wants to structure the semantics differently
- This is already happening with the Agda spec <-> Haskell implementation
- Is this level of formal methods required, or can we provide a more appropriate entry point to developing a ledger?
- Testing is easier to adopt, but tests will never prove the absence of a bug
- Status of the conformance test generators?
- Some scenarios are very hard to cover
- Conway ledger bug findings were not found by generators and still required keeping regression test cases
- Can coverage checks help us here? -> guided generation?
- Conformance test the networking like we're doing in Leios?
- How to get good coverage
- Mainnet syncing is not too bad for that
- Golden tests (currently being worked on by Sundae)
- Reverse tests, introduce bugs and see if the tests find them
- It seems like we have agreement on how to move forward
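To make the test-vector idea from this session a bit more concrete, here is a minimal Rust sketch of what consuming shared conformance vectors could look like. All names (TestVector, LedgerState, validate_block) and the pass/fail logic are illustrative assumptions, not an agreed interface or format; a real suite would deserialize CBOR fixtures agreed upon across teams.

```rust
// Minimal sketch of consuming shared conformance test vectors.
// All names here are hypothetical placeholders.

#[derive(Debug, Clone, PartialEq)]
struct LedgerState(Vec<u8>); // opaque, canonical serialization

struct Block(Vec<u8>);

struct TestVector {
    name: &'static str,
    pre_state: LedgerState,
    block: Block,
    // `None` means the block must be rejected.
    expected_post_state: Option<LedgerState>,
}

// The block-level interface a ledger implementation would plug in here.
fn validate_block(pre: &LedgerState, _block: &Block) -> Result<LedgerState, String> {
    // Placeholder: a real implementation applies the ledger rules.
    Ok(pre.clone())
}

fn run_vector(v: &TestVector) -> Result<(), String> {
    match (validate_block(&v.pre_state, &v.block), &v.expected_post_state) {
        (Ok(post), Some(expected)) if &post == expected => Ok(()),
        (Ok(_), Some(_)) => Err(format!("{}: post-state mismatch", v.name)),
        (Ok(_), None) => Err(format!("{}: block was expected to be rejected", v.name)),
        (Err(_), None) => Ok(()),
        (Err(e), Some(_)) => Err(format!("{}: unexpected rejection: {e}", v.name)),
    }
}

fn main() {
    let vector = TestVector {
        name: "empty-block",
        pre_state: LedgerState(vec![]),
        block: Block(vec![]),
        expected_post_state: Some(LedgerState(vec![])),
    };
    println!("{:?}", run_vector(&vector));
}
```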
Back pressured, deterministic processing networks
Facilitator: Roland Kuhn
Location: Piano Lounge
Start: Monday, 15:00
End: 15:55
Result
- We want to declaratively design a "network" (of processing nodes) that can be used either in a (simulation) testing context or an actual runtime environment
- What are the challenges?
- In the case of actors, it's simple: pull in from the mailbox, process and update state
- In the case of a pipeline of "processes", things are more complicated because of "multiple" outputs
- Outputs can be sent to different boxes, possibly with "joins"/"meets" at one point
- Build a network with small building blocks connected
- We want each box to be a (state, input) -> (state, effect) (see the sketch after this list)
- Async/await is just that (in disguise)
- Creates an explicit control flow and programmer is in control
- This reminded us of the main Hydra.HeadLogic.update function
- Structuring it this way helped us to test input processing deterministically
- async/await has some advantages:
- "Logical" and explicit control flow
- Scoping and local variables
- Alternative is to have explicit monadic "flatMap"
- Being able to reconstruct/compile the sequential code to an explicit state machine
- Awkward syntax?
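As a rough illustration of the "(state, input) -> (state, effects)" box discussed above, here is a minimal Rust sketch; the types and the toy runtime loop are assumptions for illustration only, not Amaru or Hydra code.

```rust
// Minimal sketch of a "processing box" as a pure function
// (state, input) -> (state, effects). All names are illustrative.

#[derive(Default)]
struct CounterState {
    seen: u64,
}

enum Input {
    Tick,
    Message(String),
}

enum Effect {
    Send { to: &'static str, payload: String },
    Log(String),
}

// The pure step function: no IO, fully deterministic, trivially testable.
fn step(mut state: CounterState, input: Input) -> (CounterState, Vec<Effect>) {
    match input {
        Input::Tick => {
            state.seen += 1;
            let log = Effect::Log(format!("tick #{}", state.seen));
            (state, vec![log])
        }
        Input::Message(m) => {
            let send = Effect::Send { to: "downstream", payload: m };
            (state, vec![send])
        }
    }
}

fn main() {
    // A toy runtime: feed inputs and interpret the returned effects.
    // A deterministic simulator would instead collect the effects and
    // assert properties on them, without performing any real IO.
    let mut state = CounterState::default();
    for input in [Input::Tick, Input::Message("hello".into()), Input::Tick] {
        let (next, effects) = step(state, input);
        state = next;
        for effect in effects {
            match effect {
                Effect::Log(msg) => println!("log: {msg}"),
                Effect::Send { to, payload } => println!("send {payload:?} to {to}"),
            }
        }
    }
}
```

Because the step function is pure, the same code can run under a simulator (which records and checks effects) or a real runtime (which executes them).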
Blink Labs intro + status on gouroboros / dingo
Facilitator: Sebastian Nagel
Location: Conference room + Google Meet https://meet.google.com/acg-irnp-das
Start: Monday, 16:00
End: 16:45
Result
Tracing and Observability in multi-node era
Facilitator: Juergen Nicklisch-Franken
Location: Piano Lounge
Duration: 15min
Start: Monday, 16:00
End: N/A
Why Tracing Matters in a Multi-Node Era
"As we revolutionize the Cardano node, we're entering a new phase: one where diversity is not just in stake pools or smart contracts, but in the nodes themselves - diverse implementations, optimized for varied use-cases, platforms, and performance profiles."
"With new Cardano nodes being built in Haskell, Rust, Typescript, and beyond, tracing provides a unifying interface across these heterogeneous systems. It's the heartbeat of observability - and the foundation of accountability."
Key Roles of Tracing in This Landscape
- Cross-Node Observability: Unified traces give us a clear, consistent way to see how each node behaves, performs, and fails.
- Accelerated Debugging: Tracing provides the evidence of execution. It speeds up root-cause analysis and reduces guesswork in triage.
- Trust & Interoperability: To ensure independent implementations can interoperate safely, we need to observe and compare their behavior under identical network conditions. Tracing enables this.
From Observation to Formal Analysis
"Beyond just watching the system, we want to ask formal questions - and get reliable answers."
Thatâs where Linear Temporal Logic (LTL) comes in. It lets us write temporal assertions like:
G((submit_tx ∧ validated_tx) → F included_in_block)
(Every submitted transaction that is validated will eventually be included in a block)
G((submit_tx ∧ ¬validated_tx) → G ¬included_in_block)
(If a transaction is submitted but not validated, it must never be included in a block)
G ¬(rollback_depth > k)
(The chain never rolls back more than k blocks)
These properties aren't just ideas - we can run them over actual trace logs and validate system behavior over time.
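As a rough illustration (not an existing tool), here is a minimal Rust sketch of checking one such property over a finite trace log; the event names, the trace format, and the bounded reading of "eventually" (only up to the end of the recorded trace) are all assumptions.

```rust
// Minimal sketch of checking a temporal property over a finite trace,
// in the spirit of G(submit_tx -> F included_in_block). A real checker
// would parse actual node trace output and support a proper LTL syntax.

#[derive(Debug, Clone, PartialEq)]
enum Event {
    Submitted { tx: &'static str },
    Included { tx: &'static str },
}

// "Globally, every Submitted(tx) is eventually followed by Included(tx)",
// checked only up to the end of the finite trace.
fn every_submit_eventually_included(trace: &[Event]) -> Result<(), String> {
    for (i, e) in trace.iter().enumerate() {
        if let Event::Submitted { tx } = e {
            let later_included = trace[i..]
                .iter()
                .any(|later| matches!(later, Event::Included { tx: t } if t == tx));
            if !later_included {
                return Err(format!("tx {tx} submitted at index {i} but never included"));
            }
        }
    }
    Ok(())
}

fn main() {
    let trace = vec![
        Event::Submitted { tx: "a" },
        Event::Included { tx: "a" },
        Event::Submitted { tx: "b" },
    ];
    // "b" is never included, so the property fails on this trace.
    println!("{:?}", every_submit_eventually_included(&trace));
}
```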
The Importance of Conformance/System Testing in a Diverse Node Environment
"In a world of many nodes, what matters isn't just that each one runs - but that all nodes behave consistently across the network."
Conformance testing, with realistic workloads and precise trace analysis, becomes our strongest ally. It's how we prove compliance and detect divergence. It's how we harden the protocol.
A Concrete Project Roadmap for Trace-Based Validation
To bring this vision to life, we propose the following plan, as a base of discussion:
- Identify Required Traces: Define the minimal set of trace events that every node must emit - for comparability, diagnostics, and correctness checking.
- Formalize the Trace Specification: Create a schema or grammar for trace events. This ensures all nodes speak the same observability language (a sketch follows after this list).
- Build a Library of Invariants in LTL: Express critical system guarantees using Linear Temporal Logic. These invariants become our automated test oracles.
- Uniform execution environment for distributed tests: Build targeted test cases - partitions, bursts, bad peers - and use trace analysis to confirm whether invariants hold in each case.
- Link LTL Checks to Test Scenarios: Each test should come with a set of expected properties, expressed in LTL, and validated against the trace.
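As a strawman for the "Formalize the Trace Specification" step (not a proposal), here is what a minimal shared trace event shape could look like, expressed as Rust types; every field name here is an assumption, and a real specification would be language-neutral (e.g. a JSON or CBOR schema).

```rust
// Illustrative sketch of a shared trace event shape; not an agreed schema.

#[derive(Debug)]
struct TraceEvent {
    /// Wall-clock timestamp in nanoseconds since the Unix epoch.
    timestamp_ns: u128,
    /// Which implementation emitted the event, e.g. "cardano-node", "amaru", "dingo".
    node: String,
    /// Namespaced event kind, e.g. "chainsync.block_adopted".
    kind: String,
    /// Free-form key/value details, kept generic so every node can populate it.
    attributes: Vec<(String, String)>,
}

fn main() {
    let event = TraceEvent {
        timestamp_ns: 1_700_000_000_000_000_000,
        node: "amaru".to_string(),
        kind: "mempool.tx_submitted".to_string(),
        attributes: vec![("tx_id".to_string(), "abc123".to_string())],
    };
    println!("{event:#?}");
}
```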
Closing
"Tracing is how we understand Cardano's behavior today. With formal specifications and testable invariants, it's also how we ensure its correctness tomorrow - across all implementations."
"Let's build not just diverse nodes - but a unified way to prove they work. Tracing is the path. Let's walk it, test it, and trust it."
Coroutines/Generators vs. explicit state machines
Facilitator: Roland Kuhn
Location: Piano Lounge
Duration: 30min
Start: Tuesday, 09:30
We discussed the example of the chainsync handler in the new downstream server (i.e. responder) in Amaru. We agreed right away that a raw state machine written as a giant match statement on (state*input) is not the way to go. There are some Haskell implementations that already do that, but in Haskell a better way is to structure longer sequences of actions using monadic comprehensions, i.e. continuation passing style. In Rust, the best means for expressing such logic is async/await code, where an async function gets compiled by Rust into an enum with one state per .await point (in effect generating the giant match statement for the developer behind the scenes).
The important part is that all effects are captured by the API, either as monad constructors in Haskell or Future (the values that can be .await-ed) constructors in Rust. One very nice aspect in Haskell is that the intermediate states can be logged without any additional effort; in Rust, the generated Future cannot be logged, and the developer needs to manually add logging effects.
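To illustrate the duality, here is a toy Rust sketch of a two-step protocol written as an explicit state machine, with an equivalent async formulation shown only as a comment; the protocol and all names are made up, not the actual Amaru chainsync handler.

```rust
// Toy two-step handshake, written as the explicit enum state machine.
// An async fn with one .await per step compiles to essentially this enum.

enum Input {
    HelloAck,
    Data(String),
}

enum Output {
    SendHello,
    Done(String),
    Nothing,
}

// Explicit state machine: every intermediate state is a named, loggable value.
#[derive(Debug)]
enum Handshake {
    Start,
    AwaitingAck,
    AwaitingData,
    Finished,
}

fn step(state: Handshake, input: Option<Input>) -> (Handshake, Output) {
    match (state, input) {
        (Handshake::Start, None) => (Handshake::AwaitingAck, Output::SendHello),
        (Handshake::AwaitingAck, Some(Input::HelloAck)) => (Handshake::AwaitingData, Output::Nothing),
        (Handshake::AwaitingData, Some(Input::Data(d))) => (Handshake::Finished, Output::Done(d)),
        (s, _) => (s, Output::Nothing), // ignore unexpected inputs
    }
}

// The same flow as sequential async code; each `.await` point corresponds to
// one of the enum states above. (Needs an executor and an effect API providing
// send/recv, which are omitted here.)
//
// async fn handshake(io: &mut impl Io) -> String {
//     io.send_hello().await;   // ~ AwaitingAck
//     io.recv_ack().await;     // ~ AwaitingData
//     io.recv_data().await     // ~ Finished
// }

fn main() {
    let (s1, _) = step(Handshake::Start, None);
    let (s2, _) = step(s1, Some(Input::HelloAck));
    let (s3, out) = step(s2, Some(Input::Data("block header".into())));
    println!("final state: {s3:?}, got data: {}", matches!(out, Output::Done(_)));
}
```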
RPCs
Facilitator: Santiago Carmuega
Location: Piano Lounge
Duration: ~1h
Start: Monday, 16:15
Result
TODO: Santiago to add more
- C bindings?
- UTXO-RPC
- Blockfrost as a popular API
- JSON-RPC = EVM-style
- Some agreement that the RPC layer should be part of the spec
Discovering, Documenting and Educating the ledger and network behavior
Facilitator: Pi Lanningham
Location: Meeting Room 1
Duration: ~30min
Start: Monday, 16:30
Result
- Documentation feels sparse and has holes
- Motivation is a missing component
- Agda spec is impressive, but very challenging to understand
- Code as spec?
- If the spec is derived from code, do we not require that the specification contains implementation details?
- Generally a design document (can differ from what's shipped) and a spec (that defines what's shipped) seems important
- It should be the responsibility of each participant to document their code
- Cardano blueprint CI checks for broken links… Can this be useful for node development and documentation?
Antithesis Intro + Brainstorming + Test scenarios
Facilitator: Arnaud Bailly
Location: Conference room
Duration: ~2h
Start: Tuesday, 9:30
End: 10:45
Result
- Arnaud showed the antithesis interface
- https://github.com/cardano-foundation/antithesis holds the container definitions
- Reports are public, but behind hidden links
- How can we make this more available to the community? Have others join in exploring?
- Maybe through a kind of a working group
- If there are findings, we are conscious about security implications
- Also related to the idea of tartarus
- https://github.com/cardano-scaling/tartarus
- Could help extending the antithesis system-level fault injection with more adversarial actions
- Also mentioned jepsen https://jepsen.io/
- Looks interesting, but we should not rely too much on a commercial offering (lock-in)?
- The domain knowledge of adversarial fault injection (tactics) will remain within tartarus et al, while antithesis could (if it is good enough) help us in the reporting and debugging interfaces
- Would this even work with big networks of nodes (100+)?
- Maybe not, and it is more about something between bigger testnets and very focused local tests (which individual implementations do)
- The bigger adversarial network, which is rigged against you (= tartarus) would still be very useful to have!
- Also for people to demonstrate the viability of attacks! (eclipse, sybil, what can you do with 40% stake?, long rollbacks)
- Thought about the testing pyramid, from modules to systems, to networked systems, to full-scale networks
- Kind of matches software readiness levels (SRL) - works on the bench, in the lab, in a controlled environment, out in the wild
- Also: disaster recovery procedures - do they work? We should have DR drills on testnets every few months (e.g. on tartarus)
- Concretely with antithesis, opentelemetry support would simplify instrumentation
- How to stay in touch?
- Regular blog posts, at least in the first phase of evaluation: https://cardano-foundation.github.io/antithesis/kick-off-antithesis/
- Ask to share interesting test cases (maybe the IOE testing team has some?)
Property-based testing / state machine testing
Also includes: simulation testing (practical techniques)
Facilitator: Alexander Esgen
Location: One of the meeting rooms with a whiteboard
Duration: ~60min
Start: Tuesday, 10:00
End: 11:00
Result
- History on property-based testing by Stevan A
- Concurrent state machine testing based on "Linearizability: A Correctness Condition for Concurrent Objects" by Herlihy and Wing
- Used by Jepsen (at least by Knossos)
- "Testing Distributed Systems w/ Deterministic Simulation" by Will Wilson
- Maelstrom
- Very ad-hoc notes during the meeting
- Better: Writeup by Stevan: https://github.com/pragma-org/simulation-testing
Zero Knowledge and ledger state transition
Facilitator: Matthias Benkort
Location: Meeting room
Duration: ~30min
Start: Tuesday, 9:30
End: ~10:05
Result
- Discussed how Amaru core crates cross-compile to web-assembly & RISC-V, thus enabling the use of zero-knowledge frameworks to prove "arbitrary" elements on the ledger or the consensus.
- The primary use-case discussed for this is bridging with partner chains / side chains / L2s, as it could allow trustless portable proofs of either part of the ledger state or specific actions on that ledger state.
- We discussed the synergies with Mithril and light clients as well.
SPO working group feedback and feature discussion
Facilitator: Damien Czapla
Location: Meeting room
Duration: ~30min
Start: Tuesday, 10:02
End: 10:26
Result
- Shared list of current requests from SPOs met: Miro
- Next steps discussed with the people present: build a survey with a feature request; join the SPO call of IOG; encapsulate the results into the roadmap of Amaru
Incentivized testnet v2
Facilitator: Happened spontaneously
Location: Meeting room / Lunch room
Duration: ~15min
Start: Tuesday, 10:30
End: 10:45
Result
- Common realization and agreement that (a) testnet networks (PreProd & Preview) are rather expensive to run and (b) come with little incentives to do so.
- Yet, testnets are a useful source of data for validation and conformance, so the overall robustness of the network in a multi-node era will greatly benefit from better testnets.
- We discussed possibly introducing bounties linked to specific test conformance scenarios missing from the current testnets but that would be useful to observe.
- To make this attractive for both "bounty hunters" and for SPOs, we mentioned block producers involved in the creation of the test scenarios as part of the rewarded parties. This creates an incentive for SPOs.
- It was also mentioned that ideally, this should be created as an effort that is âneutralâ (i.e. not by one of the usual entities).
- Maybe within https://github.com/cardano-scaling/tartarus?
Rust software design session
Facilitator: Ad-hoc, Arnaud Bailly
Location: Piano Lounge
Duration: ~60min
Start: Tuesday, 12:00
End: Tuesday, 13:05
Result
- Discussed looking at some Rust code (consensus, chain selection?)
- Some common pattern in Rust: sans-IO: The secret to effective Rust for network services
- Discussed (again) various ways of writing state-machine code in Rust: direct approach (eg. match over state/transition), await/async, free monad style modelling, …
- Event sourcing the cardano ledger: separating validation from state updates could be useful to reconstruct the ledger state not only for the purpose of validating more transactions, but also to "understand the system state" for downstream consumers / clients (a sketch follows below)
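As a toy illustration of that last point, here is a sketch separating validation (which produces events) from the state update (a pure fold over events); the types and rules are placeholders, not the Cardano ledger.

```rust
// Event-sourcing sketch: validate(tx) describes changes as events,
// apply(events) is a pure fold any downstream consumer could reuse.

#[derive(Debug, Default)]
struct State {
    utxo_count: i64,
}

struct Tx {
    inputs: u32,
    outputs: u32,
}

#[derive(Debug)]
enum LedgerEvent {
    InputsSpent(u32),
    OutputsCreated(u32),
}

// Validation: checks the transaction against the current state and, if valid,
// describes the resulting changes as events without mutating anything.
fn validate(state: &State, tx: &Tx) -> Result<Vec<LedgerEvent>, String> {
    if (tx.inputs as i64) > state.utxo_count {
        return Err("spends more UTxOs than exist".to_string());
    }
    Ok(vec![
        LedgerEvent::InputsSpent(tx.inputs),
        LedgerEvent::OutputsCreated(tx.outputs),
    ])
}

// State update: indexers and clients could apply the same function to
// reconstruct state without re-running validation.
fn apply(mut state: State, events: &[LedgerEvent]) -> State {
    for e in events {
        match e {
            LedgerEvent::InputsSpent(n) => state.utxo_count -= *n as i64,
            LedgerEvent::OutputsCreated(n) => state.utxo_count += *n as i64,
        }
    }
    state
}

fn main() {
    let genesis = State { utxo_count: 10 };
    let tx = Tx { inputs: 2, outputs: 3 };
    let events = validate(&genesis, &tx).expect("valid tx");
    let next = apply(genesis, &events);
    println!("events: {events:?}, new state: {next:?}");
}
```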
Looking forward to Leios
Facilitator: Pi Lanningham
Location: Conference room
Duration: ~60min
Start: Tuesday, 12:00
Result
- Started earlier on-demand as we had an additional hour between reflection and lunch
- TODO: Pi or Sam have made some pictures / notes
Buying time for Leios
Facilitator: Matthias Benkort
Location: Conference room
Duration: ~30min
Start: Tuesday, 14:00
End: N/A
Result
- Explored what it could mean for Leios to preserve "backward-compatibility". Two ideas were explored:
- Ensuring close compatibility of Ranking Blocks (RBs), by "inlining IBs & EBs" inside locally stored and historical RBs. In essence, it's about erasing the details behind the construction of RBs and treating them *as if* they had been created as large Praos blocks from the start. The goal is to effectively make Leios an "ephemeral off-chain protocol" that only occurs at the tip of the system in order to produce blocks. Once produced and diffused, the details of the block construction could be erased.
- We noticed how preserving hashing between the construction format and the final "compiled" format could be challenging, but it seems doable using some transformation.
- Minor changes to Praos blocks might still be required (in particular, to preserve endorsement certificates).
- A second idea discussed was about making Leios "incremental" or opt-in, so that future implementations down the line (i.e. beyond Amaru, Dingo, Acropolis etc., who are already in the loop) could possibly take part in consensus without having to implement the whole beast.
- This seemed a lot more challenging, and something apparently considered at the beginning of the design discussions but dropped later because it was too hard to reconcile with the design.
- We discussed the idea of increasing the minimum inter-block time, to avoid the extra constraint on the system of being ready to produce two blocks back-to-back. This requires further discussion with researchers who can help put some of the considerations into equations -> S.N. said he knows who to put in the room.
- Briefly discussed CIP-150 ( https://github.com/cardano-foundation/CIPs/pull/993 ) about block compression. The idea is interesting but seems mostly promising for large chunks of data, and in the context of a very beefy machine. We however noted that an approach more tailored to our domain could definitely help to squeeze more data into the same block size.
Canonical certification with Mithril of Ledger state and Immutable files
Facilitator: Jean-Philippe Raynaud
Location: Piano Lounge
Duration: ~60min
Start: Tuesday, 14:00
Result
- 2 problems: serving canonical snapshots and forgetting history
- Discussed a common format for serving chain data, like parquet files; it needs to be easy to implement read/write
- Need to be deterministic, random access, sequential read
- Need to write the requirements -> trigger the CIP
- The ledger is not signed right now because of discrepancies in the snapshot production
- Some security concerns with timing attacks if time to make snapshot is predictable
- Can be mitigated with random jitter
- With UTxO-HD it gets more complicated
- There was never a reason to exchange ledger states before Mithril and other nodes
- Canonical ledger format?
- Not really time sensitive -> canonical format could be computed independently
- Dumping has to be first
- Partial/splittable in chunks?
- Parts (a rough sketch of these parts follows after this list):
- UTxO
- PParams
- Stake distribution mark/set/go
- Delegation map
- Treasury pot
- Epoch nonces (edge case)
- Rewards accounts
- Governance stuff
- Pool state
- …
- We could have extensions, eg. derived computations
- Need versioning of the format
- Unsigned metadata
- Is the format convenient or minimal?
- Any data thatâs redundant can be made invalid (more easily)
- Discussing how to produce data incrementally?
- Increased frequency of snapshots could lead to unsustainable download size
- Discussing whether itâs important to have a format close to LSM tree (which is how cardano-node w/ UTxO-HD will store it)
- Next steps:
- Write a (canonical) CDDL of a ledger state target format
- Write a tool to convert whatever cardano-node currently has to this new format
- See https://github.com/IntersectMBO/cardano-ledger/issues/4948
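To make the discussion of snapshot parts above a bit more tangible, here is a strawman grouping of those parts as Rust types; the actual format would be specified as CDDL/CBOR (see the linked issue), and every field choice here is an assumption rather than an agreed layout.

```rust
// Strawman only: the real format would be CDDL/CBOR (see the linked issue).
#![allow(dead_code)]

/// One possible grouping of the snapshot parts listed above.
struct CanonicalLedgerState {
    /// Format versioning was called out as a requirement.
    version: u32,
    utxo: Vec<(TxIn, TxOut)>,
    /// Opaque, CBOR-encoded protocol parameters.
    protocol_params: Vec<u8>,
    /// Stake distribution snapshots (mark/set/go).
    stake_distributions: MarkSetGo,
    delegation_map: Vec<(StakeCredential, PoolId)>,
    treasury: u64,
    /// Edge case mentioned in the session.
    epoch_nonce: [u8; 32],
    reward_accounts: Vec<(StakeCredential, u64)>,
    /// "Governance stuff" and pool state kept opaque for now.
    governance: Vec<u8>,
    pool_state: Vec<u8>,
}

struct MarkSetGo {
    mark: Vec<(PoolId, u64)>,
    set: Vec<(PoolId, u64)>,
    go: Vec<(PoolId, u64)>,
}

// Stand-ins for the real hash/credential types.
struct TxIn([u8; 32], u16);
struct TxOut(Vec<u8>);
struct StakeCredential([u8; 28]);
struct PoolId([u8; 28]);

fn main() {
    // Nothing to run; the point is only the shape of the data.
    println!("canonical ledger state sketch");
}
```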
Light clients
Facilitator: Santiago Carmuega
Location: Piano Lounge
Duration: ~60min
Start: Tuesday, 15:00
Result
- One potential way for a light client: get the data necessary to validate headers for the current epoch (stake distribution, header state (nonce), etc.; e.g. via Mithril), and then validate just headers and select the longest one (see the sketch below)
- Needs more research into exactly what guarantees doing this provides
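For illustration only, here is a very rough Rust sketch of that flow; every type and function is hypothetical, and whether this gives adequate guarantees is exactly the open question noted above.

```rust
// Rough light-client sketch: obtain the epoch's stake distribution and nonce
// (e.g. from a Mithril-certified snapshot), validate only headers, and pick
// the longest fully valid chain. All names are placeholders.

struct EpochContext {
    // Stake per pool, plus the epoch nonce needed to verify leadership.
    stake_by_pool: Vec<(PoolId, u64)>,
    epoch_nonce: [u8; 32],
}

struct Header {
    slot: u64,
    pool: PoolId,
    // VRF proof, signatures etc. elided.
}

#[derive(Clone, Copy, PartialEq)]
struct PoolId([u8; 4]);

// Placeholder for real header validation (VRF check against stake + nonce).
fn header_is_valid(ctx: &EpochContext, h: &Header) -> bool {
    let _ = (ctx.epoch_nonce, h.slot);
    ctx.stake_by_pool.iter().any(|(p, _)| *p == h.pool)
}

// Keep only fully valid candidate chains and select the longest one.
fn select_chain<'a>(ctx: &EpochContext, chains: &'a [Vec<Header>]) -> Option<&'a Vec<Header>> {
    chains
        .iter()
        .filter(|chain| chain.iter().all(|h| header_is_valid(ctx, h)))
        .max_by_key(|chain| chain.len())
}

fn main() {
    let pool = PoolId(*b"pool");
    let ctx = EpochContext { stake_by_pool: vec![(pool, 100)], epoch_nonce: [0; 32] };
    let chains = vec![vec![Header { slot: 1, pool }], vec![]];
    println!("selected chain length: {:?}", select_chain(&ctx, &chains).map(|c| c.len()));
}
```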
Simplifying the specs/behavior
Also includes: Forgetting the past / Conformance testing action plan
Facilitator: Nicolas Clarke / Matthias Benkort / Andre Knispel
Location: Conference room
Duration: ~60min
Start: Tuesday, 15:00?
Result
- Several operations in the node are performed "implicitly" (without any visible action on-chain) at epoch boundaries. That is the case, for example, for governance proposal refunds. We discussed making refunds of governance action proposals and SPO deposits with transactions instead of at the epoch boundary
- Should write a CIP for this, probably Andre and Matthias
- We also discussed making enactment of governance actions explicit (possibly via the same refund transaction), to allow indexers down the line to access governance outcome without the need to implement the entire governance protocol.
- Simplify the multiple delegation & registration certificates -> Also CIP
- Hash some information contained in the ledger state, most notably the stake distribution, and include it in blocks, or somehow transmit this information off-chain
- This allows for dynamic sanity checks for alternative nodes that complement "historical conformance tests"
- "Forgetting the past" could be interpreted in two ways:
- Getting rid of the legacy code that is validating and processing old eras.
- Not keeping the chain history around in block-producing nodes
- The latter is more controversial and would likely require some "arctic vault" program to ensure availability of the historical data if necessary.
- The former has more consensus, and has good potential to enable faster development down the line. The only concerns were of a "philosophical nature".
Make existing testing tools useful for all node implementations
Also includes: review existing tools for testing cardano-node -> look at code
Facilitator: Arnaud Bailly
Location: Piano Lounge
Duration: ~1h30min
Start: Tuesday, 16:00
Result
- https://github.com/input-output-hk/ouroboros-consensus/blob/753c40ab9e76f9f389ad35c59feee2f68dc24a68/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/BlockTree.hs#L143
- Generator for BlockTree: https://github.com/input-output-hk/ouroboros-consensus/blob/753c40ab9e76f9f389ad35c59feee2f68dc24a68/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/GenChains.hs#L98
- 2 approaches possible:
- Use Haskell code to write CLI tools to generate headers/blocktree
- Write adversarial nodes using header/chain generation and adversarial strategies (this one plays nicely with Tartarus/Antithesis)
- Tools part of consensus that could be useful:
- Db-analyser: runtime/benchmarking over historical chains, allows finding the intersection between 2 immutable DBs
- Db-synthesizer: generate a chain out of node credentials, blocks are always empty, could be used to inject arbitrary txs
- Db-truncater: remove suffix of the chain up to some slot
- Could be used for disaster recovery, eg. rollback everyone to some âgoodâ point and then synthesize some good prefix
- Immdb-server: light node speaking chain sync / block fetch but does not maintain ledger state, useful for benchmarking syncing
- Db-immutaliser: moves a chain from volatile to immutable -> useful for analysing the volatile part of the chain, select a particular chain
- Example idea:
- Use db-analyser to generate a ledger state at some point in your volatile chain, then use db-immutaliser
- Cardano-streamer
Risks - if Cardano doesn't exist in 5y, what killed it?
Facilitator: Adam Dean (originally Pi Lanningham)
Location: Lunch room
Duration: 30min
Start: Tuesday, 16:00
End: 17:07
Result
- Create a document about this? (someone suggested that on the agenda setting)
- Adam took notes in his notebook
- Governmental or regulatory threats were considered the largest risks
- On-chain governance also presents a risk of stalling evolution
- Running out of Treasury/Reserves without attracting significant adoption
- Side/Partner Chain eclipses/drains value from Cardano
- Running out of ketamine
Ethereum/tests experience report (Ori Pomerantz)
Facilitator: Sebastian Nagel
Location: Conference room
Duration: 1 hour
Start: Wednesday, 9:30
Result
- Ori introduced two types of tests that are done in ethereum/tests: State and Blockchain tests
- Each contains a pre-state, txs and a post-state
- Systems under test are typically execution clients like Geth and Reth
- Test driver can re/set the state before tests
- How much confidence does passing these tests give you?
- Quite high, if a client passes these tests it's usually accepted as good enough
- Whatâs the social process around those tests?
- Who writes them or gets to decide which tests are there?
- Quite open and additional tests are usually well accepted
- Hosted by the Ethereum Foundation, but there is no strict endorsement (requirement) to pass them
- Ethereum foundation pays for the maintenance and infrastructure (NB: they also pay for Geth development)
- Performance concerns are not in scope of these tests right now
- Any regrets?
- LLL (Lisp-like language) is maybe not the most approachable
- LLL was part of the original ethereum paper
- YUL seems to be the more modern way to express test scenarios
- The consensus level tests are not in scope (ethereum/tests is "post consensus") and other projects like https://github.com/ethereum/consensus-spec-tests seem to be covering that
- Testing error cases?
- Yes, and even errors (failed transactions) would change balances
- Any error specification besides balances?
- A reverting contract could be detected and be persisted + asserted
- To what specification are tests linked?
- To the yellow paper https://ethereum.github.io/yellowpaper/paper.pdf
- This basically covers everything execution related
- No explicit links between tests or checking coverage
- Are there tests about re-entrancy?
- Testing for things that should not work, but work -> this is usually bad
- What is in the state of ethereum?
- For all addresses: balances, code, storage and nonce
- And other things relevant for the consensus layer
- Storage
- 256 bit key -> 256 bit value
- counts, names, mappings
- Values bigger than 256 bits get compiled into multiple entries
- How are implementations coordinating on features?
- e.g. the clients listed here https://clientdiversity.org/
- All-developers call that discusses EIPs (tens of attendees)
- Geth is usually able to veto (and force others to slow down)
- Different for Reth where they are often expected to catch up
- https://ethereum-magicians.org/ is an exchange forum used by the developers
- Who sets the date of a hard-fork?
- Consensus between developers
- Is it pre-scheduled? No
- Staged update of testnets one after another (at least two)
- How is it incentivized? -> Self interest of client implementations
- EIPs contain all the information and eventually would be covered by specs and tests
- Is client diversity incentivized?
- Not exactly sure, but consensus seems to be that it should be between 33% and 66%
- Any major discoveries / incidents covered by the ethereum/tests
- Diverging fees between clients was discovered on mainnet and only later covered by a conformance test (by increasing coverage of keys)
- Feature density of hard-forks? Has it happened that it was scoped down because of multiple clients?
- For how long was there only one client? Never, there were already 2-3 when the whitepaper was written.
- Was seen as a way to make sure that the clients behave consistently with the yellow paper
- Whatâs the typical breakage of DApps on a hard-fork?
- Ideally zero, but happened in the past (0xEF example)
- What about the API level?
- Inter-node can be updated between hard-forks
- RPC (client interface) is very rigid = tried to never break, only to extend
- Are there different types of clients?
- Not everyone will have the deposit for running a validator, so yes quite naturally
- Typically using the same codebase as validator nodes
- Are there rollbacks on Ethereum?
- Reorgs rarely happen
- Finality after 2 epochs of 32 blocks ~= 13 minutes (at 12 s per slot, 2 × 32 × 12 s ≈ 12.8 minutes)
- DApps would typically wait for a couple of minutes to be really sure
Future inter-node forum / Are CIPs still the way to standardize Cardano?
Facilitator: Nick Clarke
Location: Conference room
Duration: 30min
Start: Wednesday, 10:45
End: 11:10
Result
- CIPs are good for bigger conversations and gathering feedback on new development
- Was very useful for the plutus changes for example
- Is a categorization of protocol changes and conventions useful? i.e. ERC equivalents
- Keeping up with CIPs is an issue
- This could help to filter and only focus on relevant parts
- As opposed to keeping track, there is also categorization to get the right reviewers
- Any inter-node forum needs to be on "neutral ground"
- The Intersect Discord, for example, is not that -> hard to join
- Would be useful to have focused CIP editor meetings -> this currently happens partly in working groups (created originally for/with Intersect)
- How to continue conversations we have here
- Some might be CIPs
- But how to follow up on other things?
- A forum format with threads sounds useful
- Github PR/discussions is what we have now, works but can get messy
- There is the cardano forum
- Should we just add a "node development" forum? https://forum.cardano.org/c/developers/29
- Github also has discussions, e.g. https://github.com/cardano-scaling/cardano-blueprint/discussions
- Most importantly, the findings should be permanent, searchable and easy to catch up on
Types of node implementations + their requirements
Facilitator: Damien Czapla
Location: Lunch room
Duration: 30min
Start: Wednesday, 10:30
End: 11:05
Result
- Agreement on the purpose of categorizing the node implementations: having a common set of requirements and a battery of tests that you have to fulfill in order to call yourself an "X node"
- Here is the current understanding that came out of the session
-> Next steps: make this a recurring topic to be discussed during the inter-node governance/meetings/design decision alignments
Action items
- SN: Create a place to continue conversation about node diversity
- Session: Future inter-node-forum
- A place to continue the conversation about where to continue the conversation: discord.gg/sn2HMm8eYs
- Publish the workshop report in a place where we could continue the conversation?
- MB: Draft a CIP on dynamic fingerprint validation in block headers
- Session: Simplifying the specs/behavior
- JM: Draft a CIP on cleaning up certificates
- MB: Draft a CIP on explicit governance action deposit returns
- Session: Simplifying the specs/behavior
- PC: Write a (canonical) CDDL of a ledger state target format
- Similar, but not exactly what is used today by the Haskell cardano ledger
- See also CDDLs for ledger state snapshots · Issue #4948 · IntersectMBO/cardano-ledger · GitHub
- Session: Canonical certification with Mithril
- Canonical ledger state - HackMD
- JPR: Draft a CIP for the proposal of a canonical blocks exchange format, canonical ledger state exchange format and canonical time of creation of the ledger state snapshot (for Mithril certification in the context of fast bootstrapping a node)
- Include the CDDL for the ledger state format by Paul Clark
- Session: Canonical certification with Mithril
- SC: Create an RFC on what a light client is and seek comments in a (soon to exist) node diversity forum
- Session: Light clients
- SN: Create an N2C API specification to capture what is available today
- AB: Implement a generator that can produce blocktree in a standard format to test correctness of consensus code
- Session: Tools for testing nodes
- AB: Implement an "Adversarial node" that simulates adversarial behaviour, using the Blocktree generator, and connects to other nodes
- Session: Tools for testing nodes
- AB: Start building a test cases DB within Tartarus project collecting interesting/significant test scenarios
- Session: Antithesis intro
- AB: Refactor Amaru consensus to ease implementing deterministic testing
- Session: Rust design session and others
Conclusion
Judging from the feedback in the closing sessions, the workshop was a resounding success and everyone enjoyed the open exchange of ideas as organizational affiliations became blurred during lively discussions. One participant's take-away message, for example, was:
Node diversity is not only inevitable, it's beneficial. By collaborating with various teams and entities we solidify our protocols and specifications, our documentation, and start doing things that are right for everyone, not just our implementation.
One of the last sessions was about a "future inter-node forum": essentially, where to continue the discussion about node diversity, and how the momentum of the workshop can best be utilized. At first we discussed a bit in a dedicated Discord channel, but there are requirements for discussions to be easy to catch up on, to provide email notifications, and to be searchable.
This made us want to evaluate the Cardano forum as a potential venue for future cardano node development discussions. Posting this report and referring readers here is the first step to this evaluation and an improved level of communication between the individual node implementation teams.