
· 2 min read
Alexey Kuleshevich

High level summary

As part of tackling tech debt, we have worked on improving the entry point interface that consensus uses to interact with the ledger. Besides that, we restructured some parts of the ledger state representation to make a more type-safe distinction for changes introduced in the Conway era. We have also implemented an alternative way of deserializing types that live on chain, which, once integrated into downstream components, will allow us to have faster and more accurate decoders.
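As a rough illustration of the decoding change, here is a self-contained toy sketch (with hypothetical names, not the actual cardano-ledger-binary API): Annotator-based instances yield a value only once the original byte span is handed back in, whereas direct DecCBOR-style instances build the value, including any cached serialization, in a single pass.

```haskell
{-# LANGUAGE DeriveFunctor #-}
-- Toy sketch only: these names are simplified stand-ins for the real
-- machinery in cardano-ledger-binary.
module AnnotatorSketch where

import qualified Data.ByteString.Char8 as BS

-- The original bytes of the value being decoded; Annotator-style decoding
-- needs them in a second step to finish constructing the value.
newtype FullBytes = FullBytes BS.ByteString

-- "A value, once you also hand me the raw bytes."
newtype Annotator a = Annotator { runAnnotator :: FullBytes -> a }
  deriving Functor

-- A toy "transaction" that, like real ledger types, caches its own serialization.
data Tx = Tx { txField :: Int, txBytes :: BS.ByteString }

-- Annotator-style decoding: two stages, and the raw input must be kept
-- around and threaded back in after decoding.
decodeTxAnnotated :: BS.ByteString -> Annotator Tx
decodeTxAnnotated input =
  let field = BS.length input               -- stand-in for real CBOR decoding
  in Annotator (\(FullBytes raw) -> Tx field raw)

-- Direct decoding: the cached bytes are captured while decoding, in one pass.
decodeTx :: BS.ByteString -> Tx
decodeTx input = Tx (BS.length input) input

main :: IO ()
main = do
  let raw = BS.pack "example-tx"
  print (txField (runAnnotator (decodeTxAnnotated raw) (FullBytes raw)))
  print (txField (decodeTx raw))
```

The point of the second style is that the decoder no longer depends on retaining and re-traversing the serialized input, which is what enables the faster, more accurate decoders mentioned above.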

Low level summary

Features

  • pull-4907 - Remove no longer needed workarounds
  • pull-4890 - Invert mempool
  • pull-4861 - Convert CertState to a type family
  • pull-4846 - Non-Annotator DecCBOR instances
  • pull-4903 - Switch to using TxCert type family
  • pull-4889 - Change ApplyBlock interface
  • pull-4905 - Move EraGov interface into cardano-ledger-core
  • pull-4895 - Add ToCBOR/FromCBOR instances for Genesis config types

Testing

  • pull-4865 - Enable Imp conformance for GOV
  • pull-4887 - Add Imp fixup for collateral return txout
  • pull-4897 - Rename ImpTest helpers
  • pull-4908 - Add type parameter to KeySpace and GenEnv

Infrastructure and releasing

· 2 min read
Jean-Philippe Raynaud

High level overview

This week, the Mithril team continued implementing incremental certification of the Cardano database, focusing on feature stabilization and production readiness. They also worked on supporting the upcoming Cardano node v.10.2 and documenting the certification process for each certifiable data type.

Finally, the team completed cleaning up legacy code from the 'Thales' era and started working on a slave mode for the aggregator signer registration.

Low level overview

  • Completed the issue Support evolving cloud artifact locations type to avoid client breaking change #2293
  • Completed the issue Build client WASM fails with Rust 1.85 #2325
  • Completed the issue Cleanup legacy code following Pythagoras era activation #2316
  • Worked on the issue Enhance artifact structure for Incremental Cardano DB #2291
  • Worked on the issue Enhance computation of required disk space for Incremental Cardano DB in client CLI #2292
  • Completed the issue Implement an example for Incremental Cardano DB #2330
  • Completed the issue Document Cardano transactions signature and proving in website #1700
  • Completed the issue Document Cardano stake distribution signature in website #2297
  • Completed the issue Document Cardano full database signature in website #2298
  • Completed the issue Document Cardano incremental database signature in website #2331
  • Worked on the issue Hydra CI fails with OpenSSL error on Linux x86_64 runners #2295
  • Worked on the issue Upgrade to Cardano 10.2 #2333
  • Worked on the issue Implement a slave mode for the signer registration in the aggregator #2334

· 4 min read
Michael Karg

High level summary

  • Benchmarking: Plutus memory budget scaling benchmarks; UTxO-HD benchmarks, leading to a fixed regression; Genesis benchmarks.
  • Development: Ouroboros Genesis and UTxO-HD adjustments in workbench; Maintenance tasks.
  • Infrastructure: Removing outdated deployments; Haskell profile definition merged; workbench simplification ongoing.
  • Tracing: C library development ongoing; Feature implementation according to community feedback; test removal of the old system.

Low level overview

Benchmarking

We've run and analyzed scaling benchmarks of Plutus execution budgets. In this series of benchmarks, we measured the performance impact of changes to the memory budgets (both transaction and block). We observed an expected, and reasonable, increase in certain metrics only. Furthermore, we've shown this increase to be linearly correlated to the budget raise. This means that when exploring the headroom of those budgets, the performance cost for the network is always predictable. The benchmarks serve as a base for discussing potential changes to those budgets in Mainnet protocol parameters.

When building a performance baseline for UTxO-HD, we were able to locate a regression in its new in-memory backing store, LedgerDB V2. We created a local reproduction of that for the Consensus team, who were able to successfully address the regression. A corresponding benchmarking report will be published on Cardano Updates.

Furthermore, we've performed benchmarks with the Ouroboros Genesis feature enabled and compared them to the release benchmark baseline. We could not detect any performance risk to the network during "normal operations", i.e. when all nodes are caught up with the chain tip.

Development

During the course of building performance baselines for Ouroboros Genesis and UTxO-HD, we've developed various updates to the performance workbench to correctly handle the new Genesis consensus mode, as well as adjustments to the latest changes in the UTxO-HD node.

Additionally, we built several small quality-of-life improvements for the workbench, as well as investigated and fixed an inconsistent metric (Node Issue #6113).

Infrastructure

The recent maintenance work also extended to the infrastructure: We've removed the dependency on deprecated environment definitions in iohk-nix by porting the relevant configurations over to the workbench. This facilitates a thorough cleanup of iohk-nix by the SRE team.

As the Haskell package defining benchmarking profiles has been merged, and all code it replaces has been successfully removed, we're now working hard on simplifying the interaction between the workbench and nix. This mostly covers removing redundancies that have lost their motivation, applying both to how the workbench calls itself recursively multiple times and to how (and how many) corresponding nix store paths are created when evaluating derivations. This will both enhance maintainability and result in much lighter workbench builds locally, but especially on CI runners.

Tracing

Work on the self-contained C library implementing trace forwarding is ongoing. As forwarding is defined in terms of an OuroborosApplication, it's non-trivial to re-implement the features of the latter in C as well - such as handshake, version negotiation, and multiplexing. However, for the intended purpose of said library, it is unavoidable.

We've also started working on a new release of cardano-tracer, the trace / metrics consuming service in the new tracing system. This release is geared towards feature and change requests we've received from the community and found to be very valuable feedback. Having a separate service to process all trace output enables us to react much more quickly to this feedback and to decouple delivery from the Node's release cycle.

Last but not least, we're doing an experimental run on creating a build of Node with the legacy tracing system removed. As the system is deeply woven into the code base, and some of the new system's components keep compatibility with the old one, untangling and removing these dependencies is a large endeavour. This build serves as a prototype to identify potential blockers, or decisions to be made, and eventually as a blueprint for removing the legacy system in some future Node release.

· One min read
Damian Nadales

High level summary

  • Added a document that discusses ticking and how it is used within the Consensus layer (#1385). The rendered version of this document can be accessed on our documentation page.
  • The benchmarks for the UTXO-HD version of Node with the in-memory backend confirmed that its resource usage is on par with the baseline version of the Node. There is a slight decrease in CPU usage (-9%) and a slight increase in memory consumption (+3%).
  • Fixed the mempool snapshotting regression in the UTXO-HD branch (from +185% to -21%) (#1382).
  • Added a Consensus section to the Cardano Blueprints (#7).
  • Held the technical working group meeting. The recording can be accessed using this link. In particular, the importance of the KES agent and its roadmap were discussed during this meeting.

· One min read
Noon van der Silk

High-level summary

The team is very excited to have the Hydra explorer up and running again, now observing over 1,000 heads across all networks and versions! Memory improvements and network resilience continue to be our focus.

What did the team achieve?

  • Fixed the hydra-explorer to track multiple versions of Hydra #1282
  • Made progress on the etcd-based network stack #1720
  • Fixed bug around malformed party information crashing a Head #1856
  • Progress on reduced memory footprint for running a Hydra Node #1618

What's next?

  • Continue to work on memory usage enhancements #1618
  • Continue working on new networking stack #1720
  • Start work on a new approach to "getting stuck" problems: resetting to a previous snapshot #1858