
Jean-Philippe Raynaud

High level overview

The Mithril team worked on creating the new distribution pre-release 2445.0-pre. Additionally, they completed recording the aggregator metrics in the event store and implemented stable support for Cardano node v10.1. The team also kept exploring solutions for signer registration when multiple aggregators are running on a Mithril network.

Finally, they implemented a nightly workflow in the CI to check backward compatibility with previous distributions and started implementing a new status route in the aggregator.

Low level overview

  • Completed the issue Record aggregator metrics in event store #2023
  • Completed the issue Refactor protocol parameters namings in signer/aggregator #1966
  • Completed the issue Docker nightly builds in GitHub Actions #2026
  • Completed the issue Nightly backward compatibility testing with e2e tests #2027
  • Completed the issue Access registered signers for latest epoch in explorer #1689
  • Completed the issue Remove pending certificate from explorer #2025
  • Completed the issue Upgrade to Cardano 10.1.1 #2069
  • Completed the issue Create view for registrations monitoring in aggregator #2067
  • Completed the issue Update Cardano CLI calls to new interface #2072
  • Worked on the issue Release 2445 distribution #2030
  • Worked on the issue Create a new /status route in aggregator #2071
  • Worked on the issue Remove network field from CardanoDbBeacon #1957
  • Worked on the issue Explore Signer Registration Solutions #2029

Kostas Dermentzis

Michael Karg

High level summary

  • Benchmarking: Governance action / voting benchmarks on Node 10.0; performed PlutusV3 RIPEMD-160 benchmarks.
  • Development: Governance action workload fully implemented; generator-based submission is ongoing work.
  • Tracing: New tracing system production ready - cardano-tracer-0.3 released; work advancing on typed-protocols-0.3 bump and metrics naming.

Low level overview

Benchmarking

We've run and analyzed the new voting workload on Node 10.0. This workload is a stream of voting transactions submitted on top of the existing value workload from release benchmarking. The delta in the comparison demonstrates the "performance cost of voting" in the Conway ledger era. The workload itself orchestrates 10,000 DReps overall, who vote on up to 5 governance actions simultaneously. We made sure these are mutually independent proposals, that vote tallying occurs, and that the actions get ratified and enacted (and hence removed from the ledger). Then, voting moves on to the next actions - keeping the number of actions needing vote tallying stable over the benchmark. We could observe a very slight performance cost of voting; it's deemed to be a reasonable one given the stress we've put the system under.
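The steady-state structure of that workload can be sketched as a toy simulation (illustrative Python only, not the actual benchmark harness; the DRep count and the cap on concurrent actions come from the text above, everything else - names, the round model, the toy run length - is invented):

```python
# Toy model of the voting workload shape: a fixed pool of DReps votes on
# up to MAX_ACTIVE concurrent, mutually independent governance actions;
# once an action has collected all votes it is "ratified and enacted"
# (removed) and a fresh action takes its slot, keeping the tallying
# load constant over the run.
from collections import deque

N_DREPS = 10_000      # DReps driven by the workload (from the text)
MAX_ACTIVE = 5        # actions voted on simultaneously (from the text)
TOTAL_ACTIONS = 20    # length of this toy run (invented)

def run_workload():
    pending = deque(range(TOTAL_ACTIONS))   # actions not yet proposed
    active = {}                             # action id -> votes received
    enacted = []
    active_history = []                     # actions in tally at each step
    while pending or active:
        # top up to MAX_ACTIVE concurrent actions
        while pending and len(active) < MAX_ACTIVE:
            active[pending.popleft()] = 0
        active_history.append(len(active))
        # one "round": every DRep votes once on each active action
        for action in list(active):
            active[action] += N_DREPS
            if active[action] >= N_DREPS:   # tally complete: enact, remove
                enacted.append(action)
                del active[action]
    return enacted, active_history

enacted, history = run_workload()
```

Because each enacted action is immediately replaced from the pending pool, the number of actions undergoing vote tallying stays constant at MAX_ACTIVE until the pool drains - mirroring the stable tallying load described above.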

The results can be found here along with those from release benchmarks.

Furthermore, we've developed and run a new Plutus benchmark targeting the RIPEMD-160 internal. We've compared the resulting performance observations against other Plutus workloads - both memory-constrained and (same as RIPEMD-160) CPU-constrained. We have concluded that there are no performance risks to that algorithm in PlutusV3, given existing execution budgets, and that it's consistently priced with respect to other CPU-intensive internals.

Development

The voting workload is currently implemented using decentralized submission via cardano-cli on each of our cluster machines. It has proven reliable - and scalable, at least to some extent. We're already working on improvements that reduce the (very slight) overhead of using the CLI for submission. Additionally, we're aiming for a linear performance comparison when submitting twice the number of votes per transaction at the same TPS rate - forcing double the work for vote tallying.
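Holding the TPS rate fixed while varying transaction contents implies pacing submissions against a wall-clock schedule rather than sending back-to-back. A minimal sketch of such fixed-rate pacing (a hypothetical helper, not the tx-generator or our CLI wrapper; `submit` stands in for the real cardano-cli invocation):

```python
# Fixed-TPS pacing sketch: each transaction i has an ideal send time of
# start + i / tps; we sleep only until that deadline, so late starts are
# caught up without exceeding the target rate on average.
import time

def paced_submission(txs, tps, submit, clock=time.monotonic, sleep=time.sleep):
    """Submit each tx at a fixed transactions-per-second rate."""
    interval = 1.0 / tps
    start = clock()
    for i, tx in enumerate(txs):
        target = start + i * interval   # ideal send time for tx i
        delay = target - clock()
        if delay > 0:
            sleep(delay)
        submit(tx)

# Dry run with a fake clock so no real time passes:
sent, slept, now = [], [], [0.0]
paced_submission(
    ["tx0", "tx1", "tx2"], tps=2, submit=sent.append,
    clock=lambda: now[0],
    sleep=lambda d: (slept.append(d), now.__setitem__(0, now[0] + d)),
)
```

The `clock`/`sleep` injection points exist only to make the sketch testable; at 2 TPS the helper spaces the three submissions 0.5 seconds apart.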

Implementation of that workload using the centralized (and much better scalable) tx-generator submission service is still ongoing.

Tracing

Metrics naming is currently receiving a last round of consistency checking, so that it's aligned as closely as possible between the legacy and new tracing systems. In the process, we're addressing aspects of documentation, and incorporating feedback to define a few metrics in the new system that were previously present in the legacy one only.

For migrating to the new typed-protocols-0.3 API, two of the new tracing system's packages are affected. The work for ekg-forward-0.7 is completed and merged to master - yet to be released on CHaP. Work on the second package, trace-forward, is ongoing.

We've finally released cardano-tracer-0.3, which incorporates all features, enhancements and fixes that have been reported on here over the past months. This release marks 100% production readiness of the new tracing system. We're focusing now on making documentation and example scripts and configs yet more user-friendly for community rollout. We're very much looking forward to receiving feedback - and have time and space reserved to address it, as well as to provide initial support for the migration away from the legacy system.

Damian Nadales

High level summary

  • Investigated performance improvements in mempool snapshotting in recent node benchmarks and discussed potential further improvements.
  • Started the review of the UTXO-HD feature branch after all the issues have been resolved.
  • Published io-classes-extra, which hosts concurrency utilities that were extracted from the consensus repository.
  • Elaborated the plan for the last quarter of 2024. You can reach out to our Discord channel for any comments or suggestions.
  • In the context of UTXO-HD, Well-Typed presented another LSM-tree milestone. The implementation includes incremental merges, which prevent substantial spikes in resource usage (CPU, disk, memory), and duplicating table handles, which is crucial for efficiently representing sequences of ledger states. The test coverage of the LSM-tree library was improved as well.
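The incremental-merge idea behind that milestone can be illustrated with a toy sketch (not the Well-Typed lsm-tree API; all names here are invented): rather than merging two sorted runs in one long pause, the merge advances a bounded number of steps per operation, so the cost is spread out and resource usage stays flat instead of spiking.

```python
# Toy incremental merge of two sorted runs: callers pay a small, fixed
# number of "credits" per step instead of one large merge pause.
class IncrementalMerge:
    def __init__(self, run_a, run_b):
        self.a, self.b = iter(run_a), iter(run_b)  # sorted input runs
        self.next_a = next(self.a, None)
        self.next_b = next(self.b, None)
        self.out = []                              # merged output so far

    def done(self):
        return self.next_a is None and self.next_b is None

    def step(self, credits):
        """Advance the merge by at most `credits` output elements."""
        for _ in range(credits):
            if self.done():
                break
            if self.next_b is None or (self.next_a is not None
                                       and self.next_a <= self.next_b):
                self.out.append(self.next_a)
                self.next_a = next(self.a, None)
            else:
                self.out.append(self.next_b)
                self.next_b = next(self.b, None)

merge = IncrementalMerge([1, 3, 5], [2, 4, 6])
while not merge.done():
    merge.step(2)        # bounded work per operation, no big pause
```

Each `step(2)` call emits at most two merged elements; after three such calls the runs are fully merged, with the same result a one-shot merge would produce.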

Noon van der Silk

High-level summary

These last few weeks have seen us spend some time on internal planning and focus hard on incremental commits. We've made good progress on the on-chain validators and associated tests; we continue with this work. We are also beginning to tackle partial fanout by making some small steps based on the ongoing work of Thomas and others.

What did the team achieve?

  • Small cleanup as part of our first group knowledge-sharing session #1714
  • Progress on the validators and tests for incremental commits #1715, #1664

What's next?

  • Continued work on incremental commits #199
  • Begin work on partial fanout #1468
  • Investigate options for customised ledger in a Hydra Head #1727
  • Continue to support Hydra Doom
  • Continue to plan the 0.20.0 release