· 2 min read
Jean-Philippe Raynaud

High level overview

This week, the Mithril team continued working on a prototype 'slave' mode of the aggregator to support signer registration across multiple aggregators. They also focused on implementing incremental certification of the Cardano database, emphasizing feature stabilization, production readiness, testing, and optimization. Additionally, they worked on signing ancillary files in the Cardano database snapshots with an IOG key.

Finally, the team fixed failing builds in the Hydra CI and kept working on support for the upcoming Cardano node v10.2.

Low level overview

  • Completed the issue Implement a slave mode for the signer registration in the aggregator #2334
  • Completed the issue Add integration test for the 'slave' mode for the signer registration in the aggregator #2335
  • Completed the issue Optimize artifact downloads when restoring an Incremental Cardano DB #2327
  • Completed the issue Document Incremental Cardano DB usage in the clients #2329
  • Completed the issue Use consistent naming in the client Cardano database APIs #2248
  • Completed the issue Create a Code ADR record in the repository #2342
  • Completed the issue Hydra CI fails with OpenSSL error on Linux x86_64 runners #2295
  • Worked on the issue Sign ancillary files with IOG key #2362
  • Worked on the issue Compress the digests file uploaded on GCP for Incremental Cardano DB #2356
  • Worked on the issue Database migration checks minimum node version if next migration is squashed #2357
  • Worked on the issue E2e tests adaptation for multiple aggregators #2361
  • Worked on the issue Upgrade to Cardano 10.2 #2333

· 3 min read
Michael Karg

High level summary

  • Development: New benchmark epoch timeline using db-sync; raw benchmark data now with DB storage layer as default - enabling quick queries.
  • Infrastructure: Merged workbench 'playground' profiles - easing benchmark calibration.
  • New Tracing: Plenty of new features based on community feedback - including a new direct Prometheus backend; work begun on untangling system dependencies.
  • Community: Participation in the first episode of the Cardano Dev Pulse podcast.

Low level overview

Development

To keep a history of comparable benchmarks, it's essential to have an accurate timeline of mainnet protocol parameter updates by epoch. These updates describe the environment in which specific measurements took place, and are thus tied inherently to the observation process. Additionally, to reproduce specific benchmarking metrics from the past, our performance workbench has the capability to "roll back" those updates and run a benchmark under the protocol parameters of any chosen epoch. Instead of maintaining this epoch timeline by hand, we've now created an automated way to extract, using db-sync, all key epochs that applied parameter updates. This approach will prove both more robust and lower in maintenance overhead.
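To illustrate the idea, here is a minimal sketch (not the workbench's actual implementation) of such an extraction against a local cardano-db-sync PostgreSQL instance. It assumes the standard db-sync schema (the epoch_param table, the conventional cexplorer database name) and compares only a few representative parameter columns; a real extraction would cover all of them.

```haskell
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Database.PostgreSQL.Simple

main :: IO ()
main = do
  -- Connection string is an assumption; adjust to your db-sync instance.
  conn <- connectPostgreSQL "host=localhost dbname=cexplorer user=cardano"
  -- db-sync's epoch_param table stores the protocol parameters in effect
  -- for each epoch; an epoch whose row differs from its predecessor's
  -- marks a parameter update. Only a few representative columns are
  -- compared here for brevity.
  epochs <- query_ conn
    "SELECT e.epoch_no FROM epoch_param e \
    \JOIN epoch_param p ON p.epoch_no = e.epoch_no - 1 \
    \WHERE ROW(e.min_fee_a, e.min_fee_b, e.max_block_size, e.max_tx_size) \
    \ IS DISTINCT FROM \
    \ ROW(p.min_fee_a, p.min_fee_b, p.max_block_size, p.max_tx_size) \
    \ORDER BY e.epoch_no"
  mapM_ (\(Only epochNo) -> print (epochNo :: Int)) epochs
```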

Furthermore, the new DB storage backend for raw benchmarking data in locli is now set to be the default for the performance workbench. Apart from cutting down analysis time for a benchmarking run and reducing the required on-disk size for archiving, this enables the new (still under development) quick queries into raw performance data.

Infrastructure

When creating the Plutus memory scaling benchmarks, we developed so-called 'playground' profiles for the workbench. These make it easy to change individual profile parameters dynamically, build the resulting benchmark setup (including Plutus script calibration), and observe the effect in a short local cluster run. Applying such changes to established profiles is strictly forbidden, as it would put comparability with past benchmarks at risk. By introducing this separation, we keep that safety guarantee while still relaxing it for the development cycle only.

New Tracing

We've been extremely busy implementing new features and optimizations for the new tracing system, motivated by the feedback we received from the SPO community. This includes:

  • A brand new backend that allows for Prometheus exposition of metrics directly from the application - without running cardano-tracer and forwarding to it.
  • A configurable reconnection interval for the forwarder to cardano-tracer.
  • An always up-to-date symlink pointing to the most recent log file in a cardano-tracer log rotation.
  • Optimizations in metrics forwarding and trace message formatting, which should lower the base CPU usage, at least in low congestion scenarios.

All those will be part of the upcoming Node 10.3 release.

Currently, the cardano-tracer service still depends on the Node for a few data type definitions. We're working on a refactoring to untangle this dependency. This will allow the service to be built independently of the Node - simplifying setups where other processes and applications forward observables to cardano-tracer and benefit from its features.

Community

We had the opportunity to talk about benchmarking and performance impact of UTxO-HD on the very first episode of the Cardano Dev Pulse Podcast (YouTube). Thank you Sam and Carlos for having us!

· One min read
Noon van der Silk

High-level summary

The team are very excited to have merged the etcd-based networking stack into master (not yet released). We would appreciate people testing this and reporting any issues! We continue to work on memory usage and on potential resolutions for stuck heads.

What did the team achieve?

  • Merged the etcd-based network stack #1720
  • Progress on reduced memory footprint for running a Hydra Node #1618
  • Progress on "side-load" of a snapshot #1858

What's next?

  • Continue to work on memory usage enhancements #1618
  • Finish side-loading snapshots #1858
  • Tighten security options of the networking layer #1867
  • Publish scripts with Blockfrost #1668

· 2 min read
Marcin Szamotulski

Overview of sprint 82

Extensible Ouroboros Diffusion

We merged the extensible diffusion PR.

This PR will allow us to build Mithril diffusion using ouroboros-network; more generally, it makes it much easier to develop and deploy one's own decentralised applications based on ouroboros-network. This is part of the ouroboros-network-0.20 release, which will be included in cardano-node-10.3.

Ouroboros-Network-0.20

We released ouroboros-network-0.20 to CHaP. All released changes are listed in [this table][on-0.20]. The most important changes are:

Removal of NonP2P code base

We merged the Removal of NonP2P Network Components PR. This change will be integrated no sooner than cardano-node-10.4. If you're still using the NonP2P mode, please upgrade to P2P. Initiator-only mode for local root peers (#5020) is available in the pre-release cardano-node-10.2.1 and future releases. See the forthcoming documentation update.

· One min read
Damian Nadales

High level summary

  • Added a significant amount of content to the Consensus blueprint documentation. There are new sections describing different aspects of the Consensus layer (such as chain selection and ledger queries), the Storage layer, and the Mempool.
  • Javier and Nick, two members of the Consensus team, participated in the Cardano Dev Pulse Podcast, where they discussed UTxO-HD and Genesis.