
· 2 min read
Jean-Philippe Raynaud

High level overview

The Mithril team released a new 2310.0 distribution that activates the era switch mechanism. They also continued implementing the migration of the aggregator stores to a relational design, starting with the adaptation of the stake pool store, followed by the epoch settings and signed entity type stores. They implemented the handling of the network API version from the Open API specifications and its automatic switch at era transition. Additionally, they optimized the stake distribution computation, which now happens only once per epoch, and enhanced the client multi-platform workflow to test the Docker images.

Finally, they successfully completed the tests that create certificates and snapshots on a network running on the Cardano mainnet, and they fixed some bugs.

Low level overview

  • Completed the epic that implements eras behavior switch #707:
    • Completed handling the API version switch at era transition #727
  • Worked on the epic that implements a relational store in the aggregator #779:
    • Completed the migration/adaptation of the stake_pool table #787
    • Worked on the migration/adaptation of the epoch_settings table #813
    • Worked on the migration/adaptation of the signed-entity-type table #815
    • Completed the creation of a stake distribution service #799
  • Completed the testing of Mithril with Cardano mainnet network #777
  • Completed qualifying the computation of the stake distribution #810
  • Completed the testing of the Docker client in the Mithril Client multi-platform test workflow #794
  • Worked on bugs and optimizations:
    • Fixed a bug that made computation of the stake distribution occur multiple times during an epoch #804
    • Fixed a bug that created deadlocks on the SQLite connection #807
    • Optimized the error message and the behavior of the signer node when KES keys have expired #820
    • Upgraded the infrastructure of the testing-preview and pre-release-preview networks #801
    • Re-genesis of the testing-preview network #803
    • Re-genesis of the pre-release-preview network #818

· One min read
Damian Nadales

High level summary

This week the consensus team continued working on the refactoring of the UTxO HD prototype, and on the design and testing of Genesis. We also extracted the fs-sim package, which provides a file-system abstraction layer that can be used for testing and simulation. This makes the Consensus code base smaller, while providing a package that the community can reuse and contribute to. We also fixed a failing property test related to iterators, and we are working on mempool and VRF improvements.
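To give a feel for what such a file-system abstraction layer looks like, here is a minimal, self-contained sketch of the record-of-functions pattern it typically follows, with an in-memory implementation swapped in for tests. The names below (`FsApi`, `ioFsApi`, `mockFsApi`) are invented for this example and are not the actual fs-sim API.

```haskell
import qualified Data.ByteString.Char8 as BS
import           Data.IORef
import qualified Data.Map.Strict as Map

-- | The abstraction: every file-system operation the code under test needs.
data FsApi m = FsApi
  { fsWrite :: FilePath -> BS.ByteString -> m ()
  , fsRead  :: FilePath -> m (Maybe BS.ByteString)
  }

-- | Production implementation backed by the real file system.
ioFsApi :: FsApi IO
ioFsApi = FsApi
  { fsWrite = BS.writeFile
  , fsRead  = \p -> Just <$> BS.readFile p   -- simplified: no error handling
  }

-- | Pure in-memory implementation for tests and simulation.
mockFsApi :: IO (FsApi IO)
mockFsApi = do
  ref <- newIORef Map.empty
  pure FsApi
    { fsWrite = \p bs -> modifyIORef' ref (Map.insert p bs)
    , fsRead  = \p    -> Map.lookup p <$> readIORef ref
    }

-- | Code written against the abstraction never touches the disk directly.
storeAndEcho :: Monad m => FsApi m -> FilePath -> BS.ByteString -> m (Maybe BS.ByteString)
storeAndEcho fs path bs = fsWrite fs path bs >> fsRead fs path

main :: IO ()
main = do
  fs <- mockFsApi
  r  <- storeAndEcho fs "ledger/state" (BS.pack "snapshot")
  print r   -- Just "snapshot", without touching the real file system
```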

Low-level details

· One min read
Jordan Millar

2023-03-22 - 2023-04-05

High level summary

  • Added a new cardano-cli ping command, which allows users to ping remote cardano-nodes.
  • The transaction build command can now automatically balance multi-asset values.
  • Added new combinators for constructing transaction bodies, which allow us to build transaction bodies in a composable manner (see the sketch after this list).
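The combinator style can be illustrated with a small, self-contained sketch. The type and functions below (`SketchTxBody`, `addInput`, `addOutput`, `setFee`) are invented for this example and are not the actual cardano-api combinators; the point of the pattern is that each combinator is an ordinary function from body to body, so assembling a transaction body reduces to function composition.

```haskell
import Data.List (foldl')

-- A stand-in transaction body; the real cardano-api type is far richer.
data SketchTxBody = SketchTxBody
  { txInputs  :: [String]             -- references to the UTxO entries being spent
  , txOutputs :: [(String, Integer)]  -- (address, lovelace)
  , txFee     :: Integer
  } deriving Show

emptyTxBody :: SketchTxBody
emptyTxBody = SketchTxBody [] [] 0

-- Each combinator is just a function from body to body ...
addInput :: String -> SketchTxBody -> SketchTxBody
addInput i b = b { txInputs = i : txInputs b }

addOutput :: String -> Integer -> SketchTxBody -> SketchTxBody
addOutput addr amount b = b { txOutputs = (addr, amount) : txOutputs b }

setFee :: Integer -> SketchTxBody -> SketchTxBody
setFee f b = b { txFee = f }

-- ... so building a body is ordinary left-to-right composition of the steps.
buildTxBody :: [SketchTxBody -> SketchTxBody] -> SketchTxBody
buildTxBody = foldl' (flip ($)) emptyTxBody

main :: IO ()
main = print $ buildTxBody
  [ addInput "deadbeef#0"
  , addOutput "addr_test1..." 2000000
  , setFee 180000
  ]
```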

docs

CI & project maintenance

Developer experience

cardano-cli

cardano-api

cardano-node

cardano-testnet

· 3 min read
Michael Karg
  • Benchmarking: We performed benchmarks for the new tracing system, and started benchmarking for varying GHC RTS configurations.
  • New tracing: Backwards compatibility with legacy tracer nomenclature has been merged; we're currently improving documentation and creating setup guidelines for end users.
  • Analysis pipeline: Our refined metrics PR has been merged. We're working on including variance analysis in our reporting machinery.
  • Infrastructure: Support for Conway genesis in our workbench has been merged. At the moment, we're laying the groundwork for enabling GHC 9.2 in our benchmarks.
  • Open Sourcing: The API demo has reached prototype phase; work on documenting the API and providing illustrative use cases is ongoing.
  • Nomad backend: The nomad-exec based task driver has been merged. The backend has been equipped with the capability for genesis distribution via S3 bucket.

Performance

New tracing

The new tracing system has undergone various benchmarking runs with variance analysis, and comparison to a baseline using legacy tracing. We could observe a slight shift in the resource usage profile from memory to CPU, but no regressions in block propagation metrics. Variance was observed to be notably smaller, which gives the new system much better predictability. From this angle, we consider the new system fit for production use.

GHC RTS parametrization

We're currently performing various runs on the cluster to explore the space of different GHC RTS settings for running nodes. The main focus lies on different configurations for the garbage collector, as well as increasing the number of CPU cores the node may use.
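As a rough illustration of the knobs involved: standard GHC RTS flags such as `-N` (number of capabilities) and `-A` (allocation area size) are passed on a node's command line, for example `./node +RTS -N4 -A64m -RTS`, provided the binary was built with `-rtsopts`. The small sketch below (not part of our tooling) shows how a process can report the RTS configuration it actually runs with, which is handy when comparing such runs.

```haskell
import GHC.Conc      (getNumCapabilities)
import GHC.RTS.Flags (getRTSFlags)

main :: IO ()
main = do
  caps  <- getNumCapabilities   -- number of capabilities, controlled by -N
  flags <- getRTSFlags          -- full RTS flag record (GC settings, heap sizes, ...)
  putStrLn ("capabilities: " ++ show caps)
  print flags
```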

Open Sourcing

Our API demo has reached prototype stage, and operates on live data from the production database. Making use of the experience gained, we're refining version 1 of the API to improve usability, and creating documentation that both describes the API endpoints and focuses on practical, exemplary use cases.

Tracing

For the new tracing system, we're currently working on multi-layered documentation: a condensed version, as well as a setup guide with a practical focus, will be provided alongside the in-depth documentation. This effort should cater to different audiences and provide distinct entry points for users of the new system, depending on their needs.

Infrastructure & Analysis

General

Having included Conway genesis in the workbench, as a next step in future-proofing our benchmarking infrastructure, we're laying the foundation for switching the compiler version to GHC 9.2. Additionally, we decided that variance analysis of our runs merits inclusion in our reporting pipeline, which will increase confidence in specific metrics.

Nomad backend

We have implemented an appropriate mechanism for genesis distribution: only after a benchmarking cluster has been deployed successfully is genesis patched and uploaded to an AWS S3 bucket for the nodes to retrieve, as a final step before initiating the actual run. We're confident that this deferred approach will provide clearer evidence of the genesis patches applied, as well as minimize startup time for all runs by taking deployment retries into account.

· 2 min read
Marcin Szamotulski

High level summary

In the last sprint, we released cardano-node-1.35.6 with dynamic P2P functionality.

We received reports from some SPOs who encountered problems with their non-P2P block-producing nodes not being able to connect to their P2P relay. Karl Knutsson (from the Cardano Foundation) reproduced the issue between two nodes (a non-P2P and a P2P one) on mainnet. Karl and the IOG networking team analysed it and found a bug in the legacy non-P2P code. The bug can only be triggered by a P2P node that binds its outbound connections to a fixed IP address and port (the default in P2P). A possible solution was found; for more information see #4465.

We released the cardano-ping-0.1.0.0 package to CHaP. cardano-ping is no longer available as a standalone binary; instead, it will become part of cardano-cli (see #4664).

We are testing cardano-node with peer sharing functionality (#4019).

We are working on eclipse evasion. We added a new class of peers, big ledger peers, to the outbound governor, implemented tests, and fixed the issues we found (#4462). We also exposed to the mini-protocols the information on whether a given peer is a big ledger peer; this will allow us to modify mini-protocol applications for such peers, as sketched below. As part of this work, we refactored some core types in the network code, which simplifies the exposed API.
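As a purely illustrative sketch (the names below are hypothetical and do not reflect the exact ouroboros-network types or signatures), a mini-protocol application could use this flag, for example, to pick a different pipelining depth for big ledger peers:

```haskell
-- Hypothetical sketch only: shows how an application could branch on the
-- "big ledger peer" information now passed down to the mini-protocols.

data IsBigLedgerPeer = IsBigLedgerPeer | IsNotBigLedgerPeer
  deriving (Eq, Show)

newtype PipeliningDepth = PipeliningDepth Int
  deriving Show

-- | Pick chain-sync parameters per peer, depending on whether the outbound
-- governor selected it as a big ledger peer.
chainSyncPipeliningFor :: IsBigLedgerPeer -> PipeliningDepth
chainSyncPipeliningFor IsBigLedgerPeer    = PipeliningDepth 50
chainSyncPipeliningFor IsNotBigLedgerPeer = PipeliningDepth 10

main :: IO ()
main = mapM_ (print . chainSyncPipeliningFor) [IsBigLedgerPeer, IsNotBigLedgerPeer]
```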

Together with Moritz Angerman we started to update io-sim to ghc-9.6.1 (see #73).

We merged a fix for the configuration of the accepted connections limit in cardano-node (see #4902).