
Damian Nadales

High level summary

  • Well-Typed has presented the penultimate milestone for the lsm-tree library. Table unions are now fully supported, tests include IO-error injection, and compatibility across Linux, Windows, and macOS has been demonstrated.
    • The next milestone will finalize the library's development, enabling its integration as a persistent backend for storing ledger state.
  • Several Consensus team members attended the Peras workshop. A brief presentation can be accessed by following this link (password: GDdL*6M%). A design and implementation plan has been drafted. The next steps for Peras involve a decision by the relevant stakeholders (the Intersect Technical Steering Committee, involving SPOs and users) on the tradeoffs inherent to the protocol, such as additional operational costs, rewards, and protocol parameterization for settlement times.
  • We held our technical working group meeting (recording), where we discussed:
    • The possibility of incorporating batch VRF support into ouroboros-consensus.
    • CDDL definition for Consensus: draft PR and next steps.
    • Should we support NTC for older eras? (#1429).
  • Exposed a function that asks Consensus for the versions in which a particular query is supported, offloading this logic from cardano-api (#1437); a sketch of the idea follows this list.
  • Exposed Byron CDDLs to be used from Consensus (#4965).
  • Added QueryStakePoolDefaultVote for 10.3 ([#1434](https://github.com/IntersectMBO/ouroboros-consensus/pull/1434)).
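Regarding the query-version function mentioned above (#1437), here is a hypothetical sketch of the idea; the type, constructor, and function names are assumptions for illustration, not the actual ouroboros-consensus API. The point is that Consensus owns the mapping from queries to the node-to-client versions that support them, so clients such as cardano-api can ask for it instead of hard-coding it:

```haskell
-- Hypothetical sketch only; names are assumptions, not the real API.
data NodeToClientVersion
  = NodeToClientV_16 | NodeToClientV_17 | NodeToClientV_18
  deriving (Show, Eq, Ord, Enum, Bounded)

data SomeQuery = GetStakePools | GetStakePoolDefaultVote
  deriving (Show, Eq)

-- | The smallest version supporting a query; all newer versions do too.
queryMinimumVersion :: SomeQuery -> NodeToClientVersion
queryMinimumVersion GetStakePools           = NodeToClientV_16
queryMinimumVersion GetStakePoolDefaultVote = NodeToClientV_18

-- | All versions in which a given query is supported.
querySupportedVersions :: SomeQuery -> [NodeToClientVersion]
querySupportedVersions q =
  [ v | v <- [minBound .. maxBound], v >= queryMinimumVersion q ]

main :: IO ()
main = print (querySupportedVersions GetStakePoolDefaultVote)
```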

Noon van der Silk

High-level summary

The team has been working hard on some bugfixes and investigations, as well as a new feature that can be used to "un-stuck" a Hydra head: side-loading snapshots. We're working on documentation for this feature, and in the coming weeks we will merge several bugfixes, as well as an implementation of the "withdraw 0" trick, which will be very convenient for those wishing to use custom Plutus operations on the L2 ledger while retaining the ability to fan out.

What did the team achieve?

  • New --advertise option to bind to public IPs #1874
  • New feature: Side-loading of a snapshot #1858
  • Mirror node investigation #1859
  • Bugfix for memory reduction #1917

What's next?

  • Documentation for the side-loading snapshot feature #1912
  • Tighten security options of the networking layer #1867
  • Publishing scripts with blockfrost #1668
  • Withdraw 0 trick in L2 ledger #1795
  • Various bugfixes #1916, #1924, #1913, #1915
  • More work on Blockfrost and continued support of Hydra users

Jean-Philippe Raynaud

High level overview

This week, the Mithril team released the 2513.0 distribution, which supports Cardano node v10.2.1 and includes various bug fixes and improvements.

They continued adapting the infrastructure to support the aggregator’s prototype ‘follower’ mode and focused on signing ancillary files in the Cardano database snapshots with an IOG key. They also worked on recording the origin of requests made to the aggregator API.

Finally, the team updated the CIP-0137 mini-protocols and implementation plan and kept refactoring the STM cryptographic library for improved clarity and maintainability.

Low level overview

  • Released the new distribution 2513.0
  • Published a dev blog post about the Distribution 2513 availability
  • Updated the CIP-0137 mini-protocols
  • Completed the issue Release 2513 distribution #2332
  • Worked on the issue Sign ancillary files with IOG key #2362
  • Worked on the issue Record origin of client requests in metrics #2382
  • Worked on the issue Adapt infrastructure for multiple aggregators with leader/follower signer registration #2391
  • Worked on the issue Re-organize STM library structure #2369

Marcin Szamotulski

Overview of sprint 84

High-level overview

Mithril Development

We continued to cooperate with the Mithril team. There's a [pull request][PR#7] to update CIP-0137. We also wrote a Decentralized Message Queue (DMQ) Implementation Overview.

Tx-Submission

We continued working on tx-submission. We have an experimental branch based on the coming cardano-node-10.3 release, which we deployed on mainnet.

Peras Workshop

Neil Davies (PNSol) and Marcin Szamotulski participated in a Peras workshop organised by Tweag in their Paris office.

Performance Improvements

  • Karl Knutsson's mux performance PR was merged.
  • Marcin Wójtowicz opened a PR with inbound governor performance improvements.


Michael Karg

High level summary

  • Benchmarking: Preliminary 10.3 benchmarks; GHC8 / GHC9 compiler version comparison; Plutus budget scaling; runtime parameter tuning on GHC9.
  • Development: Started new Plutus script calibration tool; maintenance updates to benchmarking profiles.
  • Infrastructure: Adjusted tooling to latest Cardano API version; simplification of performance workbench nearing completion.
  • New Tracing: Battle-tested metrics monitoring on mainnet; generalized nix service config for cardano-tracer.

Low level overview

Benchmarking

We've run and analyzed several benchmarks these last two weeks:

Preliminary 10.3 integration

As performance improvement is a stated goal for the 10.3 release, we became involved early in the release cycle. Benchmarking the earliest version of the 10.3 integration branch, we could already determine that the effort has yielded promising results, confirming improvements in both resource usage and block production metrics. A regular release benchmark will be performed, and published, from the final release tag.

Compiler versions: GHC9.6.5 vs. GHC8.10.7

So far, code generation with GHC9.6 has resulted in a performance regression for block production under heavy load - we've established that in various past benchmarks. The optimization efforts on 10.3 also focused on removing that performance blocker. Benchmarking the integration branch with the newer compiler version has now confirmed that the regression has not only vanished; code generated with GHC9.6 even exhibited slightly more favourable performance characteristics. So in all likelihood, Node 10.3 will be the last release to support GHC8.10, and we will recommend GHC9.6 as its default build platform.

Plutus budget scaling

We've repeated several Plutus budget scaling benchmarks on Node version 10.3 / GHC9.6. By scaling execution budgets to 1.5x and 2x their current mainnet values, we can determine the performance impact on the network of potential increases to said budgets. We independently measured bumping the steps (CPU) limit with a CPU-intensive script, and bumping the memory limit with a script performing many allocations. In each case, the observed performance impact corresponded linearly with the limit bump. This makes the impact predictable when suggesting changes to mainnet protocol parameters.
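For concreteness, a minimal sketch of the scaling arithmetic; the per-transaction budget values below are assumptions for illustration, not authoritative mainnet parameters:

```haskell
{-# LANGUAGE NumericUnderscores #-}

-- Illustrative values only; check the live protocol parameters.
maxTxSteps, maxTxMem :: Integer
maxTxSteps = 10_000_000_000  -- per-transaction CPU steps limit (assumed)
maxTxMem   = 14_000_000      -- per-transaction memory units limit (assumed)

-- The scaling factors used in the benchmarks: 1x, 1.5x and 2x.
scaledBudgets :: Integer -> [(Double, Integer)]
scaledBudgets b = [ (f, round (f * fromIntegral b)) | f <- [1.0, 1.5, 2.0] ]

main :: IO ()
main = do
  print (scaledBudgets maxTxSteps)
  print (scaledBudgets maxTxMem)
```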

Our team presented those findings and the data to the Parameter Committee for discussion.

Runtime system (RTS) settings

The recommended RTS settings for cardano-node encompass the number of CPU cores to use, the behaviour of the allocator, and the behaviour of the garbage collector. The settings recommended so far are tuned to GHC8.10's RTS - one cannot assume the same settings are also optimal for GHC9.6's RTS. So we've started a series of short, exploratory benchmarks comparing a matrix of promising changes, in order to update our recommendation in the future.
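As a minimal sketch of what such a matrix looks like, with assumed flag values that are illustrative rather than our actual recommendation:

```haskell
-- Enumerate a matrix of GHC RTS settings to benchmark, covering CPU
-- core count (-N), allocation area size (-A), and the GC flavour.
-- The concrete values here are assumptions for the example.
rtsMatrix :: [String]
rtsMatrix =
  [ unwords (["+RTS", n, a] ++ g ++ ["-RTS"])
  | n <- ["-N4", "-N8"]            -- cores to use
  , a <- ["-A16m", "-A64m"]        -- nursery (allocation area) size
  , g <- [[], ["--nonmoving-gc"]]  -- copying GC vs. non-moving GC
  ]

main :: IO ()
main = mapM_ putStrLn rtsMatrix
-- e.g. "+RTS -N4 -A16m -RTS", "+RTS -N8 -A64m --nonmoving-gc -RTS", ...
```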

Development

We've started to develop a new tool that calibrates our Plutus benchmarking scripts given a range of constraints on the expected workload. These entail exhausting a certain budget (block or transaction), or calibrating for a constant number of transactions per block while exhausting the available steps or memory budget(s). The result directly serves as input to our benchmarking profile definitions. This tool may also be of wider interest, as it allows for modifying various inputs, such as Plutus cost models, or script serializations generated by different compilers or compiler versions. That way, one can compare at a glance how effectively a given script makes use of the available budgets, given a specific cost model.
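A minimal sketch of the calibration idea, under the assumption of a search over a single workload-size knob; the names and the toy cost model are illustrative, not the actual tool:

```haskell
-- Given a (measured) cost function, find the largest workload size
-- whose cost still fits the budget, by exponential then binary search.
calibrate
  :: (Integer -> Integer)  -- ^ cost of the script at a given workload size
  -> Integer               -- ^ budget to exhaust, e.g. the tx steps limit
  -> Integer               -- ^ largest size still within budget
calibrate cost budget = go 0 upper
  where
    -- first power of two whose cost exceeds the budget
    upper = head [ n | n <- iterate (* 2) 1, cost n > budget ]
    -- invariant: cost lo <= budget < cost hi
    go lo hi
      | lo + 1 >= hi       = lo
      | cost mid <= budget = go mid hi
      | otherwise          = go lo mid
      where mid = (lo + hi) `div` 2

main :: IO ()
main =
  -- toy linear cost model: fixed overhead plus a per-iteration cost
  print (calibrate (\n -> 2000000 + 350000 * n) 10000000000)
```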

Additionally, our benchmarking profiles are currently undergoing a maintenance cycle. This means that setups whose motivation has ceased to exist are removed, several are updated to use the Voltaire performance baseline, and others are being tested for conformity with the Plomin hard-fork protocol updates.

Infrastructure

The extensive work of simplifying the performance workbench is almost finished and about to enter the testing phase. In past PRs, we moved away from scripting towards declarative (Haskell) definitions of all benchmark profiles and workloads. The simplification work now reaps the benefits of that: we can optimize away many recursive or redundant invocations and nix evaluations, collate many nix store paths into just a few, and reduce the workbench's overall closure size and complexity. Apart from saving significant resources and time for CI runners, this will reduce the maintenance effort necessary on our end.

Furthermore, we've done maintenance on our tooling by adjusting to the latest changes in cardano-api. This included porting the ProtocolParameters type and its type class instances over to us, as our use case requires that we continue supporting it. However, it's considered deprecated in the API, so this unblocks the API team to eventually remove it.

New Tracing

Having addressed all feature and change requests relevant for the Node 10.3 release, we performed thorough mainnet testing of the new system's metrics in a monitoring context. We relied on the extremely diligent and helpful feedback from the SRE team. This enabled us to iron out a couple of remaining inconsistencies - a big shout-out and thank you to John Lotoski.

Additionally, again with SRE, a nix service configuration (SVC) has been created for cardano-tracer, generalized and aligned in structure with the existing cardano-node SVC. It evolved from the existing SVC in our performance workbench, which was tied tightly to our team's use case. With this more general approach, we hope other teams and the community can reliably and easily set up and deploy cardano-tracer.