
50 posts tagged with "performance-tracing"


· 3 min read
Michael Karg

High level summary

  • Benchmarking: 10.4.1 release benchmarks; UTxO-HD, GC settings and socket I/O feature benchmarks.
  • Development: Abstracting over quick queries and trace queries; enabling query processing on remote hosts.
  • Infrastructure: Workbench simplification merged; GHC8.10 tech debt removed.
  • New Tracing: Provided hotfixes for several metrics.

Low level overview

Benchmarking

We've completed release benchmarks for Node 10.4.1. It is the first mainline release of a UTxO-HD node featuring LedgerDB. Leading up to the release, we previously performed and analysed UTxO-HD benchmarks. We documented a regression in RAM usage and assisted in pinpointing its origin, leading to a swift fix for the 10.4 release.

Additionally, we ran feature benchmarks for a potential socket I/O optimization in the network layer, and GC setting changes catering to the now-default GHC9.6 compiler. Both benchmarks have shown moderate improvements in various performance metrics. This might enable the network team to pick up the optimization for 10.5. Also, we might be able to update the recommended GC settings for block producers, and add them to our own nix service configs for deployment.

The 10.4.1 performance report has been published on Cardano Updates.

Development

We've further evolved the (still experimental) quick query feature of our analysis tool locli. Parametrizable quick queries allow for arbitrary queries into raw benchmarking data, uncovering correlations not part of standard analysis. They are implemented using composable definitions inside a filter-reduce framework. With locli's DB storage backend, we can leverage the DB engine to do much of the work. Now, we're integrating a precursor to quick queries - so-called trace queries - into the framework. These can process raw trace data from archived log files. Currently, we're adding an abstraction layer such that it is opaque to the framework whether the data was retrieved (and possibly pre-processed) from a DB or from raw traces.
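
To illustrate the filter-reduce idea, here is a minimal sketch with hypothetical names (Source, Query and runQuery are ours, not locli's): a query pairs an event predicate with a fold, and the event source hides where the data comes from.

```haskell
import Data.List (foldl')

-- A source of trace events is opaque to the framework: whether events come
-- from the DB backend or from raw log files is hidden behind this type.
newtype Source ev = Source { fetchEvents :: IO [ev] }

-- A composable query: select the events of interest, then fold them into a
-- result. Queries compose by reusing filters and reduction steps.
data Query ev r = Query
  { qFilter :: ev -> Bool   -- which events take part in the query
  , qStep   :: r -> ev -> r -- how each event updates the running result
  , qInit   :: r            -- the initial (empty) result
  }

runQuery :: Source ev -> Query ev r -> IO r
runQuery src q =
  foldl' (qStep q) (qInit q) . filter (qFilter q) <$> fetchEvents src
```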

Furthermore, we added a custom (CBOR-based) serialization for intermediate results so a query can be evaluated on a remote machine - like the system archiving all benchmarking runs - but triggered, and its results visualized, on your localhost.
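
A sketch of what that round trip can look like, using the serialise package for CBOR; the Partial type and both helpers are hypothetical stand-ins for actual intermediate results:

```haskell
{-# LANGUAGE DeriveGeneric #-}

import Codec.Serialise (Serialise, serialise, deserialise)
import qualified Data.ByteString.Lazy as LBS
import GHC.Generics (Generic)

-- a hypothetical intermediate query result, small enough to ship around
data Partial = Partial { sampleCount :: Int, sampleSum :: Double }
  deriving (Show, Generic)

instance Serialise Partial  -- CBOR encoding derived via Generic

-- on the remote host: evaluate the query, then persist the CBOR encoding
writePartial :: FilePath -> Partial -> IO ()
writePartial fp = LBS.writeFile fp . serialise

-- on localhost: read the result back for visualization
readPartial :: FilePath -> IO Partial
readPartial fp = deserialise <$> LBS.readFile fp
```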

Infrastructure

The workbench nix code optimization has finally been merged. Redundant derivations and recursions have been eliminated; many nix store entries have been consolidated. Among other things, the new code also aims to maximize nix cache hits. Furthermore, as GHC8.10 has now been officially retired from all build pipelines, we were able to clean up all the tech debt in our automations that we had kept around to support the old compiler version.

Exactly as we had hoped, this has brought down CI time for the Node dramatically: first, from over an hour to around 15 min, then to under 10 min. Also, all workbench shell invocations are significantly faster, and clutter in the nix store is greatly reduced.

New Tracing

We've moved quickly to provide hotfixes for the connectionManager_* and slotsMissed metrics, which were faulty in Node 10.3. They have been successfully integrated into the Node 10.4 release.

· 4 min read
Michael Karg

High level summary

  • Benchmarking: 10.3.1 release benchmarks.
  • Development: Plutus script calibration tool and profile maintenance updates about to be merged.
  • Infrastructure: Workbench simplification about to be merged.
  • New Tracing: System dependencies untangled; preparing 'Periodic tracer' feature for production.
  • Node Diversity: Participation in Conformance Testing workshop in Paris.

Low level overview

Benchmarking

We're currently running release benchmarks for the upcoming Node 10.3.1 version - a candidate for Mainnet release. Having taken previous measurements on the release integration branch, we expect the results to be closely aligned with those.

Node 10.3.1 will support two compiler versions, namely GHC8.10.7 and GHC9.6.5. As a consequence, we benchmark both Node builds and compare against the previous performance baseline 10.2. So far, the release benchmarks confirm performance improvements in both resource usage and block production metrics seen on the integration branch - for both compiler versions. A final report will be published on Cardano Updates.

Development

The first version of our new tool calibrate-script is about to be merged. It is part of the tx-generator project, and calibrates Plutus benchmarking scripts according to a range of constraints on the expected workload. The tool documents the result and all intermediate steps in a developer-friendly way. A CSV report is generated which shows all properties of a calibration at a glance: how much of each execution budget was given and how much was used, whether memory or CPU steps were the limiting factor for the script, how large the resulting transaction will be, what it will cost, and more. Apart from greatly speeding up development of Plutus benchmarks for our team, this tool can also be used to assess changes to Plutus cost models, or the efficiency of different Plutus compiler outputs - without running a full benchmark.
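
The record behind each CSV line might look roughly like this sketch (field names are hypothetical, not the actual tx-generator types):

```haskell
import Data.List (intercalate)
import Data.Ratio ((%))

data Calibration = Calibration
  { givenSteps, usedSteps :: Integer  -- CPU steps budget given / used
  , givenMem,   usedMem   :: Integer  -- memory budget given / used
  , txSizeBytes           :: Integer  -- size of the resulting transaction
  , txFeeLovelace         :: Integer  -- projected transaction fee
  }

-- which budget the calibrated script exhausts first
limitingFactor :: Calibration -> String
limitingFactor c
  | usedSteps c % givenSteps c >= usedMem c % givenMem c = "steps"
  | otherwise                                            = "memory"

csvRow :: Calibration -> String
csvRow c = intercalate ","
  [ show (givenSteps c), show (usedSteps c)
  , show (givenMem c),   show (usedMem c)
  , limitingFactor c
  , show (txSizeBytes c), show (txFeeLovelace c)
  ]
```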

Furthermore, the benchmarking profiles defined in cardano-profile have undergone a large maintenance cycle. Besides a cleanup, several profiles were fixed with respect to transaction fees or duration, and others now run on a more appropriate performance baseline. The era dependency of profiles requiring a minimum protocol version has been resolved such that it's now impossible by definition to construct incompatible profiles - e.g. a PlutusV3 benchmark in any era prior to Conway. The corresponding PR is about to be merged shortly.
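
As a rough illustration of how such incompatibilities can be made unrepresentable (the types below are ours, not the actual cardano-profile definitions), a workload can be indexed by the era it requires:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Era = Babbage | Conway

-- workloads are indexed by the era they require
data Workload (e :: Era) where
  ValueOnly :: Workload e        -- plain transactions work in any era
  PlutusV3  :: Workload 'Conway  -- V3 scripts require at least Conway

data Profile (e :: Era) = Profile
  { profileName :: String
  , workload    :: Workload e
  }

-- accepted by the compiler:
ok :: Profile 'Conway
ok = Profile "plutusv3-benchmark" PlutusV3

-- rejected at compile time: PlutusV3 cannot inhabit Workload 'Babbage
-- bad :: Profile 'Babbage
-- bad = Profile "plutusv3-benchmark" PlutusV3
```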

Infrastructure

A large PR simplifying the build of our performance workbench has been finalized and passed testing. The nix code has been greatly optimized to avoid redundant derivations and the creation of an abundance of nix store paths. This not only makes the workbench more maintainable, it greatly reduces time and size requirements for CI jobs. In testing, we could observe a speedup of 40% - 50% for CI. Additionally, this PR prepares for the future removal of GHC8.10 as a release compiler - which will reduce CI cost even more. The PR is currently under review and to be merged soon.

New Tracing

The work on untangling dependencies in the new tracing system has entered the testing phase. The cardano-tracer service no longer depends on the Node, with common data types and typeclass instances having been refactored into a more basic package of the tracing system. Once merged, this will allow for the service to be built, released and operated independently of cardano-node, widening its range of use cases.

On Node 10.1, we've built a prototype of the 'Periodic tracer' feature. It decorrelates tracing ledger metrics from the start of a block producer's forging loop, thus removing competition on certain synchronization primitives. We've already shown in past benchmarks that it has a positive impact on block production performance. This prototype is now being developed for production release, complete with configuration options, and we aim to land it in Node 10.4.
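
The gist of the approach, as a sketch with hypothetical names: ledger metrics are sampled on their own timer and traced outside the forging loop, so the forging thread never contends with tracing.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM (TVar, readTVarIO)
import Control.Monad (forever, void)

-- hypothetical stand-in for the metrics the node keeps up to date
data LedgerMetrics = LedgerMetrics { utxoSize :: Int, delegMapSize :: Int }

startPeriodicTracer
  :: Int                       -- sampling interval in microseconds
  -> TVar LedgerMetrics        -- shared state updated by the node
  -> (LedgerMetrics -> IO ())  -- the actual trace action
  -> IO ()
startPeriodicTracer interval var traceIt =
  void . forkIO . forever $ do
    threadDelay interval
    readTVarIO var >>= traceIt  -- readTVarIO never blocks other threads
```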

Node Diversity

We've contributed to the recent Conformance Testing workshop in Paris. The topic was how to approach detection and documentation of system behaviour across diverse Cardano Node implementations: where does behaviour conform to some blueprint, and where does it deviate - intentionally or accidentally? Our tracing system is the prime provider of observability, and all evidence of program execution could in theory be checked against a machine-readable model of the blueprint. This of course assumes observables are implemented uniformly across diverse Node projects, i.e. without changing semantics. Thankfully, our tracing system's lead engineer Jürgen Nicklisch was able to join the workshop and add to the discussions around that approach.

· 5 min read
Michael Karg

High level summary

  • Benchmarking: Preliminary 10.3 benchmarks; GHC8 / GHC9 compiler version comparison; Plutus budget scaling; runtime parameter tuning on GHC9.
  • Development: Started new Plutus script calibration tool; maintenance updates to benchmarking profiles.
  • Infrastructure: Adjusted tooling to latest Cardano API version; simplification of performance workbench nearing completion.
  • New Tracing: Battle-tested metrics monitoring on mainnet; generalized nix service config for cardano-tracer.

Low level overview

Benchmarking

We've run and analyzed several benchmarks these last two weeks:

Preliminary 10.3 integration

As performance improvement is a stated goal for the 10.3 release, we became involved early in the release cycle. Benchmarking the earliest version of the 10.3 integration branch, we could already determine that the effort put in has yielded promising results and confirm improvements in both resource usage and block production metrics. A regular release benchmark will be performed, and published, from the final release tag.

Compiler versions: GHC9.6.5 vs. GHC8.10.7

So far, code generation with GHC9.6 has resulted in a performance regression for block production under heavy load - we've established that in various past benchmarks. The optimization efforts on 10.3 also focused on removing that performance blocker. Benchmarking the integration branch with the newer compiler version has now confirmed that the regression has not only vanished; code generated with GHC9.6 even exhibited slightly more favourable performance characteristics. So in all likelihood, Node 10.3 will be the last release to support GHC8.10, and we will recommend GHC9.6 as the default build platform for it.

Plutus budget scaling

We've repeated several Plutus budget scaling benchmarks on Node version 10.3 / GHC9.6. By scaling execution budgets to 1.5x and 2x their current mainnet values, we can determine the performance impact on the network of potential increases of said budgets. We independently measured bumping the steps (CPU) limit with a CPU-intensive script, and bumping the memory limit with a script performing lots of allocations. In each case, we observed the performance impact to correspond linearly with the limit bump. This makes the impact predictable when suggesting changes to mainnet protocol parameters.
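
Given that linearity, a metric at an untested budget factor can be projected by simple interpolation between two measured points - illustrative arithmetic only, not actual benchmark data:

```haskell
-- linear interpolation between two (budgetFactor, metricValue) measurements
project :: Double -> (Double, Double) -> (Double, Double) -> Double
project f (f1, m1) (f2, m2) = m1 + (f - f1) * (m2 - m1) / (f2 - f1)

-- e.g. with measurements at the 1x baseline and the 2x bump,
-- project 1.5 (1.0, mBase) (2.0, m2x) predicts the metric at a 1.5x bump
```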

Our team presented those findings and the data to the Parameter Committee for discussion.

Runtime system (RTS) settings

The recommended RTS settings for cardano-node encompass the number of CPU cores to use, the behaviour of the allocator, and the behaviour of the garbage collector. The recommended settings so far are tuned to GHC8.10's RTS - one cannot assume the same settings are optimal for GHC9.6's RTS, too. So we've started a series of short, exploratory benchmarks comparing a matrix of promising changes, to update our recommendation in the future.
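
For illustration: RTS settings are passed to any GHC-built executable between +RTS and -RTS, with -N controlling the number of capabilities (cores), -A the allocation-area size, and -I the idle GC interval. These are real GHC RTS flags, but the values below are placeholders, not our recommendation:

```
cardano-node run <node-args> +RTS -N4 -A16m -I0 -RTS
```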

Development

We've started to develop a new tool that calibrates our Plutus benchmarking scripts given a range of constraints on the expected workload. These entail exhausting a given budget (block or transaction), or calibrating for a constant number of transactions per block while exhausting the available steps or memory budget(s). The result directly serves as input to our benchmarking profile definitions. This tool may also be of wider interest, as it allows for modifying various inputs, such as Plutus cost models, or script serializations generated by different compilers or compiler versions. That way, one can compare at a glance how effectively a given script makes use of the available budgets, given a specific cost model.
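
One plausible calibration strategy - assumed here for illustration, not necessarily what the tool implements - is to binary-search a script parameter, say a loop count, until execution lands just within the target budget:

```haskell
-- stepsFor would come from evaluating the script with the Plutus evaluator;
-- it is assumed to be monotonic in the loop count and to eventually exceed
-- the budget.
calibrate
  :: (Integer -> Integer)  -- loop count -> CPU steps used
  -> Integer               -- target step budget to exhaust
  -> Integer               -- largest loop count still within budget
calibrate stepsFor budget = go 1 upperBound
  where
    -- find an upper bound by doubling until the budget is exceeded
    upperBound = head [n | n <- iterate (* 2) 1, stepsFor n > budget]
    go lo hi
      | lo + 1 >= hi          = lo
      | stepsFor mid > budget = go lo mid
      | otherwise             = go mid hi
      where mid = (lo + hi) `div` 2
```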

Additionally, our benchmarking profiles are currently undergoing a maintenance cycle. This means setups for which the motivation has ceased to exist are removed, several are updated to use the Voltaire performance baseline, and others need to be tested for their conformity with the Plomin hard-fork protocol updates.

Infrastructure

The extensive work of simplifying the performance workbench is almost finished and about to enter the testing phase. We have been moving away from scripting to declarative (Haskell) definitions of all benchmark profiles and workloads in past PRs. The simplification work now reaps the benefits of that: we can optimize away many recursive / redundant invocations or nix evaluations, collate many nix store paths into just a few, and reduce the workbench's overall closure size and complexity. Apart from saving significant resources and time for CI runners, this will reduce the maintenance effort necessary on our end.

Furthermore, we've done maintenance on our tooling by adjusting to the latest changes in cardano-api. This included porting the ProtocolParameters type and its type class instances over to us, as our use case requires that we continue supporting it. However, it's considered deprecated in the API, and so this unblocks the team to eventually remove it.

New Tracing

Having addressed all feature and change requests relevant for the Node 10.3 release, we performed thorough mainnet testing of the new system's metrics in a monitoring context. We relied on the extremely diligent and helpful feedback from the SRE team. This enabled us to iron out a couple of remaining inconsistencies - a big shout-out and thank you to John Lotoski.

Additionally, again with SRE, a nix service configuration (SVC) has been created for cardano-tracer, generalized and aligned in structure with the existing cardano-node SVC. It evolved from the existing SVC in our performance workbench, which, however, was tied tightly to our team's use case. With the general approach, we hope other teams, and the community, can reliably and easily set up and deploy cardano-tracer.

· 3 min read
Michael Karg

High level summary

  • Development: New benchmark epoch timeline using db-sync; raw benchmark data now with DB storage layer as default - enabling quick queries.
  • Infrastructure: Merged workbench 'playground' profiles - easing benchmark calibration.
  • New Tracing: Plenty of new features based on community feedback - including a new direct Prometheus backend; Untangling system dependencies.
  • Community: Participation in the first episode of the Cardano Dev Pulse podcast.

Low level overview

Development

For keeping a history of comparable benchmarks, it's essential to have an accurate timeline of mainnet protocol parameter updates by epoch. They represent the environment in which specific measurements took place, and are thus tied inherently to the observation process. Additionally, to reproduce specific benchmarking metrics from the past, our performance workbench has the capability to "roll back" those updates and perform a benchmark given the protocol parameters of any given epoch. Instead of maintaining this epoch timeline by hand, we've now created an automated way to extract, using db-sync, all key epochs that applied parameter updates. This approach will prove both more robust and lower in maintenance overhead.
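
The gist of the extraction, sketched with postgresql-simple against db-sync's epoch_param table (illustrative; the actual workbench tooling samples more columns and differs in detail):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.PostgreSQL.Simple (Connection, query_)

-- epochs whose (sampled) parameters differ from the preceding epoch's
keyEpochs :: Connection -> IO [Int]
keyEpochs conn = do
  rows <- query_ conn
            "SELECT epoch_no, protocol_major, max_block_size \
            \FROM epoch_param ORDER BY epoch_no"
            :: IO [(Int, Int, Int)]
  pure [ e | ((e, pMaj, blkSz), (_, pMaj', blkSz')) <- zip (drop 1 rows) rows
           , (pMaj, blkSz) /= (pMaj', blkSz') ]
```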

Furthermore, the new DB storage backend for raw benchmarking data in locli is now set to be the default for the performance workbench. Apart from cutting down analysis time for a benchmarking run and reducing the required on-disk size for archiving, this enables the new (still under development) quick queries into raw performance data.

Infrastructure

When creating the Plutus memory scaling benchmarks, we developed so-called 'playground' profiles for the workbench. These allow for easier dynamic changes of individual profile parameters, building a resulting benchmark setup including Plutus script calibration, and observing the effect in a short local cluster run. Applying these changes to established profiles is strictly forbidden, as it would put comparability with past benchmarks at risk. By introducing this separation, we keep that safety guarantee, while still relaxing it for the development cycle only.

New Tracing

We've been extremely busy implementing new features and optimizations for the new tracing system, motivated by the feedback we received from the SPO community. This includes:

  • A brand new backend that allows for Prometheus exposition of metrics directly from the application - without running cardano-tracer and forwarding to it.
  • A configurable reconnection interval for the forwarder to cardano-tracer.
  • An always up-to-date symlink pointing to the most recent log file in a cardano-tracer log rotation.
  • Optimizations in metrics forwarding and trace message formatting, which should lower the base CPU usage, at least in low congestion scenarios.

All those will be part of the upcoming Node 10.3 release.

Currently, the cardano-tracer service still depends on the Node for a few data type definitions. We're working on a refactoring so we can untangle this dependency. This will allow for the service to be built independently of the Node - simplifying a setup where other processes and applications can forward observables to cardano-tracer and benefit from its features.

Community

We had the opportunity to talk about benchmarking and performance impact of UTxO-HD on the very first episode of the Cardano Dev Pulse Podcast (YouTube). Thank you Sam and Carlos for having us!

· 4 min read
Michael Karg

High level summary

  • Benchmarking: Plutus memory budget scaling benchmarks; UTxO-HD benchmarks, leading to a fixed regression; Genesis benchmarks.
  • Development: Ouroboros Genesis and UTxO-HD adjustments in workbench; Maintenance tasks.
  • Infrastructure: Removing outdated deployments; Haskell profile definition merged; workbench simplification ongoing.
  • Tracing: C library development ongoing; Feature implementation according to community feedback; Test removal of the old system.

Low level overview

Benchmarking

We've run and analyzed scaling benchmarks of Plutus execution budgets. In this series of benchmarks, we measured the performance impact of changes to the memory budgets (both transaction and block). We observed an expected, and reasonable, increase in certain metrics only. Furthermore, we've shown this increase to be linearly correlated to the budget raise. This means that when exploring the headroom of those budgets, the performance cost for the network is always predictable. The benchmarks serve as a base for discussing potential changes to those budgets in Mainnet protocol parameters.

When building a performance baseline for UTxO-HD, we were able to locate a regression in its new in-memory backing store, LedgerDB V2. We created a local reproduction of it for the Consensus team, who were able to successfully address the regression. A corresponding benchmarking report will be published on Cardano Updates.

Furthermore, we've performed benchmarks with the Ouroboros Genesis feature enabled and compared them to the release benchmark baseline. We could not detect any performance risk to the network during "normal operations", i.e. when all nodes are caught up with the chain tip.

Development

During the course of building performance baselines for Ouroboros Genesis and UTxO-HD, we've developed various updates to the performance workbench to correctly handle the new Genesis consensus mode, as well as adjustments to the latest changes in the UTxO-HD node.

Additionally, we built several small quality-of-life improvements for the workbench, as well as investigated and fixed an inconsistent metric (Node Issue #6113).

Infrastructure

The recent maintenance work also extended to the infrastructure: We've removed the dependency on deprecated environment definitions in iohk-nix by porting the relevant configurations over to the workbench. This facilitates a thorough cleanup of iohk-nix by the SRE team.

As the Haskell package defining benchmarking profiles has been merged, and all code replaced by it successfully removed, we're now working very hard on simplifying the interaction between the workbench and nix. This mostly covers removing redundancies that have lost their motivation - both in how the workbench calls itself recursively multiple times, and in how (and how many) corresponding nix store paths are created when evaluating derivations. This will both enhance maintainability and result in much lighter workbench builds locally - but especially on CI runners.

Tracing

Work on the self-contained C library implementing trace forwarding is ongoing. As forwarding is defined in terms of an OuroborosApplication, it's non-trivial to re-implement the features of the latter in C as well - such as handshake, version negotiation, and multiplexing. However, for the intended purpose of said library, it is unavoidable.

We've also started working on a new release of cardano-tracer, the trace / metrics consuming service in the new tracing system. This release is geared towards feature and change requests we've received from the community - feedback we found very valuable. Having a separate service to process all trace output enables us to react much more quickly to this feedback, and to decouple delivery from the Node's release cycle.

Last but not least, we're doing an experimental run on creating a build of the Node with the legacy tracing system removed. As the system is deeply woven into the code base, and some of the new system's components keep compatibility with the old one, untangling and removing these dependencies is a large endeavour. This build serves as a prototype to identify potential blockers, or decisions to be made, and eventually as a blueprint for removing the legacy system in some future Node release.