
48 posts tagged with "performance-tracing"


Michael Karg

High level summary

  • Benchmarking: Continued benchmarking of UTxO-HD and performed benchmarks for the new tracing system.
  • Consensus QTAs: Our prototype approach is applied to potential regression fixes with GHC 9.2.7.
  • Development: We've developed strategies for future benchmarks of PlutusV3 and UTxO-HD's on-disk backing store.
  • Tracing: The machine-readable tracer configuration has been merged. Optimization of cardano-tracer started.
  • Nomad backend: Ongoing variance analysis and refined cluster topology.

Low level overview

Benchmarking

Performing and analyzing benchmarks for the UTxO-HD feature is an ongoing effort; we can reliably assess the performance of the in-memory backing store and evaluate possible optimizations (or regressions) for it.

Furthermore, we've performed benchmarks of our new tracing system after several rounds of optimization. The results show all key metrics now being unaffected by the choice of tracing system (legacy or new) - with the new system providing more features and flexibility in comparison. The benchmarks also highlighted further points for optimization, with the focus now on the cardano-tracer service.

Consensus QTAs

The Quantitative Timeliness Agreements (QTA) prototype is being used in coordination with Consensus and DevX to validate a series of patches that address optimization opportunities which GHC 8.10 seizes, but GHC 9.2 misses. The feedback from this approach is much more immediate than running benchmarks at system integration level. But once we eventually do, we expect to reproduce the relevant observations - which would mean a big step towards maturing the prototype.

Development

Benchmarking UTxO-HD's on-disk backing store needs special attention: in virtualized environments, disk I/O is not a reliable metric as it passes several layers of indirection. As this is the very metric which will influence overall performance of this UTxO-HD flavour, we developed a plan to monitor such nodes, connected to a running network, on dedicated hardware - having direct SSD access. Replicating this setup for an entire benchmarking cluster of such nodes will be a future effort.

PlutusV3 will come with new builtins and a new cost model. It will take a specialized benchmark to ascertain the soundness of that model running a full cluster of nodes, possibly stressing expensive builtins. At the same time, we'd like to validate the many improvements that have gone into the Plutus evaluator.

Tracing

The focus for further optimization of the new tracing system has shifted to cardano-tracer - the service receiving and processing traces from one (or more) nodes. While it is undisputed that the code living in cardano-node is more performance-critical, the receiving service must still minimize its resource footprint. Moreover, it can generate load for a running node when querying data points from it - which calls for tight control of that mechanism and its possible configurations.

Nomad backend

Variance analysis of the new nomad backend has revealed a necessary adjustment of the cluster's topology. We repeated the same analysis and now see even better confidence in the measurements taken with nomad. This concludes the work on the backend proper for the time being. The last steps before production use will focus on the interface between the backend and our workbench, which provides all high-level benchmark definitions and analysis machinery.

Michael Karg

High level summary

  • Benchmarking: We've performed both low-level network and high-level variance analysis of our benchmarking clusters.
  • Infrastructure: Our reporting pipeline was adjusted to classify various workloads easily, reducing rework time.
  • Tracing: Work on machine-readable tracing of tracer configuration is ongoing.
  • Nomad backend: We've been able to eliminate several possible confounders on the nomad cluster.
  • Team: We're currently onboarding a new team member: Welcome to Cardano Performance & Tracing, Baldur Blöndal!

Low level overview

Benchmarking

As part of the effort to bring the Nomad backend into production use, we've been equipping both that and the existing benchmarking backend with means to measure and document network latency for each run. Furthermore, we've implemented means to capture TCP packets for a limited time window during a benchmarking run - which will allow us to spot differences in the behaviour of the underlying networking stack at OS level.
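
As a rough sketch of the capture step (the interface name, duration, and output file below are placeholders, and this is not our actual tooling), a tcpdump capture can be bounded to a fixed window by wrapping it in coreutils' timeout:

```haskell
module CaptureSketch where

import Control.Monad  (void)
import System.Process (spawnProcess, waitForProcess)

-- Capture packets on the given interface for a bounded number of seconds.
-- timeout(1) exits non-zero when it cuts tcpdump short; we only wait for
-- the process here and don't treat that as a failure.
captureWindow :: Int -> String -> FilePath -> IO ()
captureWindow seconds interface outFile = do
  ph <- spawnProcess "timeout"
          [show seconds, "tcpdump", "-i", interface, "-w", outFile]
  void (waitForProcess ph)

main :: IO ()
main = captureWindow 60 "eth0" "run-capture.pcap"
```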

Additionally, we're running variance analysis in parallel on both backends to ascertain confidence in metrics originating from either. We've concluded that baseline profile runs aren't directly comparable between the two, so we decided to compare standard deviations instead to validate the measurements from nomad.
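
To illustrate the idea, here is a minimal Haskell sketch of comparing dispersion rather than raw values; the function names and the tolerance are purely illustrative and not taken from our analysis pipeline:

```haskell
module VarianceSketch where

-- Sample mean and (Bessel-corrected) standard deviation of one metric
-- across several runs.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

stdDev :: [Double] -> Double
stdDev xs = sqrt (sum [(x - m) ** 2 | x <- xs] / fromIntegral (length xs - 1))
  where m = mean xs

-- Coefficient of variation: dispersion relative to the mean, so metrics of
-- different magnitude stay comparable between backends.
coeffOfVariation :: [Double] -> Double
coeffOfVariation xs = stdDev xs / mean xs

-- Accept the nomad measurements for a metric if their relative dispersion
-- does not exceed the legacy cluster's by more than a given tolerance.
comparableDispersion :: Double -> [Double] -> [Double] -> Bool
comparableDispersion tolerance legacyRuns nomadRuns =
  coeffOfVariation nomadRuns <= coeffOfVariation legacyRuns * (1 + tolerance)
```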

Infrastructure

Reporting on benchmarks requires human time and effort to rework the final document. Improvements to the reporting pipeline have been merged to master; they reduce the time necessary to do so through various changes to the template and to the workload classification logic in analysis.

Beyond that, we've looked into issues where services would quit with an unjustified exit failure upon shutdown - under rare circumstances. By reworking shutdown logic for trace-dispatcher and tx-generator we were able to address those issues.

Tracing

After the various steps of constructing a configuration upon node startup, it is vital to document the runtime configuration the node eventually arrived at. We're working on providing a machine-readable JSON/YAML trace message for that purpose.

This will facilitate hot-reloading a node's tracer configuration in the future: users will be able to take such a trace message, apply their intended change and hot-reload it immediately into the node.
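
As a hypothetical sketch of what such a self-describing message could look like on the Haskell side - the type and field names are assumptions made for illustration, not the actual trace-dispatcher API:

```haskell
{-# LANGUAGE DeriveGeneric #-}

module EffectiveConfigSketch where

import           Data.Aeson                 (ToJSON, encode)
import qualified Data.ByteString.Lazy.Char8 as BL
import           Data.Text                  (Text)
import           GHC.Generics               (Generic)

-- The effective (merged) tracer configuration a node arrived at, emitted
-- once as a machine-readable trace message.
data EffectiveTracerConfig = EffectiveTracerConfig
  { severityByNamespace :: [(Text, Text)]  -- e.g. ("ChainDB.AddBlockEvent", "Info")
  , backends            :: [Text]          -- e.g. ["Stdout", "Forwarder"]
  , configSources       :: [FilePath]      -- files merged at startup, in order
  } deriving (Generic, Show)

instance ToJSON EffectiveTracerConfig

-- Emitting the message then amounts to serialising the resolved configuration.
traceEffectiveConfig :: EffectiveTracerConfig -> IO ()
traceEffectiveConfig = BL.putStrLn . encode
```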

Nomad backend

As with the existing benchmarking cluster, nomad is currently under scrutiny with regard to the reliability of metrics it produces, as well as the behaviour of its OS-level network stack. For instance, differing kernel versions can have an impact on our measurements, as we'd be basically using two different instruments to take them.

Along the way we've already been successful in eliminating some possible confounders that had been introduced by the nomad service or the slightly different system architecture of the new cluster.

New team member

Baldur Blöndal is an extremely capable and experienced Haskell developer. Also, he's an excellent fit for our existing team. So I'm very pleased to welcome him on board with IOG, and with Performance & Tracing. He will be working on cardano-tracer, the component receiving, processing and making available node traces and metrics.

Michael Karg

High level summary

  • Benchmarking: We've performed and analysed feature benchmarks for both UTxO-HD and the current P2P stack.
  • Infrastructure: Various improvements of our analysis pipeline have been merged to master, supporting safe log truncation.
  • Tracing: Namespace consistency checks have been merged to master along with a curated configuration for benchmarking.
  • Nomad backend: We're productively using the new backend to measure new vs. legacy tracing system, adding many quality-of-life improvements.

Low level overview

Benchmarking

We've completed various runs and analyses targeting two distinct features of the node: UTxO-HD and Peer2Peer.

With our UTxO-HD benchmark we could clearly pinpoint one spot where this new way of maintaining ledger state is still costly, but at the same time confirm that in basically all other aspects UTxO-HD makes no difference in performance.

The Peer2Peer benchmarks focused on the effects that enabling this feature on a block producing node has on propagation times, as well as scrutinized a proposed change to the Peer2Peer network stack.

Infrastructure

As a result of optimizing in-memory representation of log objects, which are constructed from a node's traces, we can now analyse runs that last longer in total. For runs that exceed their expected duration, analysis now supports a truncation operation that keeps the interdependencies of block events intact.

Truncation might happen at a slightly different point in time - and therefore in its log object stream - for each node in the cluster. An additional step validating the block hash timeline of the cluster has been implemented for the pipeline. It provides early feedback on whether a specific truncation will lead to a valid full analysis, which requires much more time.
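
To illustrate the kind of check involved (types and names here are invented for the sketch, not taken from the workbench), each node's truncated stream can be viewed as a slot-to-hash map, verifying that no two nodes disagree on any slot they both observed:

```haskell
module TimelineSketch where

import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

type Slot      = Word
type BlockHash = String
type Timeline  = Map Slot BlockHash   -- per node: block hash observed at each slot

-- True if no two nodes disagree on the hash of any slot they both observed.
consistentTimelines :: [Timeline] -> Bool
consistentTimelines timelines = all consistent pairs
  where
    indexed = zip [0 :: Int ..] timelines
    pairs   = [ (a, b) | (i, a) <- indexed, (j, b) <- indexed, i < j ]
    consistent (a, b) =
      Map.null (Map.filter id (Map.intersectionWith (/=) a b))
```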

Tracing

Consistency checking of namespace implementation and configuration when using the new system has been completed. This feature provides feedback when tracer implementation details in some component have changed. It's also able to detect when a configuration used for operating a cardano-node is inconsistent with the namespaces the system provides - and hence needs attention.
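
A simplified sketch of the configuration-side check, with invented types; the real implementation in cardano-node is more involved:

```haskell
module NamespaceCheckSketch where

import Data.List (isPrefixOf)
import Data.Text (Text)

-- A namespace is a path of components, e.g. ["ChainDB", "AddBlockEvent"].
type Namespace = [Text]

-- A configuration entry is considered valid if it is a prefix of (or equal
-- to) some namespace the tracer implementation actually registers; anything
-- else points at a stale or misspelled entry that needs attention.
unknownConfigEntries :: [Namespace] -> [Namespace] -> [Namespace]
unknownConfigEntries provided configured =
  [ c | c <- configured, not (any (c `isPrefixOf`) provided) ]
```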

Furthermore, we've created a fine-grained configuration of the new system that caters to benchmarking's need of very many detailed trace messages. It's aimed at mirroring the same amount of trace messages, and information, we're seeing from our usage of the legacy system; an important step in making benchmarks between systems comparable.

Nomad backend

The new backend is currently being used for further validation with regard to the existing cluster. Additionally, we're using it in production mode to comparatively benchmark both tracing systems after merging past month's optimizations - which is the first real-life application of the nomad cluster. Hands-on experience in that phase translates into many small improvements which can be immediately applied to enhance user experience for the new backend.

Michael Karg

High level summary

  • Benchmarking: Benchmarking node version 8.2.1 has concluded. Additionally, we're developing benchmarking setups for GHC 9.6 and UTxO-HD.
  • Infrastructure: Our analysis pipeline has received improvements reducing memory footprint.
  • Tracing: Another batch of optimizations for the new tracing system has been merged; work on namespace consistency guarantees is ongoing.
  • Nomad backend: We're performing and analysing various runs for validation purposes on the new hardware cluster.

Low level overview

Benchmarking

We've performed and analysed the benchmarks for the 8.2.1 version of cardano-node as part of our release benchmarking cycle.

Setting up cluster benchmarks requires completing full system integration. This applies both to supporting a new build platform, such as GHC 9.6, and to targeting a specific feature, like a UTxO-HD enabled node. Currently, we're working on the respective integrations on both of those paths.

Infrastructure

As cluster runs increase in duration, more and more data is accumulated for analysis. Batch analysis mode needs to hold all data in memory, which would no longer fit even on a machine with 64GB of RAM. Changes to the in-memory data representation that improve compactness reduced the RAM requirements of our analysis pipeline.
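
A sketch of the general technique, with invented field names: strict, unpacked fields and compact text types keep each log object small, so many millions of them fit into memory during batch analysis:

```haskell
module CompactLogObjectSketch where

import Data.Text (Text)
import Data.Word (Word64)

-- Strict, unpacked fields avoid per-field thunks and indirections, and Text
-- replaces String, so each log object occupies far less heap space.
data LogObject = LogObject
  { loSlot :: {-# UNPACK #-} !Word64   -- slot number the message refers to
  , loAt   :: {-# UNPACK #-} !Double   -- seconds since the start of the run
  , loKind :: !Text                    -- the trace message's kind/namespace
  , loHost :: !Text                    -- the node the message originated from
  } deriving Show
```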

Tracing

The next portion of optimizations has been completed and merged to master, getting rid of Haskell's native String representation on critical code paths. This concludes the optimization phase of the new tracing system for all its components used by cardano-node.
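
As a simplified illustration of the kind of change made (not the actual trace-dispatcher code), rendering output via Text builders instead of Haskell's native String avoids linked-list character data on hot code paths:

```haskell
module TextRenderSketch where

import           Data.Text              (Text)
import qualified Data.Text.Lazy         as TL
import qualified Data.Text.Lazy.Builder as TB

-- String-based rendering, the kind of code path the optimization removes:
renderString :: String -> Int -> String
renderString name value = "metric " ++ name ++ " = " ++ show value

-- Builder-based rendering: fragments are concatenated cheaply and turned
-- into strict Text once, instead of repeatedly copying cons-cell Strings.
renderText :: Text -> Int -> Text
renderText name value =
  TL.toStrict . TB.toLazyText $
       TB.fromText "metric "
    <> TB.fromText name
    <> TB.fromText " = "
    <> TB.fromString (show value)
```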

The implementation for validation of consistent naming and configuration is ongoing. We're splitting out everything that's verifiable at compile time into a separate test case, which we hope to integrate into CI - leaving only configuration constraints to be verified at or before node startup.

Nomad backend

The verification phase of the nomad cloud backend is ongoing. We're able to perform full runs on the new hardware cluster and are porting profiles and configurations from the legacy one. The goal is to reproduce with confidence known regressions, or improvements, between runs performed on the legacy cluster and runs performed with the new backend.

Michael Karg

High level summary

  • Benchmarking: We've concluded benchmarking node version 8.2.0.
  • Tracing: Optimization of the new tracing system has been merged; we're currently working on self-documenting tracing configuration.
  • Nomad backend: A PR that makes our backend take advantage of added flexibility of the new hardware cluster has been merged.

Low level overview

Benchmarking

As part of our release benchmarking cycle, we've completed and analysed the runs for the 8.2.0 version of cardano-node. In addition to the adjustment of sanity checks in our automation, we had to implement small changes in the analysis pipeline as well to accommodate the new version.

Tracing

A significant amount of optimizations for the new tracing system has finally been merged to master. At the moment, we're working on having a trace message self-document the final tracing configuration of a running node. Apart from adding insight into the system, this feature also aims at making future hot reloading of tracing configuration explicit and straightforward.

Furthermore, we're setting up a final round of system integration level benchmarks comparing new against legacy tracing.

Nomad backend

The new hardware cluster permits greater flexibility as far as SSH access is concerned. By using nomad for consistent and reliable deployment, but taking advantage of direct connections for healthchecks and data transfer, we believe we were able to reduce overall network latency in the nomad cluster. This improves confidence when capturing all network-related measurements during our benchmarks.

A PR that adds these capabilities to our nomad backend - along with very many quality-of-life improvements - has been merged to master.