
· One min read
James Chapman

The team works on applied research and consulting in formal methods that are directly applicable to evidence-based engineering in Core Tech and beyond.

High-level summary

The team is formalising mini-protocols and further developing the performance modelling prototype.

Details

  • Developing a new framework for the specification and verification of mini-protocols that is closer to the Haskell implementation.

  • Developed a new internal representation for the DeltaQ algebra that allows for more modularity in backend implementations (see the sketch after this list).

  • Discussions regarding the Cardano networking specification
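
To make the DeltaQ point concrete: one way to decouple an algebra from its backends is a "final tagless" encoding, where the algebra is a type class and each backend is an instance. The sketch below is purely illustrative (our own names and a deliberately simple backend), not the team's actual internal representation.

```haskell
-- Hypothetical DeltaQ algebra as a type class: each backend is just
-- another instance, so new interpretations can be added without
-- touching the algebra itself.
class DeltaQ dq where
  perfection :: dq                       -- completes instantly, with certainty
  bottom     :: dq                       -- never completes
  delay      :: Double -> dq             -- completes after a fixed delay (seconds)
  (.>>.)     :: dq -> dq -> dq           -- sequential composition
  choice     :: Double -> dq -> dq -> dq -- probabilistic choice (left arm with given weight)

-- One possible backend: track only the probability of eventual completion.
newtype SuccessProb = SuccessProb { successProb :: Double }
  deriving Show

instance DeltaQ SuccessProb where
  perfection = SuccessProb 1
  bottom     = SuccessProb 0
  delay _    = SuccessProb 1
  SuccessProb p .>>. SuccessProb q = SuccessProb (p * q)
  choice w (SuccessProb p) (SuccessProb q) =
    SuccessProb (w * p + (1 - w) * q)
```

For example, `successProb (delay 0.1 .>>. choice 0.9 perfection bottom)` evaluates to 0.9; a timing-distribution backend could be added as a further instance without changing any expression written against the class.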

· 2 min read
Sebastian Nagel

High-level summary

This week, the Hydra team achieved notable progress in various aspects of the project. The team updated the use case section for auctions on the /unstable branch of the website, improving the understanding of Hydra's applicability.

From the development side, the team completed event-sourced persistence, a key enhancement to the project's architecture that improves off-chain transaction processing performance. They also added a submit-transaction endpoint to the API.
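
For readers unfamiliar with the pattern: event-sourced persistence appends every state-changing event to a log and recovers the state by folding over that log, instead of rewriting a full state snapshot on each change. Below is a minimal sketch of the idea; the event and state types are invented for illustration and are not the actual hydra-node types.

```haskell
import Data.List (foldl')

-- Invented event and state types, for illustration only.
data StateChanged
  = TransactionReceived String
  | SnapshotConfirmed Int
  deriving (Show, Read)

data HeadState = HeadState
  { lastSnapshot :: Int
  , pendingTxs   :: [String]
  } deriving Show

initialState :: HeadState
initialState = HeadState 0 []

-- Pure transition function: how a single event changes the state.
apply :: HeadState -> StateChanged -> HeadState
apply s (TransactionReceived tx) = s { pendingTxs = tx : pendingTxs s }
apply s (SnapshotConfirmed n)    = s { lastSnapshot = n, pendingTxs = [] }

-- Persisting is a cheap append to the event log ...
appendEvent :: FilePath -> StateChanged -> IO ()
appendEvent fp ev = appendFile fp (show ev ++ "\n")

-- ... and the state is rebuilt by folding the log from the beginning.
loadState :: FilePath -> IO HeadState
loadState fp = foldl' apply initialState . map read . lines <$> readFile fp
```

Appending a line is much cheaper than serialising the whole state on every transaction, which is where the off-chain performance win comes from.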

In addition to project-related progress, the team actively engaged in community reviews for several Catalyst proposals related to Hydra and Mithril, contributing to the wider Cardano ecosystem.

Finally, the full report for the month of July was also published here.

What did the team achieve this week

  • Published the monthly report for July
  • Updated the use case section for auctions (published on /unstable branch)
  • Completed event-sourced persistence #913
  • Added a submit-transaction endpoint to the API #966
  • Community reviews for several Catalyst proposals related to Hydra and Mithril
  • Created a network testing tool (hydra-net) #1006

What are the goals of next week

  • Update hydra-node to work with cardano-node version 8.x
  • Remove the internal commit functionality
  • Release version 0.12.0
  • Update & streamline tutorial to work with latest version of hydra-node

· 2 min read
Alexey Kuleshevich

High-level summary

The ledger team worked almost exclusively on the Conway era implementation. In particular, the main focus was on solidifying transaction-related types and their binary representation. We also directed some effort towards unblocking the Plutus team with respect to PlutusV3 integration.

Low-level summary

Conway progress

  • pull-3552 - Allow Constitutional Committee Hot Key to be ScriptHash
  • pull-3581 - Make Constitutional Committee Cold Key to be ScriptHash
  • pull-3571 - Implement a portion of the TICKF rule.
  • pull-3556 - Add Script to Constitution
  • pull-3576 - Add optional Anchor to ConwayRegDRep certificate
  • pull-3495 - Implement refund logic for Proposal deposits
  • pull-3579 - Change voting procedure in the transaction to a nested Map
  • pull-3585 - Rename CommitteeCert into a GovCert
  • pull-3587 - Remove DelegStakeTxCert from the COMPLETE pragma for TxCert
  • pull-3586 - Add CurrentTreasuryValue to TxBody
  • pull-3588 - Rename key roles
  • pull-3557 - Update NewCommittee action to use RewardAcnt and add more info
  • pull-3595 - Add ConwayUpdateDRep constructor to ConwayTxCertGov type
  • pull-3600 - Filter out zero TxOuts on Byron/Shelley boundary instead of Babbage/Conway
  • pull-3597 - Update ProposalProcedure return address to be a RewardAcnt

Testing

  • pull-3374 - New features for generation subject to constraints
  • pull-3519 - Basic Conway features test

Plutus integration

  • issue-3538 - A fairly complete specification was created for the PlutusV3 context
  • pull-3593 - Conway TxInfo for PlutusV3 is now compatible with all pre-Conway functionality

Improvements and releasing

  • pull-3574 - Improve clarity and performance of collateral Non-ADA validation
  • pull-3573 - Update top-level CHANGELOG.md with cardano-node relevant changes
  • pull-3555 - Bump pygments from 2.12.0 to 2.15.0 in /doc
  • pull-3575 - Bump certifi from 2022.12.7 to 2023.7.22 in /doc
  • pull-3567 - Backport mint field translation bugfix
  • pull-3568 - Fixed typo in byron ledger spec
  • pull-3572 - Release/backport tickf bugfix

· 2 min read
Marcin Szamotulski

High-level overview of sprint 41

24th July - 6th August 2023

We started the implementation of bootstrap peers. Bootstrap peers are designed to provide a safety guarantee for nodes joining the network, while nodes that are already synced can still take advantage of the distributed network. This is an intermediate step before Genesis, which will allow the system to be distributed further. The bootstrap peers will be run by trusted partners such as CF, Emurgo, or IOG. They are primarily designed for leaf nodes (e.g. full-node wallets), which often need to sync and require access to the honest chain. See ouroboros-network#4615 for a more detailed implementation plan.
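
In rough terms, the intended behaviour looks like the sketch below; this is our own simplification, not the actual outbound-governor code in ouroboros-network.

```haskell
-- Simplified sketch: the real peer-selection logic lives in
-- ouroboros-network's outbound governor and is considerably more involved.
data SyncStatus = Syncing | Synced

-- While still syncing, a node restricts itself to the trusted bootstrap
-- peers; once it has caught up, it returns to the distributed set of
-- ledger peers.
selectUpstreamPeers :: SyncStatus -> [addr] -> [addr] -> [addr]
selectUpstreamPeers Syncing bootstrapPeers _ledgerPeers = bootstrapPeers
selectUpstreamPeers Synced  _bootstrapPeers ledgerPeers = ledgerPeers
```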

Other contributions

We started to use the nothunks library to discover whether we have any unevaluated thunks, which can lead to memory leaks: ouroboros-network#4633. We found a small one in the peer-metrics component of the P2P networking stack. Fixing it took us on a small detour of fixing the API of the strict-checked-vars package (cardano-base#431, cardano-base#432) as well as adding an NFData instance to io-classes. We also improved the nothunks library to make debugging easier, and we provided a NoThunks instance for ThreadId, which we will need in the future (see nothunks#33).
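
For readers who haven't seen it: nothunks checks at runtime that a value contains no unexpected unevaluated thunks, which is how leaks like the one above are found. A minimal usage example follows, with an invented record standing in for the peer-metrics component:

```haskell
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric  #-}

import Control.Exception (evaluate)
import GHC.Generics (Generic)
import NoThunks.Class (NoThunks, noThunks)

-- Invented record standing in for the peer-metrics component.
data PeerMetrics = PeerMetrics
  { peerCount    :: !Int  -- strict field: forced on construction
  , averageScore :: Int   -- lazy field: can retain a thunk
  } deriving (Generic, NoThunks)

main :: IO ()
main = do
  -- Force the record itself, but leave the lazy field unevaluated.
  metrics <- evaluate (PeerMetrics 3 (1 + 2))
  result  <- noThunks [] metrics
  case result of
    Nothing   -> putStrLn "no unexpected thunks"
    Just info -> putStrLn ("unexpected thunk: " ++ show info)
```

Here `noThunks` reports the `(1 + 2)` thunk retained by the lazy `averageScore` field; making the field strict makes the check pass.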

We released a new version of io-classes (version 1.2.0.0) and related packages to Hackage.

We addressed all review comments on the eclipse evasion PR which introduces big ledger peers, ouroboros-network#3886.

We fixed how SIGHUP signal handlers are registered, so that trying to update the network topology of a node that is still starting no longer shuts it down; see cardano-node#5421.

I didn't mention it in the previous update, so here it is: in the previous sprint we released ouroboros-network-0.8.2.0 and ouroboros-network-framework-0.7.0.0.

· 2 min read
Michael Karg

High-level summary

  • Benchmarking: We're adjusting the benchmarking cluster to handle runs for node version 8.2.0.
  • Tracing: We've finished optimization of the new tracing system and added extra robustness with regard to namespacing.
  • Infrastructure: We've been working on making all benchmarking code compliant with the latest GHC 9.6.
  • Nomad backend: The new backend has seen adjustments due to a change of underlying hardware. Additionally, we've successfully performed various benchmarking runs on it.

Low-level overview

Benchmarking

The 8.2.0 version of cardano-node required adjustment of some of the sanity checks that are part of our benchmarking cluster automation. We've pinpointed the necessary changes and are currently setting up the cluster for the new node version.

Tracing

The optimization efforts for the new tracing system have been completed and have significantly reduced the resource footprint when using it as default for a running node.

A linchpin of the new system is the organization of traces into a namespace hierarchy. This affects configuration and self-documentation, as well as the rendering of desired trace messages. The new system can now detect any inconsistency in the whole set of tracers defined across all components, even those never turned on in a running node. This feature adds another layer of robustness to the whole system.
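
As a rough illustration of the kind of check this enables (not the actual trace-dispatcher code): once every tracer is identified by its namespace, the complete set of namespaces can be validated up front, e.g. for duplicates or for one namespace shadowing another.

```haskell
import Data.List (isPrefixOf, sort)

-- A namespace is a path in the trace hierarchy,
-- e.g. ["ChainDB", "AddBlockEvent"].  Illustrative types only.
type Namespace = [String]

-- After sorting, a duplicate or a "shadowing" namespace (a strict
-- prefix of another) is always adjacent to at least one namespace it
-- conflicts with, so a single pass over neighbouring pairs flags
-- every offender.
inconsistencies :: [Namespace] -> [(Namespace, Namespace)]
inconsistencies nss =
  [ (a, b)
  | (a, b) <- zip sorted (drop 1 sorted)
  , a == b || a `isPrefixOf` b
  ]
  where
    sorted = sort nss
```

Because the check runs over the declared namespaces rather than over emitted messages, it covers tracers that are never turned on in a running node.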

Infrastructure

A potential switch to GHC 9.6 (or higher) required some work on our code bases to make them compliant with recent compiler versions. We've future-proofed our benchmarking code.

Nomad backend

The hardware cluster that our Nomad backend was accessing has changed, and we were able to adjust the backend accordingly without touching its higher-level abstractions and functionality. Moreover, with the new hardware and cluster setup, certain tasks, such as retrieving run artifacts or healthcheck monitoring, have become more performant.

The validation phase is ongoing. We were able to perform successful runs and analyses for various 8.x node versions, including 8.2.0-pre. With parallel runs on the current cluster, we hope to reproduce the effects we've observed with the Nomad backend - which will be a big step towards its production use.