
feat(pkg/da): support celestia-app #3095

Draft
julienrbrt wants to merge 12 commits into main from julien/app

Conversation

@julienrbrt
Member

Add direct support to celestia-app alongside celestia-node.

@claude
Contributor

claude bot commented Feb 20, 2026

claude bot finished @julienrbrt's task in 2m 40s


Code Review: feat(pkg/da): support celestia-app

Tasks

  • Gather context and understand the PR
  • Review new pkg/da/app/client.go (celestia-app client)
  • Review refactored pkg/da/node/client.go (celestia-node client)
  • Review pkg/da/types/types.go (BlobClient interface)
  • Review pkg/da/factory/factory.go (DA factory)
  • Review block/internal/da/ changes
  • Review test coverage
  • Summarize findings

Overview

This PR introduces a clean architectural refactoring by extracting a BlobClient interface and adding a new celestia-app client backed by CometBFT RPC. The direction is sound and the code is generally well-structured. However, there are a few critical functional gaps and subtle regressions that should be addressed before merging.


Critical Issues

1. pkg/da/app/client.go:76 — Submit is permanently unimplemented

The Submit method always returns StatusError for any valid input:

return datypes.ResultSubmit{
    BaseResult: datypes.BaseResult{
        Code:    datypes.StatusError,
        Message: "Submit not implemented: requires transaction signing...",
    },
}

This means celestia-app is a read-only DA client. Nodes configured as aggregators (IsAggregator: true) are always redirected to the node client by the factory, but non-aggregator nodes can also call Submit indirectly (e.g., via forced inclusion). Any node that somehow ends up using the app client for submission will fail silently at the interface level.

The PR title says "support celestia-app" but submission — the core operation of a DA layer for a sequencer — is missing. This should either be implemented (requires signer integration) or the factory should hard-fail if an aggregator tries to force ClientTypeApp. Fix this →
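If submission stays out of scope for this PR, the factory hard-fail is easy to add. A minimal sketch, using illustrative names (`ClientType`, `validateClientType` are not the PR's actual identifiers, just the shape of the check):

```go
package main

import (
	"errors"
	"fmt"
)

// ClientType mirrors the factory's client-type enum; the names here are
// assumptions for illustration, not the PR's actual identifiers.
type ClientType string

const (
	ClientTypeAuto ClientType = "auto"
	ClientTypeNode ClientType = "node"
	ClientTypeApp  ClientType = "app"
)

var errAppCannotSubmit = errors.New("celestia-app client is read-only: aggregators must use the node client")

// validateClientType rejects the one configuration this issue flags:
// an aggregator forced onto the submission-less app client.
func validateClientType(forced ClientType, isAggregator bool) error {
	if isAggregator && forced == ClientTypeApp {
		return errAppCannotSubmit
	}
	return nil
}

func main() {
	fmt.Println(validateClientType(ClientTypeApp, true))  // error
	fmt.Println(validateClientType(ClientTypeNode, true)) // nil
}
```

Running this check once at factory construction turns a silent runtime failure into a configuration error at startup.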

2. pkg/da/node/client.go:62-65 — gasPrice and options are silently dropped

var opts SubmitOptions
if len(options) > 0 {
    // Options would be parsed here if needed
}

The gasPrice parameter is also discarded. SubmitOptions has a GasPrice field and IsGasPriceSet bool that were clearly designed to carry this value. Callers setting a gas price will get unexpected behavior with no error.

Fix this →
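The fix is small. A sketch of carrying the caller's gas price into the options, with a minimal stand-in for `SubmitOptions` (field names follow the description above but are otherwise assumptions):

```go
package main

import "fmt"

// SubmitOptions is a minimal stand-in for the node client's options struct,
// with the GasPrice/IsGasPriceSet fields the review describes.
type SubmitOptions struct {
	GasPrice      float64
	IsGasPriceSet bool
}

// applyGasPrice carries the caller's gas price into the options instead of
// discarding it, as issue 2 suggests.
func applyGasPrice(opts SubmitOptions, gasPrice float64) SubmitOptions {
	if gasPrice > 0 {
		opts.GasPrice = gasPrice
		opts.IsGasPriceSet = true
	}
	return opts
}

func main() {
	fmt.Printf("%+v\n", applyGasPrice(SubmitOptions{}, 0.002))
}
```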


High Priority Issues

3. pkg/da/node/client.go:70-75 — Lost detailed error mapping from the old client

The previous jsonrpc/client.go mapped RPC errors to specific status codes (StatusTooBig, StatusAlreadyInMempool, StatusIncorrectAccountSequence, etc.). The new implementation returns a single generic StatusError for all Blob.Submit failures. This degrades the node's ability to react to specific DA conditions (e.g., correctly backing off vs. retrying on StatusAlreadyInMempool).

4. pkg/da/factory/factory.go:106-124 — Auto-detection silently falls back to node

// Default to node client if detection fails
return ClientTypeNode

If the address is a valid celestia-app endpoint that's temporarily unreachable during startup, the factory silently falls back to creating a node client, which will then fail differently. This can make misconfiguration very hard to debug. Consider logging a warning, or when ForceType == ClientTypeAuto, returning an error on detection failure rather than a misleading default. Fix this →
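A sketch of the error-returning variant, with the endpoint probes abstracted as functions (stand-ins for the factory's `isNodeEndpoint`/`isAppEndpoint` checks):

```go
package main

import (
	"errors"
	"fmt"
)

// ClientType mirrors the factory's enum; names are illustrative.
type ClientType string

const (
	ClientTypeNode ClientType = "node"
	ClientTypeApp  ClientType = "app"
)

// detectClientType returns an explicit error when neither probe succeeds,
// instead of silently defaulting to the node client.
func detectClientType(isNode, isApp func() bool) (ClientType, error) {
	switch {
	case isNode():
		return ClientTypeNode, nil
	case isApp():
		return ClientTypeApp, nil
	default:
		return "", errors.New("could not detect DA endpoint type; set the client type explicitly or check that the endpoint is reachable")
	}
}

func main() {
	ct, err := detectClientType(func() bool { return false }, func() bool { return true })
	fmt.Println(ct, err)
}
```

The error message then points the operator at the real problem (unreachable endpoint) instead of a node-client failure further downstream.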

5. pkg/da/factory/factory.go:259-264 — Timeout not propagated to app client

func createAppClient(cfg Config) datypes.BlobClient {
    return daapp.NewClient(daapp.Config{
        RPCAddress:     cfg.Address,
        Logger:         cfg.Logger,
        DefaultTimeout: 0, // Use default  ← intentionally drops the user's setting
    })
}

The configured timeout in factory.Config is never passed to the app client. This should forward cfg's timeout if one exists, or accept a DefaultTimeout field on the factory Config.


Medium Issues

6. pkg/da/app/client.go:160,173 — Fragile string-based error detection

if strings.Contains(err.Error(), "height") && strings.Contains(err.Error(), "future") { ... }
if strings.Contains(err.Error(), "is not available, lowest height is") { ... }

CometBFT error messages are not part of its stable API. These patterns could break silently on CometBFT version upgrades, causing StatusHeightFromFuture errors to be returned as generic StatusError (or vice versa). Consider matching on rpcError.Code values (CometBFT uses numeric error codes) rather than message text.

7. pkg/da/app/client.go:209-213 — Base64 fallback logic is incorrect

txBytes, err := base64.StdEncoding.DecodeString(tx)
if err != nil {
    // Try raw bytes if not base64
    txBytes = []byte(tx)
}

When tx is a string containing non-base64 data, []byte(tx) simply converts the UTF-8 string to bytes — it doesn't decode any binary format. This silently masks base64 decode errors and processes garbled data. If CometBFT always returns base64-encoded transactions, the fallback should be removed and the error should be logged.
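Assuming CometBFT does return base64 here, the strict version is shorter than the fallback. A sketch:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeTx removes the masking fallback: a failed base64 decode is
// surfaced as an error instead of passing the raw string bytes through.
func decodeTx(tx string) ([]byte, error) {
	b, err := base64.StdEncoding.DecodeString(tx)
	if err != nil {
		return nil, fmt.Errorf("transaction is not valid base64: %w", err)
	}
	return b, nil
}

func main() {
	b, err := decodeTx("aGVsbG8=")
	fmt.Println(string(b), err) // hello <nil>
	_, err = decodeTx("not base64!!")
	fmt.Println(err != nil) // true
}
```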

8. pkg/da/node/client.go:155-158 — Silent nil commitment in Get

height, commitment := SplitID(id)
if commitment == nil {
    continue  // silently skips
}

The local SplitID (in blob.go) returns nil commitment for malformed IDs, and the loop skips them silently. This could cause Get to return fewer results than IDs requested with no indication. The datypes.SplitID counterpart returns an explicit error; consider using that or logging the skip.


Low Priority / Nitpicks

9. pkg/da/app/client.go:57-61 — NewClient returns nil for empty address

if cfg.RPCAddress == "" {
    return nil
}

Returning nil instead of an error makes it harder for callers to distinguish "intentionally no client" from a bug. The caller at factory.go:260 doesn't check for nil, so a blank address would propagate as a nil BlobClient and panic at first use.
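The conventional fix is a `(*Client, error)` constructor. A sketch with minimal stand-in types (`Config`/`Client` here are illustrative, not the PR's full definitions):

```go
package main

import (
	"errors"
	"fmt"
)

// Config and Client are minimal stand-ins for the app client's types.
type Config struct{ RPCAddress string }
type Client struct{ cfg Config }

// NewClient returns an explicit error for a blank address, so the factory
// cannot silently propagate a nil client that panics on first use.
func NewClient(cfg Config) (*Client, error) {
	if cfg.RPCAddress == "" {
		return nil, errors.New("celestia-app client: RPCAddress must not be empty")
	}
	return &Client{cfg: cfg}, nil
}

func main() {
	_, err := NewClient(Config{})
	fmt.Println(err)
	c, err := NewClient(Config{RPCAddress: "http://localhost:26657"})
	fmt.Println(c != nil, err)
}
```

With this signature, the call site at factory.go:260 is forced by the compiler to handle the failure.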

10. pkg/da/app/client.go:195-200 — Shadowed err variable

blockTime := time.Now()
if blockResult.Block.Header.Time != "" {
    if t, err := time.Parse(time.RFC3339Nano, blockResult.Block.Header.Time); err == nil {

The inner err shadows the outer one. This is syntactically valid but can be confusing. Using a distinct name like parseErr would be clearer.

11. pkg/da/factory/factory.go:127-185 — Double HTTP client per detection

Both isNodeEndpoint and isAppEndpoint create their own http.Client with a 5s timeout and add a context.WithTimeout. The HTTP client-level timeout is redundant when the context already has a deadline. Minor, but consider using a shared client or just relying on the context.

12. pkg/da/node/client_test.go — Limited high-level Submit/Retrieve test coverage

The test in client_test.go verifies RPC forwarding at the BlobAPI level but doesn't test the Submit/Retrieve/GetProofs/Validate methods of the Client struct (the BlobClient implementation). Edge cases like error status code translation are not covered.


What's Well Done

  • The BlobClient interface is clean and well-scoped — it correctly separates blob operations from namespace wiring.
  • Compile-time interface assertions (var _ datypes.BlobClient = (*Client)(nil)) in both clients.
  • The factory auto-detection pattern is a nice UX improvement; users don't need to know which client to configure.
  • The celestia-app client is correct to return explicit errors for GetProofs/Validate rather than silently returning empty results.
  • Context propagation is consistent and correct throughout.
  • The tracing layer (block/internal/da/tracing.go) is properly extended for GetLatestDAHeight.
  • pkg/da/app/client_test.go has solid coverage for the new code paths.
  • The RetrieveForcedInclusion wrapper in block/internal/da/client.go cleanly handles the "namespace not configured" case.

@github-actions
Contributor

github-actions bot commented Feb 20, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build      Format      Lint       Breaking   Updated (UTC)
✅ passed   ⏩ skipped   ✅ passed   ✅ passed   Feb 20, 2026, 11:19 PM

@gemini-code-assist
Contributor

Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Data Availability (DA) layer by introducing a new client for direct communication with celestia-app via CometBFT RPC, alongside the existing celestia-node client. This change is underpinned by a new BlobClient interface, which abstracts DA operations and allows for flexible switching between different DA implementations. The existing jsonrpc client was renamed to node for clarity, and the core DA client logic was generalized to work with any BlobClient implementation. This improves modularity, simplifies future DA integrations, and ensures the system is more adaptable to evolving DA layer technologies.

Highlights

  • New Celestia-App DA Client: Introduced a new Data Availability (DA) client (pkg/da/app) that communicates directly with celestia-app via CometBFT RPC, providing an alternative to the celestia-node client.
  • DA Client Abstraction: Refactored the DA layer interaction by introducing a BlobClient interface (pkg/da/types/types.go), enabling a pluggable architecture for different DA implementations.
  • Celestia-Node Client Package Renamed: The existing celestia-node JSON-RPC client package pkg/da/jsonrpc was renamed to pkg/da/node for improved clarity and consistency.
  • Extended DA Client Interface: The core DA client interface (block/internal/da/interface.go) was extended to include a GetLatestDAHeight method, which is now implemented across various DA client types.


Changelog
  • .mockery.yaml
    • Updated mock generation paths for the renamed pkg/da/node package.
  • apps/evm/cmd/post_tx_cmd.go
    • Updated import path and client instantiation for the renamed pkg/da/node package.
  • apps/evm/cmd/run.go
    • Updated import path and client instantiation for the renamed pkg/da/node package.
  • apps/evm/server/force_inclusion_test.go
    • Added GetLatestDAHeight method to the mockDA struct.
  • apps/grpc/cmd/run.go
    • Updated import path and client instantiation for the renamed pkg/da/node package.
  • apps/testapp/cmd/run.go
    • Updated import path and client instantiation for the renamed pkg/da/node package.
  • block/internal/da/client.go
    • Refactored to use the new datypes.BlobClient interface and delegate DA operations.
    • Removed direct jsonrpc client dependencies and DefaultTimeout configuration.
  • block/internal/da/client_test.go
    • Removed the test file as its functionality is now covered by the generalized DA client.
  • block/internal/da/interface.go
    • Added GetLatestDAHeight to the Client interface.
  • block/internal/da/tracing.go
    • Implemented GetLatestDAHeight for the tracedClient with OpenTelemetry tracing.
  • block/internal/da/tracing_test.go
    • Added GetLatestDAHeight to the mockFullClient for testing.
  • block/public.go
    • Updated NewDAClient to accept the new datypes.BlobClient interface instead of the specific blobrpc.Client.
  • pkg/cmd/run_node.go
    • Updated import path and client instantiation for the renamed pkg/da/node package.
    • Adjusted NewDAClient call to use the new BlobClient interface.
  • pkg/da/app/client.go
    • Added a new Client implementation for direct communication with celestia-app via CometBFT RPC.
    • Implemented Submit (with a note about missing signing), Retrieve, Get, and GetLatestDAHeight methods for celestia-app.
  • pkg/da/jsonrpc/README.md
    • Renamed to pkg/da/node/README.md.
  • pkg/da/jsonrpc/blob.go
    • Renamed to pkg/da/node/blob.go and updated package declaration.
  • pkg/da/jsonrpc/blob_test.go
    • Renamed to pkg/da/node/blob_test.go and updated package declaration and imports.
  • pkg/da/jsonrpc/client.go
    • Removed the jsonrpc client implementation, replaced by pkg/da/node/client.go.
  • pkg/da/jsonrpc/client_test.go
    • Renamed to pkg/da/node/client_test.go and updated package declaration and imports.
  • pkg/da/jsonrpc/header.go
    • Renamed to pkg/da/node/header.go and updated package declaration.
  • pkg/da/jsonrpc/mocks/blob_module_mock.go
    • Renamed to pkg/da/node/mocks/blob_module_mock.go and updated package declaration and imports.
  • pkg/da/jsonrpc/mocks/header_module_mock.go
    • Renamed to pkg/da/node/mocks/header_module_mock.go and updated package declaration and imports.
  • pkg/da/jsonrpc/module.go
    • Renamed to pkg/da/node/module.go and updated package declaration.
  • pkg/da/jsonrpc/submit_options.go
    • Renamed to pkg/da/node/submit_options.go and updated package declaration.
  • pkg/da/jsonrpc/types.go
    • Renamed to pkg/da/node/types.go and updated package declaration.
  • pkg/da/node/client.go
    • Added the celestia-node client implementation, previously pkg/da/jsonrpc/client.go, now implementing the datypes.BlobClient interface.
  • pkg/da/types/types.go
    • Defined the new BlobClient interface to standardize DA layer operations.
  • pkg/rpc/server/da_visualization.go
    • Updated import path for the renamed pkg/da/node package.
  • test/e2e/da_client_integration_test.go
    • Updated imports and client instantiation to use the pkg/da/node package.
  • test/e2e/da_posting_integration_test.go
    • Updated imports to use the pkg/da/node package.
  • test/e2e/evm_force_inclusion_e2e_test.go
    • Updated imports and client instantiation to use the pkg/da/node package.
  • test/e2e/failover_e2e_test.go
    • Updated imports and client instantiation to use the pkg/da/node package.
  • test/mocks/da.go
    • Added GetLatestDAHeight to the MockClient interface.
  • test/testda/dummy.go
    • Implemented GetLatestDAHeight for the DummyDA client.
  • tools/da-debug/main.go
    • Updated imports and client instantiation to use the pkg/da/node package.
  • tools/da-debug/main_test.go
    • Updated imports and SplitID call to use the pkg/da/node package.
  • tools/local-da/local.go
    • Updated imports and blobData type to use the pkg/da/node package.
  • tools/local-da/rpc.go
    • Updated imports to use the pkg/da/node package.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for celestia-app as a Data Availability (DA) layer alongside the existing celestia-node support. It refactors the DA client logic to use a unified BlobClient interface, allowing for different underlying implementations. While the architectural direction is sound, the current implementation contains several critical regressions in the celestia-node (formerly jsonrpc) client, specifically regarding error mapping, result metadata, and placeholder implementations for proof verification. Additionally, the celestia-app client is missing blob submission functionality, which is a significant feature gap given the PR's objective.

Comment on lines 192 to 208
proofs := make([]datypes.Proof, len(ids))
for i, id := range ids {
    height, commitment := SplitID(id)
    if commitment == nil {
        return nil, nil
    }

    proof, err := c.Blob.GetProof(ctx, height, ns, commitment)
    if err != nil {
        return nil, err
    }

    // Serialize proof - for now just use the proof as-is
    // In a real implementation, you'd marshal the proof properly
    proofs[i] = []byte{} // Placeholder
    _ = proof
}

critical

The GetProofs method is currently a placeholder that returns empty byte slices for all proofs. This is a regression from the previous implementation which correctly fetched proofs from the blobAPI. This will break any logic relying on DA proof verification.

Comment on lines 214 to 248
func (c *Client) Validate(ctx context.Context, ids []datypes.ID, proofs []datypes.Proof, namespace []byte) ([]bool, error) {
    if len(ids) != len(proofs) {
        return nil, nil
    }

    if len(ids) == 0 {
        return []bool{}, nil
    }

    ns, err := libshare.NewNamespaceFromBytes(namespace)
    if err != nil {
        return nil, err
    }

    results := make([]bool, len(ids))
    for i, id := range ids {
        height, commitment := SplitID(id)
        if commitment == nil {
            continue
        }

        // Deserialize proof - placeholder
        var proof *Proof
        _ = proofs[i] // Would unmarshal here

        included, err := c.Blob.Included(ctx, height, ns, proof, commitment)
        if err != nil {
            results[i] = false
        } else {
            results[i] = included
        }
    }

    return results, nil
}

critical

The Validate method is currently a placeholder that passes a nil proof to the Included check. This is a regression and will cause validation to fail or behave incorrectly. The previous implementation correctly unmarshaled the proof before validation.

Comment on lines +67 to +75
height, err := c.Blob.Submit(ctx, blobs, &opts)
if err != nil {
    return datypes.ResultSubmit{
        BaseResult: datypes.BaseResult{
            Code:    datypes.StatusError,
            Message: "failed to submit blobs: " + err.Error(),
        },
    }
}

high

The new implementation of Submit in the node client has lost the detailed error mapping that existed in the previous version (e.g., mapping to StatusTooBig, StatusAlreadyInMempool, etc.). It now returns a generic StatusError for all failures, which reduces the ability of the node to react appropriately to specific DA layer conditions.

Comment on lines 215 to 217
if len(ids) != len(proofs) {
    return nil, nil
}

high

If the number of IDs and proofs do not match, the function returns nil, nil. This violates the interface expectation and may lead to nil pointer dereferences in callers. It should return an explicit error.

Suggested change:

if len(ids) != len(proofs) {
    return nil, nil
}

if len(ids) != len(proofs) {
    return nil, fmt.Errorf("mismatched IDs and proofs length: %d != %d", len(ids), len(proofs))
}

var _ datypes.BlobClient = (*Client)(nil)

// Submit submits blobs to the DA layer via celestia-node.
func (c *Client) Submit(ctx context.Context, data [][]byte, gasPrice float64, namespace []byte, options []byte) datypes.ResultSubmit {

medium

The gasPrice and options parameters are currently ignored in the Submit implementation. This prevents callers from specifying transaction priority or other submission-specific configurations.

Comment on lines +76 to +138
func (c *Client) Submit(ctx context.Context, data [][]byte, _ float64, namespace []byte, options []byte) datypes.ResultSubmit {
    // Calculate blob size
    var blobSize uint64
    for _, b := range data {
        blobSize += uint64(len(b))
    }

    // Validate namespace
    ns, err := share.NewNamespaceFromBytes(namespace)
    if err != nil {
        return datypes.ResultSubmit{
            BaseResult: datypes.BaseResult{
                Code:    datypes.StatusError,
                Message: fmt.Sprintf("invalid namespace: %v", err),
            },
        }
    }

    // Check blob sizes
    for i, raw := range data {
        if uint64(len(raw)) > defaultMaxBlobSize {
            return datypes.ResultSubmit{
                BaseResult: datypes.BaseResult{
                    Code:    datypes.StatusTooBig,
                    Message: datypes.ErrBlobSizeOverLimit.Error(),
                },
            }
        }
        // Validate blob data
        if len(raw) == 0 {
            return datypes.ResultSubmit{
                BaseResult: datypes.BaseResult{
                    Code:    datypes.StatusError,
                    Message: fmt.Sprintf("blob %d is empty", i),
                },
            }
        }
    }

    // TODO: Implement actual blob submission
    // This requires:
    // 1. Creating a MsgPayForBlobs transaction
    // 2. Signing the transaction
    // 3. Broadcasting via /broadcast_tx_commit or /broadcast_tx_sync
    //
    // For now, return an error indicating this needs to be implemented
    // with proper transaction signing infrastructure.
    c.logger.Error().
        Int("blob_count", len(data)).
        Str("namespace", ns.String()).
        Msg("Submit not implemented - requires transaction signing infrastructure")

    return datypes.ResultSubmit{
        BaseResult: datypes.BaseResult{
            Code:           datypes.StatusError,
            Message:        "Submit not implemented: requires transaction signing. Use celestia-node client for submission or implement signer integration",
            SubmittedCount: 0,
            Height:         0,
            Timestamp:      time.Now(),
            BlobSize:       blobSize,
        },
    }
}

medium

The Submit method for celestia-app is not implemented and always returns an error. While the comment acknowledges this, it means the 'support' for celestia-app is currently read-only, which might not meet the expectations for this feature.

Comment on lines +159 to +168
if strings.Contains(err.Error(), "height") && strings.Contains(err.Error(), "future") {
    return datypes.ResultRetrieve{
        BaseResult: datypes.BaseResult{
            Code:      datypes.StatusHeightFromFuture,
            Message:   datypes.ErrHeightFromFuture.Error(),
            Height:    height,
            Timestamp: time.Now(),
        },
    }
}

medium

Error handling in Retrieve relies on string matching for 'height' and 'future'. This is fragile as CometBFT RPC error messages may change across versions. Consider using a more robust way to detect these specific error conditions if possible.

Comment on lines +293 to +333
func (c *Client) Get(ctx context.Context, ids []datypes.ID, namespace []byte) ([]datypes.Blob, error) {
    if len(ids) == 0 {
        return nil, nil
    }

    ns, err := share.NewNamespaceFromBytes(namespace)
    if err != nil {
        return nil, fmt.Errorf("invalid namespace: %w", err)
    }

    // Group IDs by height for efficient fetching
    blobsByHeight := make(map[uint64][]datypes.ID)
    for _, id := range ids {
        height, _, err := datypes.SplitID(id)
        if err != nil {
            return nil, fmt.Errorf("invalid blob id: %w", err)
        }
        blobsByHeight[height] = append(blobsByHeight[height], id)
    }

    var result []datypes.Blob
    for height, heightIDs := range blobsByHeight {
        // Fetch block at height
        retrieveResult := c.Retrieve(ctx, height, ns.Bytes())
        if retrieveResult.Code != datypes.StatusSuccess {
            continue
        }

        // Match retrieved blobs with requested IDs
        for i, blobID := range retrieveResult.IDs {
            for _, requestedID := range heightIDs {
                if bytes.Equal(blobID, requestedID) && i < len(retrieveResult.Data) {
                    result = append(result, retrieveResult.Data[i])
                    break
                }
            }
        }
    }

    return result, nil
}

medium

The Get implementation for celestia-app is inefficient as it calls Retrieve (which fetches the entire block) for every unique height in the requested IDs. While it groups by height, it still performs a full block fetch per height, which can be very heavy for large blocks.

@codecov

codecov bot commented Feb 20, 2026

Codecov Report

❌ Patch coverage is 45.69640% with 347 lines in your changes missing coverage. Please review.
✅ Project coverage is 60.45%. Comparing base (ce18484) to head (af36ce9).

Files with missing lines Patch % Lines
pkg/da/node/client.go 14.11% 143 Missing and 3 partials ⚠️
pkg/da/app/client.go 69.56% 62 Missing and 8 partials ⚠️
pkg/da/node/mocks/blob_module_mock.go 0.00% 63 Missing ⚠️
pkg/da/factory/factory.go 81.45% 13 Missing and 10 partials ⚠️
pkg/da/node/mocks/header_module_mock.go 0.00% 21 Missing ⚠️
block/internal/da/client.go 0.00% 12 Missing ⚠️
block/internal/da/tracing.go 9.09% 10 Missing ⚠️
block/public.go 0.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3095      +/-   ##
==========================================
- Coverage   60.92%   60.45%   -0.48%     
==========================================
  Files         113      115       +2     
  Lines       11617    11901     +284     
==========================================
+ Hits         7078     7195     +117     
- Misses       3741     3913     +172     
+ Partials      798      793       -5     
Flag Coverage Δ
combined 60.45% <45.69%> (-0.48%) ⬇️

