
[DO NOT MERGE] ai moonshot #3079

Draft
alpe wants to merge 7 commits into main from alex/ai_moonshot

Conversation

@alpe
Contributor

@alpe alpe commented Feb 17, 2026

Optimize block production throughput

Reduces per-block overhead by eliminating redundant RPCs, store I/O, and blocking operations on the sequencer hot path.

Changes

Eliminated RPCs

  • Cache previous block info (prevBlockInfo) → skip eth_getBlockByNumber RPC
  • Remove reconcileExecutionAtHeight on sequencer (always fails — block doesn't exist yet) → skip 1 store read + 1 eth RPC
  • Defer setFinalWithHeight ForkchoiceUpdated → skip 1 engine RPC per block (state carried by next block's FCU)

Eliminated store I/O

  • Cache last block header hash, data hash, and signature (lastBlockCacheEntry) → skip GetBlockData + GetSignature reads in CreateBlock
  • Skip savePendingBlock/deletePendingBlock on non-raft nodes (raft crash recovery only)
  • Make saveExecMeta async (best-effort, non-blocking)

Eliminated CPU / blocking work

  • Skip ValidateBlock on sequencer — self-produced blocks don't need self-validation
  • Drop SHA-256 hash computation in saveExecMeta (unused for decisions)
  • Cache pubKey + validatorHash after first computation (immutable)
  • Fire-and-forget P2P broadcast instead of blocking errgroup.Wait()
  • Pre-compute static values (zeroHashHex, emptyWithdrawals, emptyBlobHashes, emptyExecReqs)

Impact

Metric               Before   After
Engine RPCs          4        3
Eth RPCs             2        0
Sync store ops       5        1 (batch commit)
Per-block allocs     5+       0
Blocking broadcast   yes      fire-and-forget

@github-actions
Contributor

github-actions bot commented Feb 17, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed · Format: ⏩ skipped · Lint: ✅ passed · Breaking: ✅ passed · Updated (UTC): Feb 18, 2026, 9:09 AM

@claude
Contributor

claude bot commented Feb 17, 2026

Claude finished @alpe's task in 2m 3s


Code Review: Block Production Throughput Optimization

  • Gather context and understand request
  • Read changed files
  • Analyze block executor changes
  • Analyze EVM execution changes
  • Review benchmark and test changes
  • Post comprehensive review

Summary

This PR targets the sequencer hot path with a set of well-motivated optimizations. The intent is solid and several changes are straightforwardly correct. However, there are a few issues ranging from data-race bugs to correctness concerns that need to be addressed before merging.


Critical Issues

1. Data race on signerInfoCached / cachedPubKey / cachedValidatorHash (executor.go:687-713)

CreateBlock reads and writes e.signerInfoCached, e.cachedPubKey, and e.cachedValidatorHash without holding any lock. While the sequencer currently calls CreateBlock from a single goroutine, this is not enforced by the type system. More importantly, ProduceBlock holds e.lastBlockMu in a separate scope — if a future change ever calls CreateBlock concurrently (e.g., from a tracing wrapper), this is a silent race.

Either protect these three fields with lastBlockMu (they're already in the same struct), or document clearly that CreateBlock must only be called from the execution loop goroutine.

// Current (racy if called concurrently):
if e.signerInfoCached {
    pubKey = e.cachedPubKey
    ...
}
...
e.cachedPubKey = pubKey
e.signerInfoCached = true

An atomic flag (sync/atomic.Bool) for signerInfoCached with pointer-based fields, or a simple sync.Once, would be cleaner and race-safe.

2. saveExecMeta goroutine uses a potentially-cancelled context (execution.go:470, execution.go:1031)

Both goroutine launches pass the parent context:

go c.saveExecMeta(ctx, blockHeight, ...)  // line 470
go c.saveExecMeta(ctx, blockHeight, ...)  // line 1031

If the caller's context is request-scoped and gets cancelled shortly after ExecuteTxs returns (which is expected in production), the store write will fail silently. Since the docstring says "best-effort, non-blocking", the intent is that it should complete even after the caller's context is done. Use context.WithoutCancel(ctx) (Go 1.21+) or context.Background() to detach the lifecycle:

go c.saveExecMeta(context.Background(), blockHeight, ...)

Gemini also flagged this — it's a genuine correctness issue.

3. Deferred setFinalWithHeight may violate EL safety guarantees (execution.go:1013-1018)

// 3. Update forkchoice state (deferred — no RPC).
c.updateForkchoiceState(blockHash, blockHeight)

updateForkchoiceState only advances currentSafeBlockHash and currentFinalizedBlockHash if the corresponding heights are already in blockHashCache. But cacheBlockHash is called for the current block just above (line 1017), so at heights ≤ FinalizedBlockLag the finalized hash won't advance — that's fine. However, skipping the engine RPC entirely means the EL never receives a ForkchoiceUpdated message after newPayload. Until the next ExecuteTxs fires, the EL's canonical head hasn't been acknowledged by a FCU call, which is required per the Engine API spec for the block to be canonical.

This deferral relies on the assumption that the next ExecuteTxs will fire quickly and will use the updated currentHeadBlockHash. During normal block production that holds, but on crash/restart the state is lost and the EL may be in an undefined forkchoice state. If the node crashes between newPayload (line 988) and the next ForkchoiceUpdated, the block was submitted but never made canonical in the EL. Crash recovery via reconcileExecutionAtHeight was removed (see below), so this path is now unrecoverable without the FCU.


High-Priority Issues

4. Removal of reconcileExecutionAtHeight breaks crash recovery for the common case (execution.go)

reconcileExecutionAtHeight provided idempotency for the "already-promoted" case (block already exists in EL) and the "in-progress" case (payloadID exists from a previous run). These are the two most important crash-recovery paths.

The PR removes the call entirely in ExecuteTxs with the comment "always fails — block doesn't exist yet". That's true for new blocks on the sequencer — but it's not true after a crash where the EL already has the block from the previous run.

Without this call:

  • A block that was fully promoted before crash will be re-executed on restart, potentially creating a sibling block in the EL.
  • The ExecStageStarted payloadID recovery path is unreachable.

The method reconcileExecutionAtHeight still exists and is tested — it just isn't called from ExecuteTxs anymore. If removing this call is intentional, the crash recovery path needs to be re-plumbed elsewhere.

5. Async saveExecMeta for ExecStageStarted provides no crash-recovery value (execution.go:469-470)

// Save ExecMeta with payloadID for crash recovery (Stage="started") — async.
go c.saveExecMeta(ctx, blockHeight, ...)

The ExecStageStarted meta is only useful if we can read it back during crash recovery (via reconcileExecutionAtHeight). Since that call is removed (issue #4), the write is now dead code from a recovery perspective. If crash recovery is intentionally deferred to a future PR, this goroutine wastes a store write on every block.

6. P2P fire-and-forget broadcast may broadcast blocks out of order (executor.go:589-600)

julienrbrt's own edit to their review comment notes this:

"maybe it isn't preferred as we could broadcast out of order, and this will fail go-header verification."

go-header verifies that headers arrive in monotonically increasing order. If ProduceBlock produces block N+1 while the goroutine for block N's broadcast is still running, block N+1's header could arrive at peers before block N's header. This would cause validators to reject block N+1 as a height gap violation.

The errgroup.WithContext(e.ctx) context sharing is fine, but removing the Wait() makes the ordering non-deterministic. The original synchronous broadcast was there for a reason.

7. julienrbrt review comment: savePendingBlock skip is wrong for non-raft (executor.go:488-494)

From julienrbrt's review: "This is wrong, we do want to persist it every time." The PR skips savePendingBlock for non-raft nodes as an optimization, but the pending block serves as crash recovery for the executor itself — not just raft. If the node crashes after CreateBlock but before ApplyBlock + batch.Commit(), without a pending block the node will re-create the block with potentially different transactions on restart (depending on sequencer state), creating divergence.


Medium-Priority Issues

8. lastBlockCacheEntry skips signer info consistency check

The review comment from julienrbrt at executor.go:34 notes that the lastBlockCacheEntry is redundant because store caching is already done at cached_store.go. The cache adds complexity without addressing the actual bottleneck if store reads are already O(1) in-memory.

More importantly, the cache stores the signature from block N to use as header.Signature in block N+1's header (the "last signature" field). This is architecturally fine, but if the node is restarted between producing block N and block N+1, the cache is cold and falls back to the store — which is correct. However, if another node somehow produces block N (e.g., raft failover), the cache would have the wrong signature. The raft path already clears this via restart, so this is likely fine in practice.

9. blockHashCache unbounded growth for restarted non-sequential access

cacheBlockHash prunes to 10 entries using minHeight = height - 9. But if blockHeight is 0 (genesis) or the cache is accessed non-sequentially during recovery, minHeight can be 0 and old entries won't be pruned:

var minHeight uint64
if height >= maxCacheSize-1 {
    minHeight = height - (maxCacheSize - 1)
}

This is a minor issue since the cache is bounded to 10 in practice, but the branch on height >= maxCacheSize-1 silently does nothing before the first 9 blocks.

10. PayloadTransactions.MarshalJSON has a reallocation path for large transactions (execution.go:1212-1216)

if cap(buf) < start+n {
    newBuf := make([]byte, len(buf), start+n*2)
    ...
}

The pre-calculation at the top of the function should prevent this branch from ever triggering, but the cap(buf) < start+n branch silently reallocates instead of panicking with a clear bug indicator. A comment explaining why the branch is unreachable (or an assertion) would improve maintainability.

11. emptyBlobHashes type mismatch risk

emptyBlobHashes = []string{}

This is used as blobHashes []string in NewPayload. Sharing a single package-level slice across calls is safe as long as the callee doesn't mutate it (and NewPayload shouldn't). Note, however, that swapping in nil is not a drop-in replacement: encoding/json marshals a nil slice as null rather than [], so nil is only safe if the Engine API accepts null blob hashes. Otherwise, keep the shared empty slice and document that it must never be mutated.


Low-Priority / Style Issues

12. ValidateBlock skip is documented only by an inline comment

// ValidateBlock is only needed for blocks we didn't produce (syncer path).
// On the sequencer, we just built this block — skip self-validation.

This is a meaningful behavioral change worth a prominent comment, but the ValidateBlock method still exists on the Executor. Consider adding a //nolint:unused annotation or making the skip conditional on a config flag so it remains testable.

13. Benchmark OTLP collector doesn't handle graceful shutdown races

go func() { _ = c.server.Serve(listener) }()

If close() is called during an active request (e.g., from defer collector.close() in the benchmark), Serve returns but in-flight span uploads may be dropped. The time.Sleep(2 * time.Second) before StopTimer mitigates this but is fragile. A proper drain (e.g., WaitGroup on active requests) would be more reliable.

14. zeroHashHex name is slightly misleading

common.Hash{}.Hex() returns "0x0000...0000" — this is the zero hash hex string. The name is clear, but the variable is used as parentBeaconBlockRoot which the Engine API expects to be a 32-byte hash. Since it's just a string field here, this is fine, but a comment explaining why we use zero would help readers of the Engine API spec.


Positive Observations

  • prevBlockInfo cache in EngineClient is clean and well-structured. The lock discipline is correct (acquire before read, release before use).
  • PayloadTransactions.MarshalJSON custom encoder is a nice optimization that avoids the intermediate []string allocation path. The size pre-calculation is correct.
  • cachedChainID is a simple, correct micro-optimization.
  • testing.TB refactor across test helpers is the right direction for supporting both *testing.T and *testing.B.
  • updateForkchoiceState design is sound; the concern is the missing RPC, not the state tracking logic itself.
  • The benchmark infrastructure (in-process OTLP collector) is well-designed for measuring internal span latencies.

Suggested Priority for Fixes

  1. P2P out-of-order broadcast — could cause peer sync failures in production
  2. saveExecMeta context cancellation — silent data loss on every block
  3. signerInfoCached race — run go test -race to confirm
  4. reconcileExecutionAtHeight removal — needs a clear plan for crash recovery
  5. savePendingBlock skip — confirm with julienrbrt whether this changes crash-recovery semantics

@gemini-code-assist
Contributor

Summary of Changes

Hello @alpe, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly optimizes the block production throughput for sequencers by systematically identifying and removing bottlenecks related to inter-process communication (RPCs), disk I/O, and blocking CPU-intensive tasks. The changes aim to streamline the block creation and propagation process, leading to a more efficient and responsive system. A new benchmark was also added to measure and analyze these performance improvements, ensuring the changes are quantifiable and maintainable.

Highlights

  • RPC and Store I/O Reduction: Eliminated several redundant RPC calls and synchronous store I/O operations on the sequencer hot path by introducing caching mechanisms for previous block information and last block data, and by making saveExecMeta asynchronous.
  • CPU and Blocking Work Optimization: Reduced CPU overhead and blocking operations by skipping self-validation for self-produced blocks, removing unused SHA-256 hash computations, caching immutable signer information, and converting P2P block broadcasts to a fire-and-forget asynchronous model.
  • EVM Execution Enhancements: Improved EVM execution efficiency by caching previous block information to avoid eth_getBlockByNumber RPCs, deferring ForkchoiceUpdated RPCs, and pre-computing static values for common EVM attributes.
  • Benchmarking and Tracing: Introduced a new end-to-end benchmark for EVM contract roundtrip latency, integrated with OpenTelemetry tracing to provide detailed performance breakdowns of ev-node components.
  • Test Infrastructure Refinement: Updated various e2e test helper functions to use testing.TB interface, allowing them to be used in both *testing.T (tests) and *testing.B (benchmarks), improving test utility and flexibility.


Changelog
  • block/internal/executing/executor.go
    • Added lastBlockCacheEntry struct to cache previous block header hash, data hash, and signature.
    • Introduced lastBlockMu, lastBlockInfo, cachedPubKey, cachedValidatorHash, and signerInfoCached fields to the Executor struct for caching.
    • Modified ProduceBlock to conditionally save pending blocks only for raft nodes, reducing store write overhead for non-raft nodes.
    • Removed the ValidateBlock call for self-produced blocks in ProduceBlock to eliminate redundant self-validation.
    • Updated ProduceBlock to cache the newly produced block's information for the next CreateBlock call and made P2P broadcasts asynchronous.
    • Refactored CreateBlock to utilize the lastBlockInfo cache for retrieving previous block data and signature, avoiding store reads.
    • Implemented caching for pubKey and validatorHash in CreateBlock to prevent re-computation of immutable signer information.
  • execution/evm/execution.go
    • Removed the crypto/sha256 import as its usage was eliminated.
    • Introduced pre-computed constants (zeroHashHex, emptyWithdrawals, emptyBlobHashes, emptyExecReqs) to reduce per-block allocations.
    • Added prevBlockInfo struct and prevBlock field to EngineClient to cache previous block details.
    • Modified ExecuteTxs to leverage the prevBlock cache for previous block information, eliminating an eth_getBlockByNumber RPC.
    • Removed the reconcileExecutionAtHeight call from ExecuteTxs as it was always failing for blocks not yet existing.
    • Updated ExecuteTxs to use pre-computed constants for withdrawals and parentBeaconBlockRoot in payload attributes.
    • Changed saveExecMeta calls in ExecuteTxs and processPayload to be asynchronous, making them non-blocking.
    • Added updateForkchoiceState to update in-memory forkchoice state without an immediate engine RPC call.
    • Modified processPayload to use updateForkchoiceState and cache prevBlock information, deferring ForkchoiceUpdated RPCs.
    • Removed SHA-256 hash computation from saveExecMeta as it was unused for decision-making.
  • execution/evm/test/test_helpers.go
    • Updated getTestScopedDockerSetup, SetupTestRethNode, and waitForRethContainer functions to accept testing.TB interface.
    • Added zap logger integration to SetupTestRethNode for improved test logging.
  • execution/evm/test_helpers.go
    • Updated GetRandomTransaction to accept testing.TB interface.
  • test/e2e/evm_contract_bench_test.go
    • Added a new benchmark file evm_contract_bench_test.go to measure EVM contract roundtrip latency.
    • Implemented an in-process OTLP/HTTP collector to gather OpenTelemetry trace spans during benchmarks.
    • Included logic to print a detailed trace breakdown report, aggregating span timings by operation.
  • test/e2e/evm_contract_e2e_test.go
    • Modified setupTestSequencer and deployContract to accept testing.TB interface.
    • Added extraArgs parameter to setupTestSequencer to allow passing additional arguments to the sequencer node setup.
  • test/e2e/evm_test_common.go
    • Updated mustGetAvailablePort, createPassphraseFile, createJWTSecretFile, getNodeP2PAddress, setupSequencerNode, setupSequencerNodeLazy, setupFullNode, submitTransactionAndGetBlockNumber, setupCommonEVMTest, checkBlockInfoAt, setupSequencerOnlyTest, restartDAAndSequencer, restartDAAndSequencerLazy, restartSequencerNode, and verifyNoBlockProduction functions to accept testing.TB interface.
    • Added extraArgs parameter to setupSequencerNode and setupSequencerOnlyTest for more flexible test configurations.
  • test/e2e/go.mod
    • Added go.opentelemetry.io/otel and go.opentelemetry.io/otel/sdk as direct dependencies.
  • test/e2e/sut_helper.go
    • Updated SystemUnderTest struct and its methods (NewSystemUnderTest, AwaitNodeUp, AwaitNodeLive, AwaitNBlocks, MustCopyFile, NodeID) to accept testing.TB interface.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/benchmark.yml



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant performance optimizations by implementing caching mechanisms and asynchronous operations in the block production and EVM execution paths.

In block/internal/executing/executor.go, a lastBlockCacheEntry and associated fields were added to cache the previous block's header, data, and signature, reducing redundant store reads during CreateBlock. Signer information is also now cached. The ProduceBlock function was refactored to conditionally save/delete pending blocks only for Raft-enabled setups and to skip self-validation for sequencer-produced blocks. P2P block broadcasting was made asynchronous to avoid blocking block production.

In execution/evm/execution.go, pre-computed constants were introduced to reduce allocations, and a prevBlockInfo cache was added to EngineClient to avoid eth_getBlockByNumber RPC calls during ExecuteTxs. The saveExecMeta calls were made asynchronous, and the TxHash computation was removed from saveExecMeta. Additionally, updateForkchoiceState was added to update in-memory state without an immediate RPC call.

Test helper functions across execution/evm/test/test_helpers.go, execution/evm/test_helpers.go, test/e2e/evm_contract_e2e_test.go, and test/e2e/evm_test_common.go were updated to use testing.TB instead of *testing.T to support benchmarking, and a new benchmark file test/e2e/evm_contract_bench_test.go was added to measure contract roundtrip latency with OpenTelemetry tracing.

Review comments highlighted that the asynchronous saveExecMeta calls should use context.Background() instead of the parent's context to ensure they complete even if the parent's context is canceled, as they are intended to be 'fire-and-forget' operations.

// This allows resuming the payload build if we crash before completing
c.saveExecMeta(ctx, blockHeight, timestamp.Unix(), newPayloadID[:], nil, nil, txs, ExecStageStarted)
// Save ExecMeta with payloadID for crash recovery (Stage="started") — async.
go c.saveExecMeta(ctx, blockHeight, timestamp.Unix(), newPayloadID[:], nil, nil, txs, ExecStageStarted)

medium

Calling saveExecMeta in a new goroutine with the parent's context can be problematic. If the context passed to ExecuteTxs is request-scoped, it might be canceled after ExecuteTxs returns, causing saveExecMeta to fail unexpectedly. For a true 'fire-and-forget' operation that should outlive the parent function, it's safer to use a new, detached context like context.Background().

Suggested change
go c.saveExecMeta(ctx, blockHeight, timestamp.Unix(), newPayloadID[:], nil, nil, txs, ExecStageStarted)
go c.saveExecMeta(context.Background(), blockHeight, timestamp.Unix(), newPayloadID[:], nil, nil, txs, ExecStageStarted)

// 4. Save ExecMeta (Promoted)
c.saveExecMeta(ctx, blockHeight, blockTimestamp, payloadID[:], blockHash[:], payloadResult.ExecutionPayload.StateRoot.Bytes(), txs, ExecStagePromoted)
// 5. Save ExecMeta (Promoted) — async, best-effort.
go c.saveExecMeta(ctx, blockHeight, blockTimestamp, payloadID[:], blockHash[:], payloadResult.ExecutionPayload.StateRoot.Bytes(), txs, ExecStagePromoted)

medium

Similar to the other async call to saveExecMeta, using the parent's context in this goroutine can lead to issues if the context is short-lived. For a 'fire-and-forget' operation, it's safer to use a background context to ensure it completes even if the parent function's context is canceled.

Suggested change
go c.saveExecMeta(ctx, blockHeight, blockTimestamp, payloadID[:], blockHash[:], payloadResult.ExecutionPayload.StateRoot.Bytes(), txs, ExecStagePromoted)
go c.saveExecMeta(context.Background(), blockHeight, blockTimestamp, payloadID[:], blockHash[:], payloadResult.ExecutionPayload.StateRoot.Bytes(), txs, ExecStagePromoted)

@codecov

codecov bot commented Feb 17, 2026

Codecov Report

❌ Patch coverage is 71.92982% with 16 lines in your changes missing coverage. Please review.
✅ Project coverage is 61.19%. Comparing base (d4394f6) to head (f247a2c).
⚠️ Report is 1 commit behind head on main.

Files with missing lines                Patch %   Lines
block/internal/executing/executor.go    71.92%    10 Missing and 6 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3079      +/-   ##
==========================================
+ Coverage   61.11%   61.19%   +0.08%     
==========================================
  Files         113      113              
  Lines       11444    11468      +24     
==========================================
+ Hits         6994     7018      +24     
- Misses       3661     3666       +5     
+ Partials      789      784       -5     
Flag       Coverage Δ
combined   61.19% <71.92%> (+0.08%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.


}
if err := e.savePendingBlock(ctx, header, data); err != nil {
return fmt.Errorf("failed to save block data: %w", err)
// Only persist pending block for raft crash recovery — skip for non-raft
Member


This is wrong, we do want to persist it every time.

e.lastBlockMu.Unlock()

// P2P broadcast is fire-and-forget — doesn't block next block production.
go func() {
Member

@julienrbrt julienrbrt Feb 18, 2026


This one makes sense given we just log the error anyway.

Member


EDIT: maybe it isn't preferred as we could broadcast out of order, and this will fail go-header verification.


// lastBlockCacheEntry caches the last produced block's header hash, data hash,
// and signature to avoid store reads in CreateBlock.
type lastBlockCacheEntry struct {
Member


This should be removed, we are caching at store level, so this isn't an issue (see cached_store.go)
