
experiment: busybox + docker layer caching + stable ldflags #19830

Draft

davdhacs wants to merge 33 commits into ROX-33958/resue-components from davdhacs/experiment-busybox-cache

Conversation


@davdhacs davdhacs commented Apr 4, 2026

Description

Experiment — combine Tomecz's busybox consolidation (#19819) with Docker layer caching optimizations to measure the minimum possible build+push time.

Optimizations combined

| Optimization | Source | Impact |
| --- | --- | --- |
| BusyBox binary consolidation | #19819 | 8 binaries → 1 (1.93GB → 1.22GB) |
| Docker driver (no buildkit boot) | #19806 | Skips ~40s container startup |
| COPY --link (independent layers) | #19806 | Unchanged layers skip push |
| Stable build ldflags | #19829 | Byte-identical binary if source unchanged |
| Inline cache (image as cache source) | #19806 | Previous build serves as cache |
| push:true + provenance:false | #19806 | Direct registry push |
| base-with-packages stage | #19806 | Cacheable package installs |

Expected results

With one binary + stable ldflags + COPY --link:

  • If Go source unchanged: binary identical → layer cached → zero push
  • If Go source changed: only one binary layer (~300MB) to rebuild + push

Target: build and push the main image in under 30s on a warm cache.
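The combination above can be sketched as a minimal Dockerfile shape (stage ordering matches the PR; the base image, paths, and package list here are illustrative placeholders, not the actual image/rhel/Dockerfile):

```dockerfile
# Stage 1: OS packages only. This rarely changes, so its layers cache well.
FROM registry.example.com/base:tag AS base-with-packages
RUN microdnf upgrade -y && microdnf install -y util-linux && microdnf clean all

# Stage 2: binaries last; each COPY --link is an independent layer that
# does not invalidate its siblings.
FROM base-with-packages
COPY --link ui/build /ui/
COPY --link static-data /stackrox/static-data/
# With busybox consolidation there is a single Go binary; with stable
# ldflags this layer is byte-identical (and skips the push) whenever
# Go source is unchanged.
COPY --link bin/central /stackrox/central
```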

🤖 Generated with Claude Code

davdhacs and others added 27 commits April 2, 2026 16:47
Enable GHA buildx cache for main, roxctl, and operator image builds.
Docker layers (base image pulls, package installs) are cached across
CI runs, avoiding redundant microdnf upgrade/install on every build.

Cache is opt-in via DOCKER_BUILDX_CACHE env var to avoid affecting
local builds. Scoped per image and architecture to prevent collisions.

Expected savings: ~60-80s off the 115s "Build main images" step on
warm runs (package install layers cached).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move OS package installation (microdnf upgrade, postgres RPMs,
util-linux) into a separate 'base-with-packages' stage. The final
stage uses FROM base-with-packages and only COPYs binaries.

Before: COPY binaries → RUN microdnf (rebuilds packages every commit)
After:  base-with-packages stage (cached) → COPY binaries (fast)

With Docker buildx GHA cache, the package install layer (~60s) is
cached across CI runs. Only the binary COPY steps rebuild per commit.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The RUN step calls save-dir-contents which is in static-bin/.
Copy just the helper scripts needed for package installation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace make docker-build-main-image with docker/build-push-action@v6
which handles GHA buildx cache natively. The action manages cache
tokens and builder configuration that our docker-build.sh wrapper
was missing.

The base-with-packages Dockerfile stage (package installs) should
now cache across CI runs, skipping microdnf upgrade/install when
only binaries change.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Each COPY --link creates an independent overlay layer. Changing one
binary (e.g. bin/central) doesn't invalidate other COPY layers
(ui, static-data, etc). Combined with GHA buildx cache, this means
only the changed binaries need to be re-copied on warm builds.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use push: true instead of load: true on docker/build-push-action.
This pushes layers directly to the registry from buildkit, avoiding
the slow --load export to local docker (~90s overhead even with all
layers cached).

The main image is now built and pushed in one step. roxctl and
central-db still use the separate push step.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The container registry env vars removed by Tomecz's PR are no longer
available, so use secrets.* directly in docker/login-action.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The push-main-manifests step expects per-arch tags (e.g. main:tag-amd64)
to create multi-arch manifest lists.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docker/login-action with registry: quay.io/org doesn't work for
quay.io. Use the existing registry_rw_login helper which handles
quay.io authentication correctly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…r stackrox-io

docker/login-action can only authenticate to one quay.io org at a time.
Push main to rhacs-eng via build-push-action (fast, cached layers).
Use existing push_main_image_set for stackrox-io and other images
(handles multi-org login correctly by re-authenticating per org).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
build-push-action pushes directly to registry without loading locally.
The existing push_main_image_set expects local images. Instead:
- Push roxctl/central-db from local docker (built by make)
- Copy main from rhacs-eng to stackrox-io via skopeo

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
push:true creates manifest lists instead of plain images, breaking
the multi-arch manifest creation step. Revert to load:true which
loads into local docker and uses the existing push_main_image_set
pipeline unchanged.

GHA layer cache still works with load:true (17 cached layers confirmed).
Build step: 105s warm vs 110s baseline (5% faster). The --load export
overhead limits the savings but the Dockerfile restructuring and
COPY --link provide the foundation for future improvements.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use push:true + provenance:false to push main image directly from
buildx to the registry. provenance:false produces a plain image
manifest (not a manifest list), compatible with the downstream
push-main-manifests job that creates multi-arch manifest lists.

Login to both quay.io orgs before the build step so buildx can push
to both registries. roxctl and central-db still use docker push
(built locally by make).

Expected: Build+push main image ~55s (vs 105s with load:true).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…g copy

buildx's docker-container driver doesn't share host docker credentials.
Use docker/login-action (which injects creds into the buildx builder)
for rhacs-eng push. Copy main to stackrox-io via skopeo (lightweight,
blobs shared on quay.io).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docker login can only hold one quay.io credential at a time. Use
skopeo --src-creds and --dest-creds to authenticate to both orgs
simultaneously for the rhacs-eng → stackrox-io copy.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use type=registry cache backed by GHCR instead of type=gha.
Benefits:
- No 10GB size limit (GHA cache is shared across all workflows)
- Buildx pulls only needed layers (content-addressable), not full blob
- Faster restore for multi-stage builds with many cached layers

Cache images stored at ghcr.io/stackrox/stackrox/cache/main-{arch}.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Switch from docker-container to docker driver for buildx. The docker
driver uses Docker Engine's built-in buildkit — no separate container
to boot.

No cache for this run (baseline measurement). Will add inline cache
once driver change is validated.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Docker driver (68s cold, no cache) is already faster than
docker-container + GHA cache (87s). Add inline cache on top:
- cache-to: type=inline embeds cache metadata in the pushed image
- cache-from pulls the previous build (cache-{arch} tag) as source
- With COPY --link, only changed layers need rebuilding

Push a stable cache-{arch} tag on every build so the next build
has a cache reference.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Set BUILD_TAG=0.0.0 and SHORTCOMMIT=0000000 for Go binary compilation
(pre-build-go-binaries and pre-build-cli). This produces byte-identical
binaries across commits that don't change Go source code.

Combined with COPY --link, unchanged binaries = cached Docker layers
= zero push on warm builds. The image tag (per-commit) still identifies
the build; only the embedded binary version is stable.

This is the key to sub-30s build+push: if Go source didn't change,
the binary didn't change, the layer didn't change, nothing to push.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
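The stable-ldflags idea can be sketched in a few lines of shell (the variable names and `-X` targets are illustrative, not the repo's actual Makefile stamps):

```shell
# Pin the version metadata baked into the binary so that two builds of
# unchanged Go source produce byte-identical output. The image tag stays
# per-commit; only these embedded stamps are fixed.
BUILD_TAG="0.0.0"
SHORTCOMMIT="0000000"
GO_LDFLAGS="-X main.BuildTag=${BUILD_TAG} -X main.ShortCommit=${SHORTCOMMIT}"
echo "$GO_LDFLAGS"
```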
Merge Tomecz's busybox binary consolidation (1.93GB → 1.22GB) with:
- Docker driver (no buildkit container boot)
- COPY --link (independent layers)
- Inline cache (previous image as cache source)
- Stable build ldflags (byte-identical binaries)
- push:true + provenance:false (direct registry push)
- base-with-packages stage (cacheable package installs)

With one binary instead of 8, the COPY --link layer for the central
binary is the only thing that changes between builds. Everything else
is cached.

Target: build+push under 30s for unchanged Go source.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

openshift-ci bot commented Apr 4, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 6 issues, and left some high level feedback:

  • Hard-coding BUILD_TAG and SHORTCOMMIT to 0.0.0 / 0000000 in the workflows changes all image tagging semantics for every run; consider gating this behind an explicit experimental flag or matrix entry so normal CI still exercises real versioning metadata.
  • The new workflow file operator/.github/workflows/test-chart.yml will never execute because GitHub only loads workflows from .github/workflows; if you intend this to run in CI, move it to the top-level workflows directory or otherwise mark it as an example/test-only file.
  • Several experimental/analysis artifacts (e.g. selfhosted.sh, .claude/*, CSV result files) are now in the main tree; consider moving these under a clearly ignored/experimental path or excluding them from the main repo to avoid confusion and accidental usage in non-experimental environments.
## Individual Comments

### Comment 1
<location path=".github/workflows/build.yaml" line_range="605-610" />
<code_context>
-
-      - name: Push images
-        # Skip for external contributions.
+      - name: Push remaining images
+        # Main image already pushed to rhacs-eng by build-push-action.
+        # Push roxctl/central-db and copy main to stackrox-io.
</code_context>
<issue_to_address>
**suggestion (performance):** Avoid repeated registry logins inside the image-push loop

Since the same credentials and registries are used for all images, `registry_rw_login` only needs to run once per registry rather than on every iteration of the `for image in roxctl central-db` loop. Moving the login calls outside the loop would avoid redundant auth requests and slightly speed up and simplify the step.

Suggested implementation:

```
      - name: Push remaining images
        # Main image already pushed to rhacs-eng by build-push-action.
        # Push roxctl/central-db and copy main to stackrox-io.
        if: |
          github.event_name == 'push' || !github.event.pull_request.head.repo.fork
        env:
          QUAY_STACKROX_IO_RW_PASSWORD: ${{ secrets.QUAY_STACKROX_IO_RW_PASSWORD }}
        run: |
            source ./scripts/ci/lib.sh

            # Perform registry logins once before pushing any images to avoid
            # repeated logins inside the image-push loop.
            registry_rw_login "quay.io/rhacs-eng" "${QUAY_RHACS_ENG_RW_PASSWORD}"
            registry_rw_login "quay.io/stackrox-io" "${QUAY_STACKROX_IO_RW_PASSWORD}"

```

To fully implement the suggestion (avoid repeated logins inside the image-push loop), you should also:
1. In the remainder of this `run:` script block (not shown in the snippet), remove any `registry_rw_login` calls that are currently executed inside the `for image in roxctl central-db` loop and rely on the logins done at the top instead.
2. Ensure `QUAY_RHACS_ENG_RW_PASSWORD` is defined in the `env:` section for this step (or inherited from a higher scope) if it is not already available, e.g.:
   ```yaml
   env:
     QUAY_RHACS_ENG_RW_PASSWORD: ${{ secrets.QUAY_RHACS_ENG_RW_PASSWORD }}
     QUAY_STACKROX_IO_RW_PASSWORD: ${{ secrets.QUAY_STACKROX_IO_RW_PASSWORD }}
   ```
3. Verify that the push/copy commands in the existing loop do not implicitly re-login, so that the new single-login behavior is effective.
</issue_to_address>

### Comment 2
<location path="image/rhel/Dockerfile" line_range="31-35" />
<code_context>
+# independently of binary changes. Package installs rarely change, but
+# binaries change every commit — this ordering avoids rebuilding packages
+# when only binaries change.
+FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG} AS base-with-packages

 COPY signatures/RPM-GPG-KEY-CentOS-Official /
-COPY static-bin /stackrox/
-
+COPY static-bin/save-dir-contents /stackrox/save-dir-contents
+COPY static-bin/restore-all-dir-contents /stackrox/restore-all-dir-contents
+ENV PATH="/stackrox:$PATH"
 COPY --from=downloads /output/rpms/ /tmp/
</code_context>
<issue_to_address>
**suggestion:** The helper binaries copied into the base-with-packages stage appear to be unused

In this stage you copy `save-dir-contents` and `restore-all-dir-contents` into `/stackrox` and update `PATH`, but nothing in `base-with-packages` appears to invoke them. The final stage’s `COPY --link static-bin /stackrox/` will bring them in anyway. If they’re not needed during package installation, consider removing this `COPY` and `PATH` change here to avoid redundant copies and keep the Dockerfile simpler.

```suggestion
COPY signatures/RPM-GPG-KEY-CentOS-Official /
COPY --from=downloads /output/rpms/ /tmp/
```
</issue_to_address>

### Comment 3
<location path="scripts/ci/shard-packages.sh" line_range="15-16" />
<code_context>
+    exit 1
+fi
+
+SHARD_INDEX=$1
+TOTAL_SHARDS=$2
+shift 2
+
</code_context>
<issue_to_address>
**suggestion:** Validate shard index and total shard count to avoid surprising empty shards

Currently we only check the argument count, but not that `SHARD_INDEX` is in `[0, TOTAL_SHARDS)` or that `TOTAL_SHARDS` is > 0. If `SHARD_INDEX >= TOTAL_SHARDS` or `TOTAL_SHARDS <= 0`, this shard will silently emit no packages. Please add numeric validation and a range check early, and exit with a clear error message on invalid inputs.
</issue_to_address>
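A minimal sketch of the requested validation, assuming bash (the function name is hypothetical):

```shell
#!/usr/bin/env bash
# Reject non-numeric or out-of-range shard arguments up front instead of
# silently emitting an empty shard.
validate_shard_args() {
  local idx="$1" total="$2"
  if ! [[ "$idx" =~ ^[0-9]+$ && "$total" =~ ^[0-9]+$ ]]; then
    echo "error: SHARD_INDEX and TOTAL_SHARDS must be non-negative integers" >&2
    return 1
  fi
  if (( total <= 0 || idx >= total )); then
    echo "error: need 0 <= SHARD_INDEX < TOTAL_SHARDS, got ${idx}/${total}" >&2
    return 1
  fi
}

validate_shard_args 1 3 && echo "shard 1/3 ok"
```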

### Comment 4
<location path="selfhosted.sh" line_range="3-12" />
<code_context>
+#!/usr/bin/env bash
+
+RUNNER_TOKEN=$(gh api repos/davdhacs/stackrox-reduced/actions/runners/registration-token -X POST --jq '.token')
+echo $RUNNER_TOKEN
+
+docker run -it --rm \
+  --platform linux/amd64 \
+  -e DISABLE_AUTO_UPDATE=true \
+    -e RUNNER_NAME=colima-runner \
+    -e RUNNER_WORKDIR=/tmp/runner-work \
+    -e RUNNER_REPOSITORY_URL=https://github.com/davdhacs/stackrox-reduced \
+    -e REPO_URL=https://github.com/davdhacs/stackrox-reduced \
+    -e RUNNER_TOKEN="$RUNNER_TOKEN" \
+    -e RUNNER_REUSE=true \
+    -e LABELS=self-hosted \
+    myoung34/github-runner:latest
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Self-hosted runner helper script mixes repo-specific details and lacks basic failure handling

This script is hardcoded to `davdhacs/stackrox-reduced` and assumes `gh` is present, configured, and always succeeds. If `gh api` fails or returns an empty token, the container still runs with an empty `RUNNER_TOKEN`, making failures hard to diagnose. Either parameterize the repo URL and bail out when `RUNNER_TOKEN` is empty, or move this into a clearly personal/local tooling path so others don’t depend on an environment-specific script.
</issue_to_address>
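A hedged sketch of the suggested hardening (the `RUNNER_REPO` parameter and the function name are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Parameterize the repo instead of hardcoding davdhacs/stackrox-reduced.
REPO="${RUNNER_REPO:-owner/repo}"

# Bail out before launching the runner container if the token is empty,
# e.g. because 'gh api' failed or 'gh' is not logged in.
require_token() {
  if [ -z "$1" ]; then
    echo "error: empty runner registration token for ${REPO}" >&2
    return 1
  fi
  echo "runner token for ${REPO} looks non-empty"
}

require_token "example-token"
```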

### Comment 5
<location path=".claude/mtime-bench/main_test.go" line_range="63-65" />
<code_context>
+			files[i] = f
+		}
+
+		b.Run(ts.name, func(b *testing.B) {
+			b.ResetTimer()
+			for b.Loop() {
+				h := sha256.New()
+				for _, f := range files {
</code_context>
<issue_to_address>
**issue (bug_risk):** Benchmarks use `b.Loop()` which does not exist on `testing.B` and will not compile

These benchmarks use `for b.Loop()` but `testing.B` does not define `Loop`, so this file will not compile and the benchmarks will never run. Please switch to the standard `for i := 0; i < b.N; i++ { ... }` loop (or an equivalent helper) in each benchmark so they run correctly under `go test -bench`.
</issue_to_address>

### Comment 6
<location path=".claude/mtime-bench/main_test.go" line_range="55-59" />
<code_context>
+
+	for _, ts := range timestamps {
+		files := make([]string, 50)
+		for i := range files {
+			f := filepath.Join(tmpDir, fmt.Sprintf("%s_%03d.go", ts.name, i))
+			content := fmt.Sprintf("package foo\nvar x%d = %d\n", i, i)
+			os.WriteFile(f, []byte(content), 0644)
+			os.Chtimes(f, ts.goTime, ts.goTime)
+			files[i] = f
+		}
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Benchmark setup ignores I/O errors, which can make results misleading or hide failures

Within the benchmark setup (e.g., `BenchmarkStatAndHash`, `BenchmarkStatOnly`), the `os.WriteFile` and `os.Chtimes` calls ignore returned errors. If the temp dir isn’t usable (e.g., read-only FS, missing directory), the benchmarks will run with an invalid setup and produce misleading results. Please check these errors and fail the benchmark via `b.Fatalf` during setup when the environment isn’t as expected.

Suggested implementation:

```golang
		for i := range files {
			f := filepath.Join(tmpDir, fmt.Sprintf("%s_%03d.go", ts.name, i))
			content := fmt.Sprintf("package foo\nvar x%d = %d\n", i, i)
			if err := os.WriteFile(f, []byte(content), 0o644); err != nil {
				b.Fatalf("BenchmarkStatAndHash: failed to write temp file %q: %v", f, err)
			}
			if err := os.Chtimes(f, ts.goTime, ts.goTime); err != nil {
				b.Fatalf("BenchmarkStatAndHash: failed to change times for temp file %q: %v", f, err)
			}
			files[i] = f
		}

```

Apply the same pattern to any other benchmarks in this file that create temp files (e.g., `BenchmarkStatOnly`), wrapping their `os.WriteFile` and `os.Chtimes` calls in error checks and failing the benchmark with `b.Fatalf` on error. This keeps all benchmark setups consistent and ensures they abort on unusable environments.
</issue_to_address>



Busybox consolidation produces one central binary. Individual binaries
(migrator, compliance, etc.) are symlinks created in the Dockerfile.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

github-actions bot commented Apr 4, 2026

🚀 Build Images Ready

Images are ready for commit f2ef1a0. To use with deploy scripts:

export MAIN_IMAGE_TAG=4.11.x-596-gf2ef1a086a

davdhacs and others added 2 commits April 4, 2026 12:56
Eliminate artifact overhead by compiling Go, building UI, and
assembling the Docker image all in one job. No pre-build dependencies,
no artifact upload/download.

Expected total: Go compile (~30s warm) + Docker build+push (~30s warm)
= ~60s total for the entire build-and-push-main job.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
davdhacs and others added 3 commits April 4, 2026 13:19
Enable cache saves on all events (not just master pushes) to test
truly warm Go compilation in the combined job approach.

With warm GOCACHE + busybox, Go compile should drop from 467s to ~30s.
Combined with 27s docker build+push = ~1m total job time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
