
ROX-30578: Configure process baseline auto locking via helm#16462

Merged
JoukoVirtanen merged 30 commits into master from jv-ROX-30578-configure-process-baseline-auto-locking-via-helm
Oct 1, 2025

Conversation

@JoukoVirtanen (Contributor) commented Aug 19, 2025

Description

Previously, process baseline auto-locking could only be controlled via the cluster API. This PR makes it manageable via Helm as well, and ensures that when it is managed by Helm it can no longer be changed via the API.

It also allows the internal scripts to configure Helm to enable auto-locking of process baselines when the environment variable SECURED_CLUSTER_AUTO_LOCK_PROCESS_BASELINES is set to true.

This PR is built on top of #16669

Add process baseline autolocking to cluster config (#16427)

User-facing documentation

Testing and quality

  • the change is production ready: the change is GA, or otherwise the functionality is gated by a feature flag
  • CI results are inspected

Automated testing

  • added unit tests
  • added e2e tests
  • added regression tests
  • added compatibility tests
  • modified existing tests

How I validated my change

Set the following environment variables

export ROX_BASELINE_GENERATION_DURATION=5m
export SENSOR_HELM_DEPLOY=true
export ROX_AUTO_LOCK_PROCESS_BASELINES=true
export SECURED_CLUSTER_AUTO_LOCK_PROCESS_BASELINES=true

Deployed using deploy/deploy-local.sh.

Checked the cluster config via API.

#!/usr/bin/env bash
set -eou pipefail

ROX_ENDPOINT=${1:-https://localhost:8000}

start_time=$(date +%s)

json_clusters="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/clusters" -k -H "Authorization: Bearer $ROX_API_TOKEN")"

echo "$json_clusters" | jq
{
  "clusters": [
    {
      "id": "81d6704e-69d9-4626-a3fe-6eafc1a5c9e1",
      "name": "remote",
      "type": "KUBERNETES_CLUSTER",
      "labels": {},
      "mainImage": "quay.io/rhacs-eng/main",
      "collectorImage": "quay.io/rhacs-eng/collector",
      "centralApiEndpoint": "central.stackrox:443",
      "runtimeSupport": true,
      "collectionMethod": "CORE_BPF",
      "admissionController": true,
      "admissionControllerUpdates": true,
      "admissionControllerEvents": true,

...

      "dynamicConfig": {
        "admissionControllerConfig": {
          "enabled": true,
          "timeoutSeconds": 10,
          "scanInline": true,
          "disableBypass": false,
          "enforceOnUpdates": true
        },
        "registryOverride": "",
        "disableAuditLogs": true,
        "autoLockProcessBaselinesConfig": {
          "enabled": true
        }
      },

...

      "slimCollector": false,
      "helmConfig": {
        "dynamicConfig": {
          "admissionControllerConfig": {
            "enabled": true,
            "timeoutSeconds": 10,
            "scanInline": true,
            "disableBypass": false,
            "enforceOnUpdates": true
          },
          "registryOverride": "",
          "disableAuditLogs": true,
          "autoLockProcessBaselinesConfig": {
            "enabled": true
          }
        },

...
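The flag can also be pulled out of the `/v1/clusters` response directly rather than eyeballing the full JSON. A minimal sketch follows; the jq path mirrors the response shown above, and the self-contained demo uses a trimmed sample payload so it does not need a live Central.

```shell
#!/usr/bin/env bash
# Sketch only: extract the auto-lock flag from the /v1/clusters response.
# With a live Central and jq, the path matching the output above would be:
#   echo "$json_clusters" | jq '.clusters[0].dynamicConfig.autoLockProcessBaselinesConfig.enabled'
# jq-free demo on an abbreviated sample payload:
sample='{"clusters":[{"name":"remote","dynamicConfig":{"autoLockProcessBaselinesConfig":{"enabled":true}}}]}'
enabled=$(printf '%s' "$sample" \
  | grep -o '"autoLockProcessBaselinesConfig":{"enabled":[a-z]*' \
  | sed 's/.*://')
echo "auto-lock enabled: $enabled"
```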

Created a deployment, entered it, and ran a command in it.

kubectl run ubuntu-pod --image=ubuntu --restart=Never --command -- sleep infinity
kubectl exec ubuntu-pod -it -- /bin/bash
cat /proc/1/net/tcp

Waited five minutes and checked "Risk"


The process baseline is locked. Other deployments were also locked.

Ran another command in the pod

tac /proc/1/net/tcp

There is an alert for the process baseline violation.

Upgrade test

Used the following scripts to deploy ACS using helm on an openshift-4 cluster.

$ cat make-helm-charts.sh 
#!/usr/bin/env bash
set -eou pipefail

cd ../../

tag="$(make tag)"

cd -

repo=${1:-stackrox-io}
#repo=${1:-jvirtane}

docker run --rm -v "$(pwd):/usr/src/stackrox" quay.io/$repo/roxctl:"$tag" helm output central-services --image-defaults opensource --output-dir /usr/src/stackrox/stackrox-central-services-chart-"$tag"

docker run --rm -v "$(pwd):/usr/src/stackrox" quay.io/$repo/roxctl:"$tag" helm output secured-cluster-services --image-defaults opensource --output-dir /usr/src/stackrox/stackrox-secured-cluster-chart-"$tag"
$ cat deploy.sh 
#!/usr/bin/env bash
set -eou pipefail

ARTIFACT_DIR=$1
central1=$2
secured1=$3
central_settings_file=$4
secured_settings_file=$5

./start-central.sh $ARTIFACT_DIR $central1 $central_settings_file
sleep 60
./get-bundle.sh $ARTIFACT_DIR
sleep 10
./start-secured-cluster.sh $ARTIFACT_DIR $secured1 $secured_settings_file
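The fixed `sleep 60` / `sleep 10` pauses in deploy.sh could instead poll for readiness. A sketch under stated assumptions: `wait_for` is a hypothetical helper, not part of the scripts above, and the commented `oc` probe assumes cluster access.

```shell
#!/usr/bin/env bash
# Sketch: replace fixed sleeps with a readiness poll.
# wait_for is a hypothetical helper; tunables default to 60 tries, 5s apart.
wait_for() {
  local tries=0 max=${WAIT_MAX_TRIES:-60}
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max" ]; then
      echo "timed out waiting for: $*" >&2
      return 1
    fi
    sleep "${WAIT_INTERVAL:-5}"
  done
}

# Example usage (assumes oc access): wait until central reports available replicas.
# wait_for oc -n stackrox get deploy/central -o jsonpath='{.status.availableReplicas}'
```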
$ cat start-central.sh 
set -eoux pipefail

echo "Starting central and scanner related pods"

artifacts_dir=$1
helm_charts=$2
settings_file=$3

export KUBECONFIG="$artifacts_dir"/kubeconfig
admin_password="$(cat "$artifacts_dir"/kubeadmin-password)"

settings=(
    --namespace stackrox stackrox-central-services --create-namespace "$helm_charts"
    --set central.exposure.route.enabled=true
    --set central.adminPassword.value="$admin_password"
    --set central.persistence.none=true
)

while read -r line; do
    # Skip any empty lines in the settings file
    if [[ -n "$line" ]]; then
        settings+=( $line )
    fi
done < "$settings_file"

helm install "${settings[@]}"
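The `settings+=( $line )` loops above rely on unquoted word-splitting, which also triggers glob expansion if a value ever contains `*` or `?`. A safer variant splits each line with `read -ra` instead; `read_settings` is a hypothetical helper, not part of the scripts above.

```shell
#!/usr/bin/env bash
# Sketch: split settings-file lines into helm arguments without glob expansion.
# read_settings is a hypothetical helper; it fills the global "settings" array.
read_settings() {
  local file=$1 line
  settings=()
  while IFS= read -r line; do
    [ -z "$line" ] && continue          # skip empty lines, as the originals do
    read -ra words <<< "$line"          # whitespace-split, but no globbing
    settings+=( "${words[@]}" )
  done < "$file"
}

# Demo on a temporary settings file shaped like the ones used below.
tmp=$(mktemp)
printf -- '--set clusterName=perf-test\n\n--set exposeMonitoring=true\n' > "$tmp"
read_settings "$tmp"
rm -f "$tmp"
echo "${#settings[@]} arguments parsed"
```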
$ cat get-bundle.sh 
#!/usr/bin/env bash
set -eou pipefail

artifacts_dir=$1

echo "Grabbing bundle"

export KUBECONFIG="$artifacts_dir/kubeconfig"
#central_password=asdf
central_password="$(cat "$artifacts_dir"/kubeadmin-password)"

rm -f perf-bundle.yml

#url="$(kubectl -n stackrox get routes central -o json | jq -r '.spec.host')"
url="$(oc -n stackrox get routes central -o json | jq -r '.spec.host')"
roxctl -e https://"$url":443 \
    -p "$central_password" central init-bundles generate perf-test \
    --output perf-bundle.yml

The following files were used to specify the helm settings for central and secured cluster.

$ cat process_baselines_central_settings.txt
 --set customize.central.envVars.ROX_BASELINE_GENERATION_DURATION=3m
 --set customize.central.envVars.ROX_AUTOLOCK_PROCESS_BASELINES=true
$ cat process_baselines_settings.txt
 --set clusterName=perf-test
 --set enableOpenShiftMonitoring=true
 --set exposeMonitoring=true
 --set autoLockProcessBaselines.enabled=true

The helm charts were created

./make-helm-charts.sh 

The deploy script was run

./deploy.sh "$ARTIFACT_DIR" stackrox-central-services-chart-4.9.x-779-g4538de45b3 stackrox-secured-cluster-chart-4.9.x-779-g4538de45b3 process_baselines_central_settings.txt process_baselines_settings.txt

After more than three minutes, "Risk" was checked in the UI.


The process baselines are locked.

An upgrade was then done which disabled process baseline auto-locking.

The following script was run

$ cat upgrade-secured-cluster.sh 
#!/usr/bin/env bash
set -eou pipefail

secured_cluster_helm_chart=$1
settings_file=$2

settings=(
    --reuse-values -i --namespace stackrox stackrox-secured-cluster-services
    --values perf-bundle.yml
)


while read -r line; do
    # Skip any empty lines in the settings file
    if [[ -n "$line" ]]; then
        settings+=( $line )
    fi
done < "$settings_file"

settings+=( $secured_cluster_helm_chart )

helm upgrade "${settings[@]}"

The input file was

$ cat process_baselines_secured_settings_disabled.txt
 --set autoLockProcessBaselines.enabled=false

To do the upgrade the following command was run

$ ./upgrade-secured-cluster.sh stackrox-secured-cluster-chart-4.9.x-690-g0839a6c2ca process_baselines_secured_settings_disabled.txt

The state of the pods was checked

$ ks get pod
NAME                                  READY   STATUS    RESTARTS        AGE
admission-control-549cdbd46f-4nxts    1/1     Running   0               6m51s
admission-control-549cdbd46f-h6msl    1/1     Running   0               6m51s
admission-control-549cdbd46f-w5gh5    1/1     Running   0               6m51s
central-6c9c484dff-glqmx              1/1     Running   0               8m28s
central-db-58c674669d-pf5n7           1/1     Running   0               8m29s
collector-4b6rk                       3/3     Running   0               6m51s
collector-55w26                       3/3     Running   0               6m51s
collector-7bhhv                       3/3     Running   0               6m51s
collector-bwqkf                       3/3     Running   0               6m51s
collector-jw9vw                       3/3     Running   0               6m51s
collector-s79jg                       3/3     Running   0               6m51s
config-controller-5cd88b975b-hmcmj    1/1     Running   0               8m28s
scanner-7696bf5449-gvv4k              1/1     Running   0               8m28s
scanner-7696bf5449-hgh5j              1/1     Running   0               8m28s
scanner-db-c84b6f984-kj62z            1/1     Running   0               8m28s
scanner-v4-db-687f866645-6fg2v        1/1     Running   0               8m28s
scanner-v4-indexer-cb649859-hqdzs     1/1     Running   2 (8m17s ago)   8m27s
scanner-v4-indexer-cb649859-l4s6w     1/1     Running   2 (8m23s ago)   8m27s
scanner-v4-matcher-56f8494d9f-bd6vf   1/1     Running   2 (8m17s ago)   8m27s
scanner-v4-matcher-56f8494d9f-cprk7   1/1     Running   3 (8m6s ago)    8m27s
sensor-597855f4c7-6xv9d               1/1     Running   0               23s

Sensor had restarted, but no other components had restarted.

The API was checked and process baseline auto-locking was disabled.

{
  "clusters": [
    {
      "id": "93ed7f56-ab0b-4de3-b78d-b0fb1e58a41c",
      "name": "perf-test",
      "type": "OPENSHIFT4_CLUSTER",
      "labels": {},
      "mainImage": "quay.io/stackrox-io/main",
      "collectorImage": "quay.io/stackrox-io/collector",
      "centralApiEndpoint": "central.stackrox.svc:443",
      "runtimeSupport": true,
      "collectionMethod": "CORE_BPF",
      "admissionController": true,
      "admissionControllerUpdates": true,
      "admissionControllerEvents": true,

...

      "dynamicConfig": {
        "admissionControllerConfig": {
          "enabled": true,
          "timeoutSeconds": 10,
          "scanInline": true,
          "disableBypass": false,
          "enforceOnUpdates": true
        },
        "registryOverride": "",
        "disableAuditLogs": false,
        "autoLockProcessBaselinesConfig": {
          "enabled": false
        }
      },

...

      "helmConfig": {
        "dynamicConfig": {
          "admissionControllerConfig": {
            "enabled": true,
            "timeoutSeconds": 10,
            "scanInline": true,
            "disableBypass": false,
            "enforceOnUpdates": true
          },
          "registryOverride": "",
          "disableAuditLogs": false,
          "autoLockProcessBaselinesConfig": {
            "enabled": false
          }
        },

...

A pod was created for testing.

kubectl run ubuntu-pod --image=ubuntu --restart=Never --command -- sleep infinity
kubectl exec ubuntu-pod -it -- /bin/bash
cat /proc/1/net/tcp

The UI was checked a little more than three minutes later.


The baseline was still unlocked as expected.

Checking helm templates and files

Ran

./make-helm-charts.sh

Took a look at the output

cd stackrox-secured-cluster-chart-4.9.x-895-g8e784050d1

internal/defaults/30-base-config.yaml was missing autoLockProcessBaselines, as expected.

auditLogs:
  disableCollection: {{ ne ._rox.env.openshift 4 }}

network:
  enableNetworkPolicies: true

./internal/cluster-config.yaml.tpl

was also missing autoLockProcessBaselines as expected

  dynamicConfig:
    disableAuditLogs: {{ ._rox.auditLogs.disableCollection | not | not }}
    admissionControllerConfig:
      enabled: {{ ._rox.admissionControl.dynamic.enforceOnCreates }}
      timeoutSeconds: {{ ._rox.admissionControl.dynamic.timeout }}
      scanInline: {{ ._rox.admissionControl.dynamic.scanInline }}
      disableBypass: {{ ._rox.admissionControl.dynamic.disableBypass }}
      enforceOnUpdates: {{ ._rox.admissionControl.dynamic.enforceOnUpdates }}
    registryOverride: {{ ._rox.registryOverride }}

The secured cluster was deployed without setting --set autoLockProcessBaselines.enabled.

The helm-cluster-config secret was checked

ks get secret helm-cluster-config -o yaml

The base64 encoded secret was decoded.
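The decode step can be scripted. A sketch only: the exact data key of the helm-cluster-config secret is not shown above, so the `config.yaml` jsonpath in the comment is an assumption; the self-contained part just demonstrates the base64 round trip.

```shell
#!/usr/bin/env bash
# Sketch of the decode step. The data key is an assumption (not shown above):
#   kubectl -n stackrox get secret helm-cluster-config \
#     -o jsonpath='{.data.config\.yaml}' | base64 -d
# Self-contained demo of the same base64 round trip:
encoded=$(printf 'clusterName: perf-test' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```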

clusterName: perf-test
managedBy: MANAGER_TYPE_HELM_CHART
clusterConfig:
  staticConfig:
    type: KUBERNETES_CLUSTER
    mainImage: quay.io/stackrox-io/main
    collectorImage: quay.io/stackrox-io/collector
    centralApiEndpoint: central.stackrox.svc:443
    collectionMethod: CORE_BPF
    admissionController: true
    admissionControllerUpdates: true
    admissionControllerEvents: true
    admissionControllerFailOnError: false
    tolerationsConfig:
      disabled: false
    slimCollector: false
  dynamicConfig:
    disableAuditLogs: true
    admissionControllerConfig:
      enabled: true
      timeoutSeconds: 10
      scanInline: true
      disableBypass: false
      enforceOnUpdates: true
    registryOverride: 
  configFingerprint: 7c6f7708de781f3fdd28463fb19e1cfbe30808aac24dd2f2bc50cc12c4cd1e9c
  clusterLabels:
    null

autoLockProcessBaselines is missing as expected.
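The "field absent" check can be made mechanical rather than visual. A minimal sketch, where `$decoded` stands in for the decoded secret contents shown above:

```shell
#!/usr/bin/env bash
# Sketch: assert that a decoded cluster config does NOT contain the new field.
# $decoded is a stand-in sample for the secret contents shown above.
decoded='dynamicConfig:
  disableAuditLogs: true
  registryOverride: '
if printf '%s\n' "$decoded" | grep -q 'autoLockProcessBaselinesConfig'; then
  echo "unexpected: autoLockProcessBaselinesConfig present" >&2
  exit 1
fi
echo "autoLockProcessBaselinesConfig absent, as expected"
```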

The test was repeated with --set autoLockProcessBaselines.enabled=true.

The secret was checked again

clusterName: perf-test
managedBy: MANAGER_TYPE_HELM_CHART
clusterConfig:
  staticConfig:
    type: KUBERNETES_CLUSTER
    mainImage: quay.io/stackrox-io/main
    collectorImage: quay.io/stackrox-io/collector
    centralApiEndpoint: central.stackrox.svc:443
    collectionMethod: CORE_BPF
    admissionController: true
    admissionControllerUpdates: true
    admissionControllerEvents: true
    admissionControllerFailOnError: false
    tolerationsConfig:
      disabled: false
    slimCollector: false
  dynamicConfig:
    disableAuditLogs: true
    admissionControllerConfig:
      enabled: true
      timeoutSeconds: 10
      scanInline: true
      disableBypass: false
      enforceOnUpdates: true
    registryOverride: 
  configFingerprint: 6dd8932f8f5f866e767f960b9dc4009913a3f95da1fb1757b6d810054dda05dd
  clusterLabels:
    null

As expected, autoLockProcessBaselines is missing.

Testing with the feature flag enabled

make-helm-charts.sh was altered to the following

#!/usr/bin/env bash
set -eou pipefail

cd ../../

tag="$(make tag)"

cd -

repo=${1:-stackrox-io}

docker run --rm -v "$(pwd):/usr/src/stackrox" quay.io/$repo/roxctl:"$tag" helm output central-services --image-defaults opensource --output-dir /usr/src/stackrox/stackrox-central-services-chart-"$tag"

docker run --rm -v "$(pwd):/usr/src/stackrox" -e ROX_AUTO_LOCK_PROCESS_BASELINES=true quay.io/$repo/roxctl:"$tag" helm output secured-cluster-services --image-defaults opensource --output-dir /usr/src/stackrox/stackrox-secured-cluster-chart-"$tag"

This enabled the feature flag while the Helm charts were generated.

The helm chart templates were checked

cd stackrox-secured-cluster-chart-4.9.x-895-g8e784050d1

internal/defaults/30-base-config.yaml had autoLockProcessBaselines

auditLogs:
  disableCollection: {{ ne ._rox.env.openshift 4 }}
autoLockProcessBaselines:
  enabled: false

network:
  enableNetworkPolicies: true

internal/cluster-config.yaml.tpl had autoLockProcessBaselines

  dynamicConfig:
    disableAuditLogs: {{ ._rox.auditLogs.disableCollection | not | not }}
    admissionControllerConfig:
      enabled: {{ ._rox.admissionControl.dynamic.enforceOnCreates }}
      timeoutSeconds: {{ ._rox.admissionControl.dynamic.timeout }}
      scanInline: {{ ._rox.admissionControl.dynamic.scanInline }}
      disableBypass: {{ ._rox.admissionControl.dynamic.disableBypass }}
      enforceOnUpdates: {{ ._rox.admissionControl.dynamic.enforceOnUpdates }}
    registryOverride: {{ ._rox.registryOverride }}
    autoLockProcessBaselinesConfig:
      enabled: {{ ._rox.autoLockProcessBaselines.enabled }}
  configFingerprint: {{ ._rox._configFP }}
  clusterLabels: {{- toYaml ._rox.clusterLabels | nindent 4 }}
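The template change can also be verified from a rendered chart without installing. A sketch under stated assumptions: the chart path and release name in the comment are placeholders, and required values (cluster name, init bundle) are omitted for brevity; the grep check itself is demonstrated on a sample rendered fragment.

```shell
#!/usr/bin/env bash
# Sketch: check a rendered chart for the new field without installing.
# With helm available, something like (placeholders, required values omitted):
#   helm template stackrox-secured-cluster-services ./stackrox-secured-cluster-chart \
#     --set autoLockProcessBaselines.enabled=true > rendered.yaml
# The grep check, demonstrated on a sample rendered fragment:
rendered='    autoLockProcessBaselinesConfig:
      enabled: true'
printf '%s\n' "$rendered" | grep -A1 'autoLockProcessBaselinesConfig'
```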

The secured cluster was deployed with --set autoLockProcessBaselines.enabled=true.

The helm-cluster-config secret shows that the feature is enabled

clusterName: perf-test
managedBy: MANAGER_TYPE_HELM_CHART
clusterConfig:
  staticConfig:
    type: KUBERNETES_CLUSTER
    mainImage: quay.io/stackrox-io/main
    collectorImage: quay.io/stackrox-io/collector
    centralApiEndpoint: central.stackrox.svc:443
    collectionMethod: CORE_BPF
    admissionController: true
    admissionControllerUpdates: true
    admissionControllerEvents: true
    admissionControllerFailOnError: false
    tolerationsConfig:
      disabled: false
    slimCollector: false
  dynamicConfig:
    disableAuditLogs: true
    admissionControllerConfig:
      enabled: true
      timeoutSeconds: 10
      scanInline: true
      disableBypass: false
      enforceOnUpdates: true
    registryOverride: 
    autoLockProcessBaselinesConfig:
      enabled: true
  configFingerprint: 6dd8932f8f5f866e767f960b9dc4009913a3f95da1fb1757b6d810054dda05dd
  clusterLabels:
    null


openshift-ci bot commented Aug 19, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@red-hat-konflux (Contributor) commented:

Caution

There are some errors in your PipelineRun template.

PipelineRun Error
central-db-on-push CEL expression evaluation error: expression "(\n event == \"push\" && target_branch.matches(\"^(master|release-.*|refs/tags/.*)$\")\n) || (\n event == \"pull_request\" && (\n target_branch.startsWith(\"release-\") ||\n source_branch.matches(\"(konflux|renovate|appstudio|rhtap)\") ||\n body.pull_request.labels.exists(l, l.name == \"konflux-build\")\n )\n)\n" failed to evaluate: no such key: labels
The same CEL expression evaluation error (failed to evaluate: no such key: labels) was reported for: main-on-push, operator-on-push, operator-bundle-on-push, retag-collector, retag-scanner-db-slim, retag-scanner-db, retag-scanner-slim, retag-scanner, roxctl-on-push, scanner-v4-on-push, and scanner-v4-db-on-push.

@JoukoVirtanen JoukoVirtanen changed the base branch from master to jv-add-proceess-baseline-autolocking-to-cluster-config August 19, 2025 23:28

rhacs-bot commented Aug 20, 2025

Images are ready for the commit at 2917cf7.

To use with deploy scripts, first export MAIN_IMAGE_TAG=4.9.x-966-g2917cf7116.


codecov bot commented Aug 20, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 48.78%. Comparing base (138c1c3) to head (2917cf7).
⚠️ Report is 2 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #16462      +/-   ##
==========================================
- Coverage   48.79%   48.78%   -0.01%     
==========================================
  Files        2712     2712              
  Lines      202332   202335       +3     
==========================================
- Hits        98731    98717      -14     
- Misses      95817    95830      +13     
- Partials     7784     7788       +4     
Flag Coverage Δ
go-unit-tests 48.78% <100.00%> (-0.01%) ⬇️

@JoukoVirtanen JoukoVirtanen force-pushed the jv-add-proceess-baseline-autolocking-to-cluster-config branch from 27908d7 to 27e1369 Compare August 20, 2025 23:13
@JoukoVirtanen JoukoVirtanen requested review from a team as code owners August 20, 2025 23:13
@JoukoVirtanen JoukoVirtanen force-pushed the jv-ROX-30578-configure-process-baseline-auto-locking-via-helm branch from faba628 to 37f5da3 Compare August 20, 2025 23:24
@ajheflin (Contributor) left a comment:
LGTM from the CWF side. Would like to get an install approval as well, though

@JoukoVirtanen JoukoVirtanen force-pushed the jv-ROX-30578-configure-process-baseline-auto-locking-via-helm branch from a6ee31c to 3e60e4a Compare September 5, 2025 21:25
@JoukoVirtanen JoukoVirtanen requested review from a team as code owners September 5, 2025 21:25
@JoukoVirtanen JoukoVirtanen force-pushed the jv-ROX-30578-configure-process-baseline-auto-locking-via-helm branch from 6cb6343 to 2917cf7 Compare October 1, 2025 12:27
@mclasmeier (Contributor) left a comment:
lgtm

@JoukoVirtanen JoukoVirtanen dismissed clickboo’s stale review October 1, 2025 12:42

Khusboo messaged me "If Moritz approves please feel free to dismiss my review to unblock your merge."

@JoukoVirtanen JoukoVirtanen enabled auto-merge (squash) October 1, 2025 12:43
@JoukoVirtanen (Contributor, Author) commented:

/test gke-nongroovy-e2e-tests

@JoukoVirtanen JoukoVirtanen merged commit aa7b51b into master Oct 1, 2025
100 of 101 checks passed
@JoukoVirtanen JoukoVirtanen deleted the jv-ROX-30578-configure-process-baseline-auto-locking-via-helm branch October 1, 2025 15:51