
ROX-30135: Auto-lock process baselines #16564

Merged
JoukoVirtanen merged 45 commits into master from
jv-ROX-30135-send-baselines-to-sensor-when-deployment-leaves-observation-minimal-change
Sep 3, 2025

Conversation

@JoukoVirtanen
Contributor

@JoukoVirtanen JoukoVirtanen commented Aug 28, 2025

Description

When a deployment leaves the observation window, a "user"-locked process baseline is created for it automatically, persisted in the database, and sent to sensor. This change is gated behind a feature flag and implemented in the detection lifecycle manager. Previously, the detection lifecycle manager created a StackRox-locked baseline and persisted it in the database without sending it to sensor.

The consequence of this change is that when the feature flag is enabled, anomalous processes trigger alerts after the observation period ends, whereas previously anomalous processes were merely flagged in "Risk".
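A quick way to tell which kind of lock a baseline carries is to inspect the lock timestamps in the JSON returned by /v1/processbaselines/key (the same endpoint the test scripts below use). A minimal sketch — the userLockedTimestamp and stackRoxLockedTimestamp field names are assumptions based on the storage proto, so verify them against your Central version:

```shell
#!/usr/bin/env bash
# Sketch: classify a process baseline's lock state from its JSON.
# The userLockedTimestamp / stackRoxLockedTimestamp field names are
# assumptions from the storage proto; verify against your Central version.
lock_state() {
  local baseline_json="$1"
  local user_ts stackrox_ts
  user_ts="$(echo "$baseline_json" | jq -r '.userLockedTimestamp // empty')"
  stackrox_ts="$(echo "$baseline_json" | jq -r '.stackRoxLockedTimestamp // empty')"
  if [[ -n "$user_ts" ]]; then
    echo "user-locked"        # auto-lock (with this PR) or a manual lock
  elif [[ -n "$stackrox_ts" ]]; then
    echo "stackrox-locked"    # previous behavior: flagged in Risk only
  else
    echo "unlocked"
  fi
}
```

With the feature flag enabled, a baseline that has left the observation window should report user-locked rather than stackrox-locked.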

Two PRs build on top of this one:

  • Add process baseline autolocking to cluster config (#16427)
  • Configure process baseline auto locking via helm (#16462)

User-facing documentation

Testing and quality

  • the change is production ready: the change is GA, or otherwise the functionality is gated by a feature flag
  • CI results are inspected

ocp-4-19-qa-e2e-tests and ocp-4-12-qa-e2e-tests failed, but they also failed in the nightlies with the same error.

Automated testing

  • added unit tests
  • added e2e tests
  • added regression tests
  • added compatibility tests
  • modified existing tests

How I validated my change

The observation period was set to 3m

export ROX_BASELINE_GENERATION_DURATION=3m

The e2e-test.sh script was used for testing. It has the following contents:
#!/usr/bin/env bash
set -eou pipefail

ROX_ENDPOINT=${1:-https://localhost:8000}

get_process_baseline() {
  query="key.deploymentId=${deployment_id}&key.containerName=${container_name}&key.clusterId=${cluster_id}&key.namespace=${namespace}"

  process_baseline_json="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/processbaselines/key?${query}" -k --header "Authorization: Bearer $ROX_API_TOKEN")"

  echo "$process_baseline_json" | jq
}

get_processes() {
  container_json="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/processes/deployment/${deployment_id}/grouped/container" -k --header "Authorization: Bearer $ROX_API_TOKEN")"

  echo "$container_json" | jq
}

get_violations() {
  violations_json="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/alerts" -k --header "Authorization: Bearer $ROX_API_TOKEN")"

  ubuntu_violations_json="$(echo "$violations_json" | jq '.alerts[] | select(.deployment.id == "'"$deployment_id"'")')"
  process_violations_json="$(echo "$ubuntu_violations_json" | jq 'select(.policy.name == "Unauthorized Process Execution")')"

  violation_id="$(echo "$process_violations_json" | jq -r .id)"

  detailed_violation_json="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/alerts/${violation_id}" -k --header "Authorization: Bearer $ROX_API_TOKEN")"

  echo "$detailed_violation_json" | jq
}

lock_process_baseline() {
  data="$(echo "$key" | jq '{
    keys: [ . ],
    locked: true
  }')"

  process_baselines_json="$(curl --location --silent --request PUT "${ROX_ENDPOINT}/v1/processbaselines/lock" -k --header "Authorization: Bearer $ROX_API_TOKEN" --data "$data")"
}

unlock_process_baseline() {
  data="$(echo "$key" | jq '{
    keys: [ . ],
    locked: false
  }')"

  process_baselines_json="$(curl --location --silent --request PUT "${ROX_ENDPOINT}/v1/processbaselines/lock" -k --header "Authorization: Bearer $ROX_API_TOKEN" --data "$data")"
}

get_state() {
  echo "Process baseline"
  process_baseline_json="$(get_process_baseline)"
  echo "$process_baseline_json" | jq
  echo
  echo
  echo

  container_json="$(get_processes)"

  echo "Processes"
  echo "$container_json" | jq
  echo
  echo
  echo

  violations_json="$(get_violations)"

  echo "Violations"
  echo "$violations_json" | jq
  echo
  echo
  echo
}

header() {
  echo
  echo "############################################"
  echo
}

wait_time=30

kubectl delete pod ubuntu-pod || true

echo "Creating ubuntu-pod deployment"
kubectl run ubuntu-pod --image=ubuntu --restart=Never --command -- sleep infinity
kubectl wait --for=condition=Ready pod/ubuntu-pod --timeout=300s

kubectl exec ubuntu-pod -it -- cat /proc/1/net/tcp

sleep "$wait_time"

json_deployments="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/deploymentswithprocessinfo" -k -H "Authorization: Bearer $ROX_API_TOKEN")"

json_keys="$(echo "$json_deployments" | jq '{
  keys: [.deployments[] | .deployment as $d | select(.deployment.name == "ubuntu-pod") | {
    deployment_id: $d.id,
    container_name: "ubuntu-pod",
    cluster_id: $d.clusterId,
    namespace: "default"
  }]
}')"

key="$(echo "$json_keys" | jq '.keys[0]')"
echo "$key" | jq

deployment_id="$(echo "$key" | jq -r .deployment_id)"
container_name="$(echo "$key" | jq -r .container_name)"
cluster_id="$(echo "$key" | jq -r .cluster_id)"
namespace="$(echo "$key" | jq -r .namespace)"

echo "Initial state"
get_state

echo "Sleep for three minutes"
sleep 3m
if [[ "${MANUALLY_LOCK_PROCESS_BASELINE}" == "true" ]]; then
  lock_process_baseline
fi
echo "Plus a buffer"
sleep "$wait_time"

header
echo "After sleep"
get_state

kubectl exec ubuntu-pod -it -- tac /proc/1/net/tcp
sleep "$wait_time"

header
echo "After tac"
get_state

unlock_process_baseline

sleep "$wait_time"

header
echo "After unlocking process baseline"
get_state

kubectl exec ubuntu-pod -it -- ls /proc/1/net/tcp
sleep "$wait_time"

header
echo "After running a process after unlocking"
get_state

lock_process_baseline

sleep "$wait_time"

header
echo "After manually locking"
get_state

kubectl exec ubuntu-pod -it -- basename /proc/1/net/tcp
sleep "$wait_time"

header
echo "After running a process after manually locking"
get_state

echo "Completed script"

The script performs the following steps:

  • Starts a pod
  • Runs a cat command inside it
  • Sleeps for more than three minutes (long enough for auto-locking)
  • Runs a tac command inside the pod
  • Unlocks the process baseline
  • Runs an ls command inside the pod
  • Locks the process baseline
  • Runs a basename command inside the pod
After each step there are API calls to check the status of the baseline, the processes associated with the pod (and whether they are anomalous), and alerts.
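The fixed sleep intervals between checks can be flaky on a slow cluster. A small polling helper (a sketch, not part of this PR's scripts; wait_until is a hypothetical name) retries a check until it passes or a deadline expires:

```shell
#!/usr/bin/env bash
# Sketch: retry a command until it succeeds or timeout_s elapses.
# Could replace the fixed `sleep "$wait_time"` pauses with condition polling.
wait_until() {
  local timeout_s="$1"; shift
  local interval_s=2 elapsed=0
  until "$@"; do
    if (( elapsed >= timeout_s )); then
      return 1    # deadline expired without the condition passing
    fi
    sleep "$interval_s"
    (( elapsed += interval_s ))
  done
}
```

For example, instead of `sleep "$wait_time"` followed by `get_state`, the script could run `wait_until 60 check_baseline_locked`, where check_baseline_locked is a hypothetical function that inspects the baseline JSON.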

  • After running the cat command

The process baseline is unlocked, with the sleep and cat commands in the baseline. Neither process is anomalous. There are no unauthorized process violations for the pod.

  • After sleeping for more than three minutes

The process baseline is locked. There is no change to the processes or violations.

  • After running the tac command

The process baseline is unchanged. The tac command is listed as anomalous, and appears in violations.

  • After unlocking the process baseline

The process baseline is unlocked. There are no other changes.

  • After running the ls command

The ls command is anomalous, but does not appear in violations.

  • After locking the process baseline

The baseline is locked. There are no other changes.

  • After the basename command is run

The basename process is anomalous and shows up in violations with the tac process.

With the feature flag disabled

  • After running the cat command

The same as when the feature flag was enabled: the process baseline is unlocked, with the sleep and cat commands in the baseline. Neither process is anomalous. There are no unauthorized process violations for the pod.

  • After sleeping for more than three minutes

No change. The process baseline remains unlocked. This is different from the case where the feature flag was enabled.

  • After running the tac command

No change to the process baseline. The tac command is anomalous, but does not show up in violations.

  • After unlocking the process baseline

The process baseline was already unlocked so there is no change.

  • After running the ls command

The ls command is anomalous, but does not show up in violations.

  • After locking the process baseline

The process baseline is locked. There are no other changes.

  • After the basename command is run

The basename process is anomalous and appears in violations.

Testing on master

The results on master were the same as running on this branch with feature flag disabled.

Testing on this branch with feature flag disabled and manually locking after three minutes

The results were the same as running this branch with the feature flag enabled, and the same as running master while manually locking the process baseline after three minutes.

Testing on master with manually locking after three minutes

The results were the same as with this branch and the feature flag enabled.

Testing with early locking

The observation period was increased to 5m

export ROX_BASELINE_GENERATION_DURATION=5m

The following script was used to test what happens when a process baseline is locked early:

e2e-early-lock.sh
#!/usr/bin/env bash
set -eou pipefail

ROX_ENDPOINT=${1:-https://localhost:8000}

get_process_baseline() {
  query="key.deploymentId=${deployment_id}&key.containerName=${container_name}&key.clusterId=${cluster_id}&key.namespace=${namespace}"

  process_baseline_json="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/processbaselines/key?${query}" -k --header "Authorization: Bearer $ROX_API_TOKEN")"

  echo "$process_baseline_json" | jq
}

get_processes() {
  container_json="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/processes/deployment/${deployment_id}/grouped/container" -k --header "Authorization: Bearer $ROX_API_TOKEN")"

  echo "$container_json" | jq
}

get_violations() {
  violations_json="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/alerts" -k --header "Authorization: Bearer $ROX_API_TOKEN")"

  ubuntu_violations_json="$(echo "$violations_json" | jq '.alerts[] | select(.deployment.id == "'"$deployment_id"'")')"
  process_violations_json="$(echo "$ubuntu_violations_json" | jq 'select(.policy.name == "Unauthorized Process Execution")')"

  violation_id="$(echo "$process_violations_json" | jq -r .id)"

  detailed_violation_json="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/alerts/${violation_id}" -k --header "Authorization: Bearer $ROX_API_TOKEN")"

  echo "$detailed_violation_json" | jq
}

lock_process_baseline() {
  data="$(echo "$key" | jq '{
    keys: [ . ],
    locked: true
  }')"

  process_baselines_json="$(curl --location --silent --request PUT "${ROX_ENDPOINT}/v1/processbaselines/lock" -k --header "Authorization: Bearer $ROX_API_TOKEN" --data "$data")"
}

unlock_process_baseline() {
  data="$(echo "$key" | jq '{
    keys: [ . ],
    locked: false
  }')"

  process_baselines_json="$(curl --location --silent --request PUT "${ROX_ENDPOINT}/v1/processbaselines/lock" -k --header "Authorization: Bearer $ROX_API_TOKEN" --data "$data")"
}

get_state() {
  echo "Process baseline"
  process_baseline_json="$(get_process_baseline)"
  echo "$process_baseline_json" | jq
  echo
  echo
  echo

  container_json="$(get_processes)"

  echo "Processes"
  echo "$container_json" | jq
  echo
  echo
  echo

  violations_json="$(get_violations)"

  echo "Violations"
  echo "$violations_json" | jq
  echo
  echo
  echo
}

header() {
  echo
  echo "############################################"
  echo
}

wait_time=30

kubectl delete pod ubuntu-pod || true

echo "Creating ubuntu-pod deployment"
kubectl run ubuntu-pod --image=ubuntu --restart=Never --command -- sleep infinity
kubectl wait --for=condition=Ready pod/ubuntu-pod --timeout=300s

kubectl exec ubuntu-pod -it -- cat /proc/1/net/tcp

sleep "$wait_time"

json_deployments="$(curl --location --silent --request GET "${ROX_ENDPOINT}/v1/deploymentswithprocessinfo" -k -H "Authorization: Bearer $ROX_API_TOKEN")"

json_keys="$(echo "$json_deployments" | jq '{
  keys: [.deployments[] | .deployment as $d | select(.deployment.name == "ubuntu-pod") | {
    deployment_id: $d.id,
    container_name: "ubuntu-pod",
    cluster_id: $d.clusterId,
    namespace: "default"
  }]
}')"

key="$(echo "$json_keys" | jq '.keys[0]')"
echo "$key" | jq

deployment_id="$(echo "$key" | jq -r .deployment_id)"
container_name="$(echo "$key" | jq -r .container_name)"
cluster_id="$(echo "$key" | jq -r .cluster_id)"
namespace="$(echo "$key" | jq -r .namespace)"

header
echo "Initial state"
get_state

echo "Sleep for one minute"
sleep 1m

lock_process_baseline

header
echo "Locking early"
get_state

kubectl exec ubuntu-pod -it -- tac /proc/1/net/tcp
sleep "$wait_time"

header
echo "After tac after locking early"
get_state

unlock_process_baseline

sleep "$wait_time"

header
echo "After unlocking process baseline"
get_state

kubectl exec ubuntu-pod -it -- ls /proc/1/net/tcp
sleep "$wait_time"

header
echo "After running ls after unlocking"
get_state

lock_process_baseline

sleep "$wait_time"

header
echo "After manually locking"
get_state

kubectl exec ubuntu-pod -it -- basename /proc/1/net/tcp
sleep "$wait_time"

header
echo "After running basename after manually locking"
get_state

sleep 3m

header
echo "After sleeping for 3m"
get_state

echo "Completed script"

The e2e-early-lock.sh script performs the following steps:

  • Starts a pod
  • Runs a cat command inside it
  • Sleeps for 1 minute and then locks the process baseline
  • Runs a tac command inside the pod
  • Unlocks the process baseline
  • Runs an ls command inside the pod
  • Locks the process baseline
  • Runs a basename command inside the pod
  • Sleeps for three minutes (combined with the other waits, enough time to reach the auto-lock deadline)
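Both scripts build the lock/unlock payload by piping the key through a jq filter; the same payload can be built directly with `jq -n`. A sketch (build_lock_payload is a hypothetical helper, not part of the scripts):

```shell
#!/usr/bin/env bash
# Sketch: build the /v1/processbaselines/lock request body from a key
# object and a desired lock state, using jq -n instead of piping the key.
build_lock_payload() {
  local key_json="$1" locked="$2"   # locked: true or false
  jq -n --argjson key "$key_json" --argjson locked "$locked" \
    '{keys: [$key], locked: $locked}'
}
```

The resulting JSON has the same shape as the data variable in lock_process_baseline and unlock_process_baseline above.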

This branch with feature flag enabled

  • After running the cat command

The process baseline is unlocked, with cat and sleep in the baseline. Neither process is anomalous. There are no violations.

  • After sleeping for 1 minute and then locking the process baseline

The process baseline is locked. There is no other change.

  • After running a tac command

The tac process is anomalous and appears in violations.

  • After unlocking the process baseline

The process baseline is unlocked. The tac process is no longer considered anomalous.

  • After running an ls command

The ls command appears in the baseline. There continues to be an alert for tac.

  • After manually locking

The process baseline is locked. The tac process becomes anomalous again.

  • After running basename command

The basename process is anomalous. The basename and tac processes both appear in violations.

  • After sleeping for 3 minutes

The deployment should leave the observation period during this time. There was no change.

Testing early locking with the feature flag disabled

The results were the same as when the feature flag was enabled.

Testing early locking on master

The results were the same as on this branch, both with and without the feature flag enabled.

@JoukoVirtanen JoukoVirtanen requested a review from a team as a code owner August 28, 2025 06:10
@red-hat-konflux
Contributor

Caution

There are some errors in your PipelineRun template.

PipelineRun Error
central-db-on-push CEL expression evaluation error: expression "(\n event == \"push\" && target_branch.matches(\"^(master|release-.*|refs/tags/.*)$\")\n) || (\n event == \"pull_request\" && (\n target_branch.startsWith(\"release-\") ||\n source_branch.matches(\"(konflux|renovate|appstudio|rhtap)\") ||\n body.pull_request.labels.exists(l, l.name == \"konflux-build\")\n )\n)\n" failed to evaluate: no such key: labels
(The same CEL evaluation error was reported for main-on-push, operator-on-push, operator-bundle-on-push, retag-collector, retag-scanner-db-slim, retag-scanner-db, retag-scanner-slim, retag-scanner, roxctl-on-push, scanner-v4-on-push, and scanner-v4-db-on-push.)

@rhacs-bot
Contributor

rhacs-bot commented Aug 28, 2025

Images are ready for the commit at 5592667.

To use with deploy scripts, first export MAIN_IMAGE_TAG=4.9.x-696-g5592667999.

@codecov

codecov bot commented Aug 28, 2025

Codecov Report

❌ Patch coverage is 19.40299% with 54 lines in your changes missing coverage. Please review.
✅ Project coverage is 48.71%. Comparing base (2f72d06) to head (5592667).
⚠️ Report is 2 commits behind head on master.

Files with missing lines Patch % Lines
central/detection/lifecycle/manager_impl.go 14.28% 42 Missing ⚠️
central/processbaseline/service/service_impl.go 12.50% 6 Missing and 1 partial ⚠️
central/detection/lifecycle/manager.go 0.00% 3 Missing ⚠️
central/detection/lifecycle/singleton.go 0.00% 1 Missing ⚠️
central/processbaseline/service/singleton.go 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #16564      +/-   ##
==========================================
- Coverage   48.72%   48.71%   -0.01%     
==========================================
  Files        2658     2659       +1     
  Lines      198307   198531     +224     
==========================================
+ Hits        96623    96724     +101     
- Misses      94114    94224     +110     
- Partials     7570     7583      +13     
Flag Coverage Δ
go-unit-tests 48.71% <19.40%> (-0.01%) ⬇️


@JoukoVirtanen JoukoVirtanen requested a review from janisz as a code owner August 29, 2025 04:51
@JoukoVirtanen
Contributor Author

/test ocp-4-12-qa-e2e-tests

@JoukoVirtanen JoukoVirtanen force-pushed the jv-ROX-30135-send-baselines-to-sensor-when-deployment-leaves-observation-minimal-change branch from 132c949 to e023cb1 Compare September 2, 2025 21:05
237bfd8 Lifecycle manager sends baselines to sensor
eee08dc Better separation of baseline creation and inserting them into the database
30d4c46 Cleanup
f8fcb4b Added a feature flag
f092fcc Only setting the user lock timestamp in detection lifecycle manager if the autolock feature flag is enabled
278b1bc Creating message separate from sending it
01ac856 Not sending baselines to sensor if they already exist and are locked
@JoukoVirtanen JoukoVirtanen force-pushed the jv-ROX-30135-send-baselines-to-sensor-when-deployment-leaves-observation-minimal-change branch from e023cb1 to bbd664c Compare September 2, 2025 21:08
@JoukoVirtanen JoukoVirtanen changed the title from "ROX-30135: Send baselines to sensor when deployment leaves observation minimal change" to "ROX-30135: Send baselines to sensor when deployment leaves observation 2" on Sep 2, 2025
@JoukoVirtanen
Contributor Author

/test ocp-4-19-qa-e2e-tests
/test ocp-4-12-qa-e2e-tests
/test gke-qa-e2e-tests

@JoukoVirtanen JoukoVirtanen changed the title from "ROX-30135: Send baselines to sensor when deployment leaves observation 2" to "ROX-30135: Auto-lock process baselines" on Sep 3, 2025
@JoukoVirtanen
Contributor Author

/test ocp-4-19-qa-e2e-tests
/test ocp-4-12-qa-e2e-tests

@openshift-ci

openshift-ci bot commented Sep 3, 2025

@JoukoVirtanen: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/ocp-4-19-qa-e2e-tests 5592667 link false /test ocp-4-19-qa-e2e-tests
ci/prow/ocp-4-12-qa-e2e-tests 5592667 link false /test ocp-4-12-qa-e2e-tests


@JoukoVirtanen JoukoVirtanen merged commit 93f7908 into master Sep 3, 2025
94 of 101 checks passed
@JoukoVirtanen JoukoVirtanen deleted the jv-ROX-30135-send-baselines-to-sensor-when-deployment-leaves-observation-minimal-change branch September 3, 2025 15:02