ROX-30959: evaluate baseline benchmark#16932

Merged
dashrews78 merged 2 commits intomasterfrom
dashrews/evaluate-baseline-benchmark-30959
Sep 22, 2025

Conversation

Contributor

@dashrews78 dashrews78 commented Sep 19, 2025

Description

In preparation for the rest of epic ROX-30958, we need a benchmark so we can compare results as the improvements are made. Cursor was used to create the vast majority of this, iterating through the scenarios and narrowing the scope as we went.

User-facing documentation

Testing and quality

  • the change is production ready: the change is GA, or otherwise the functionality is gated by a feature flag
  • CI results are inspected

Automated testing

  • added unit tests
  • added e2e tests
  • added regression tests
  • added compatibility tests
  • modified existing tests

How I validated my change

dashrews-mac:evaluator dashrews$  go test -tags sql_integration -bench=. -benchmem -run=^$
goos: darwin
goarch: arm64
pkg: github.com/stackrox/rox/central/processbaseline/evaluator
cpu: Apple M3 Pro
BenchmarkEvaluateBaselinesAndPersistResult/100_processes_2_containers-12         	    4561	    244078 ns/op	  320113 B/op	    2435 allocs/op
BenchmarkEvaluateBaselinesAndPersistResult/500_processes_3_containers-12         	    1712	    910972 ns/op	 1834848 B/op	   12944 allocs/op
BenchmarkEvaluateBaselinesAndPersistResult/1000_processes_5_containers-12        	     644	   2531988 ns/op	 4815912 B/op	   33949 allocs/op
BenchmarkEvaluateBaselinesAndPersistResult/1000_processes_5_containers_large_args-12         	     303	   4671069 ns/op	 9539243 B/op	   54954 allocs/op
BenchmarkEvaluateBaselinesAndPersistResult/2000_processes_10_containers-12                   	     188	   7352438 ns/op	15576853 B/op	   96958 allocs/op
BenchmarkEvaluateBaselinesAndPersistResult/10000_processes_20_containers-12                  	      69	  21889423 ns/op	45721076 B/op	  306961 allocs/op
BenchmarkEvaluateBaselinesAndPersistResult/25000_processes_50_containers-12                  	      30	  45271204 ns/op	97118233 B/op	  656966 allocs/op
BenchmarkEvaluateBaselinesAndPersistResult/50000_processes_100_containers-12                 	      13	  95517465 ns/op	198884133 B/op	 1356970 allocs/op
BenchmarkEvaluateBaselinesAndPersistResult/50000_processes_100_containers_large_args-12      	       6	 240790167 ns/op	365835746 B/op	 2056975 allocs/op
BenchmarkEvaluateBaselinesSmallScale/2500_processes_5_containers-12                          	     891	   1283776 ns/op	 2504541 B/op	   17843 allocs/op
PASS
ok  	github.com/stackrox/rox/central/processbaseline/evaluator	30.195s

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • Consider using a fixed base time instead of time.Now() in generateProcessIndicators to make benchmark timings deterministic and reproducible.
  • Reset or truncate the test database between benchmark iterations since EvaluateBaselinesAndPersistResult persists data and can skew subsequent run results.
  • The generateProcessIndicators function is quite large—consider extracting or refactoring it into shared test utilities to improve readability.

@rhacs-bot
Contributor

Images are ready for the commit at 4da825a.

To use with deploy scripts, first export MAIN_IMAGE_TAG=4.9.x-835-g4da825a5c7.
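For example (the deploy entry point varies by setup; `deploy/k8s/deploy.sh` below is illustrative, not a path confirmed by this PR):

```shell
# Point the deploy scripts at this PR's image build.
export MAIN_IMAGE_TAG=4.9.x-835-g4da825a5c7
echo "$MAIN_IMAGE_TAG"
# Then run your usual deploy script, e.g.:
# ./deploy/k8s/deploy.sh
```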

@codecov

codecov bot commented Sep 19, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 49.02%. Comparing base (e5c1e30) to head (4da825a).
⚠️ Report is 15 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #16932      +/-   ##
==========================================
- Coverage   49.02%   49.02%   -0.01%     
==========================================
  Files        2691     2691              
  Lines      201626   201631       +5     
==========================================
- Hits        98844    98840       -4     
- Misses      95119    95126       +7     
- Partials     7663     7665       +2     
Flag            Coverage Δ
go-unit-tests   49.02% <ø> (-0.01%) ⬇️

@dashrews78
Contributor Author

@dashrews78 dashrews78 merged commit 004051f into master Sep 22, 2025
100 checks passed
@dashrews78 dashrews78 deleted the dashrews/evaluate-baseline-benchmark-30959 branch September 22, 2025 09:41
