Conversation
Add FuzzParseQuery, FuzzParseQueryForAutocomplete, and FuzzGetValueAndModifiersFromString to exercise the search query parser with arbitrary input. These functions handle user-supplied search strings with modifiers, quoted values, and complex delimiters. AI-assisted
Add FuzzConvertPEMToDERs, FuzzConvertPEMTox509Certs, and FuzzParseCertificateChain to test certificate parsing with malformed PEM/DER data, truncated ASN.1, and non-certificate PEM types. AI-assisted
Add FuzzParseToken to exercise the service certificate token parser with arbitrary base64+protobuf input, ensuring no panics on malformed tokens. AI-assisted
Add FuzzEntityDescriptorsUnmarshal to test custom XML unmarshaling of SAML EntityDescriptor/EntitiesDescriptor with arbitrary XML input, including namespace variations and malformed documents. AI-assisted
Add FuzzSplitState, FuzzParseClientState, and FuzzAttachStateOrEmpty to exercise OAuth state parameter parsing with arbitrary strings, covering mode-specific prefixes and delimiter handling. AI-assisted
Add FuzzParseEndpoint to test endpoint parsing with IPv4, IPv6, zones, and edge cases. Includes corpus entry for "[:]". Documents that FormatEndpoint->ParseEndpoint roundtrip doesn't hold for all inputs (e.g. bare IPv6 without brackets). AI-assisted
Add FuzzParseIP, FuzzIPFromBytes, FuzzIPNetworkFromCIDR, and FuzzParseIPPortPair. Documents known edge case: IPv4-mapped IPv6 CIDRs (e.g. ::ffff:192.168.0.0/96) report as IPv4 family but retain IPv6 prefix lengths. AI-assisted
Add FuzzParseCVSSV3 to test CVSS v3 vector string parsing. Discovered a panic bug: ParseCVSSV3("CVSS:0") triggers "slice bounds out of range" in upstream cvss3.VectorFromString(). Includes a crasher corpus entry. This is a potential DoS vector if untrusted CVSS strings reach this function. AI-assisted
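The upstream panic is a classic unchecked-slice bug. The sketch below is self-contained and illustrative only — `safeParse` is not the `cvss3` API, it just shows the bug class and the guard that prevents it:

```go
package main

import (
	"fmt"
	"strings"
)

// safeParse is a hypothetical vector-prefix parser. A parser that assumes
// "CVSS:3.x/..." and slices without a length check panics with
// "slice bounds out of range" on short inputs like "CVSS:0".
func safeParse(vector string) (string, error) {
	const prefix = "CVSS:"
	if !strings.HasPrefix(vector, prefix) {
		return "", fmt.Errorf("missing %q prefix", prefix)
	}
	rest := vector[len(prefix):]
	// Guard before slicing: "CVSS:0" leaves rest too short for "3.x".
	if len(rest) < 3 {
		return "", fmt.Errorf("truncated version: %q", rest)
	}
	return rest[:3], nil
}

func main() {
	_, err := safeParse("CVSS:0")
	fmt.Println("short input rejected:", err != nil)
	v, _ := safeParse("CVSS:3.1/AV:N")
	fmt.Println("version:", v)
}
```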
Add 22 fuzz tests covering every regex pattern in value_regex.go: KeyValue, Boolean, String, Integer, ComparatorDecimal, Environment Variable, Dockerfile, Capabilities, RBAC, PortExposure, IPv4, IPv6, Severity, KubernetesName, FileOperation, SignatureIntegration, AuditEvent, KubernetesResource, MountPropagation, SeccompProfile, and CreateRegex. Tests for ReDoS and panics. AI-assisted
Add FuzzParseSource to test PostgreSQL connection string parsing with arbitrary input including special characters in passwords and malformed key=value pairs. AI-assisted
Add FuzzDecodeBytesList and FuzzBinaryEncRoundtrip. The roundtrip test verifies encode-decode consistency: EncodeBytesList(a,b,c) followed by DecodeBytesList must yield [a,b,c]. AI-assisted
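The roundtrip property can be sketched with a minimal length-prefixed codec. The wire format here is hypothetical — the repo's EncodeBytesList/DecodeBytesList may encode differently — but the invariant being fuzzed is the same:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodeBytesList writes each item as a 4-byte big-endian length prefix
// followed by the item bytes (illustrative format, not the repo's).
func encodeBytesList(items ...[]byte) []byte {
	var buf bytes.Buffer
	for _, item := range items {
		var lenBuf [4]byte
		binary.BigEndian.PutUint32(lenBuf[:], uint32(len(item)))
		buf.Write(lenBuf[:])
		buf.Write(item)
	}
	return buf.Bytes()
}

// decodeBytesList reverses encodeBytesList, returning an error (never
// panicking) on truncated input — the property a fuzzer checks.
func decodeBytesList(data []byte) ([][]byte, error) {
	var out [][]byte
	for len(data) > 0 {
		if len(data) < 4 {
			return nil, fmt.Errorf("truncated length prefix")
		}
		n := binary.BigEndian.Uint32(data[:4])
		data = data[4:]
		if uint32(len(data)) < n {
			return nil, fmt.Errorf("truncated item")
		}
		out = append(out, data[:n])
		data = data[n:]
	}
	return out, nil
}

func main() {
	enc := encodeBytesList([]byte("a"), []byte("bb"), []byte("ccc"))
	dec, err := decodeBytesList(enc)
	fmt.Println(err, len(dec), string(dec[2]))
}
```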
Add FuzzFromString, FuzzUnmarshalBinary, and FuzzUnmarshalText to test UUID parsing across standard, braced, and URN formats with roundtrip consistency checks. AI-assisted
Add FuzzParseRFC3339NanoTimestamp to exercise timestamp parsing with arbitrary strings including edge cases around timezones, nanosecond precision, and malformed formats. AI-assisted
Add FuzzParseRef to test K8s object reference parsing with various formats, unicode, and edge cases. Add table-driven unit tests covering valid and invalid inputs. AI-assisted
Add FuzzUnmarshalMitreAttackBundle and FuzzExtractMitreAttackBundle to test JSON parsing of external MITRE ATT&CK data with malformed, truncated, and structurally invalid input. AI-assisted
Add FuzzParseFilenameFromHeader and unit tests for extracting filenames from HTTP Content-Disposition headers, covering path traversal attempts, quoted values, and RFC 5987 encoded filenames. AI-assisted
Add FuzzParseLegacySpec to test parsing of [type@]endpoint specifications with arbitrary strings including multiple endpoints, missing types, and malformed separators. AI-assisted
Add FuzzParseURLQuery to test HTTP URL parameter parsing into query objects with arbitrary query strings. AI-assisted
Skipping CI for Draft Pull Request.
| GitGuardian id | GitGuardian status | Secret | Commit | Filename |
|---|---|---|---|---|
| 29359836 | Triggered | Generic Database Assignment | 6dce922 | pkg/postgres/pgconfig/config_test.go |
| 29359837 | Triggered | Generic Password | 6dce922 | pkg/postgres/pgconfig/config_test.go |
| 29360254 | Triggered | Generic Password | f8c2863 | pkg/postgres/pgconfig/config_test.go |
| 29360253 | Triggered | Generic Database Assignment | f8c2863 | pkg/postgres/pgconfig/config_test.go |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secrets safely, following best practices.
- Revoke and rotate these secrets.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and to ease remediation
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Hey - I've found 1 issue, and left some high level feedback:
- Many of the fuzz targets wrap the code under test in `defer`/`recover` or `assert.NotPanics`, which prevents the Go fuzzing engine from observing real panics and producing crashers; consider removing these panic guards so that panics are surfaced to the fuzzer instead of being treated as test failures.
- Several fuzz tests assert very strict behavioral or formatting invariants (for example, round-tripping `vectorStr` exactly in `FuzzParseCVSSV3`, or assuming specific non-empty fields after successful parses) that may not be guaranteed by the APIs and could make the fuzzers brittle against legitimate implementation changes; you may want to relax these expectations to focus on safety properties (no panics, basic validity ranges) rather than exact string or struct equality.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Many of the fuzz targets wrap the code under test in `defer`/`recover` or `assert.NotPanics`, which prevents the Go fuzzing engine from observing real panics and producing crashers; consider removing these panic guards so that panics are surfaced to the fuzzer instead of being treated as test failures.
- Several fuzz tests assert very strict behavioral or formatting invariants (for example, round‑tripping `vectorStr` exactly in `FuzzParseCVSSV3`, or assuming specific non‑empty fields after successful parses) that may not be guaranteed by the APIs and could make the fuzzers brittle against legitimate implementation changes; you may want to relax these expectations to focus on safety properties (no panics, basic validity ranges) rather than exact string or struct equality.
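Why the first point matters: a `recover` inside the target converts the panic into ordinary control flow, so `go test -fuzz` never records a crasher input. A self-contained illustration (names here are illustrative, not from the PR):

```go
package main

import "fmt"

// mightPanic stands in for code under fuzz; it panics on short inputs.
func mightPanic(s string) {
	_ = s[10] // index out of range for len(s) <= 10
}

// guarded shows the anti-pattern: the deferred recover swallows the
// panic, so the fuzzing engine only sees a normal return.
func guarded(s string) (panicked bool) {
	defer func() {
		if r := recover(); r != nil {
			panicked = true // fuzzer never observes this crash
		}
	}()
	mightPanic(s)
	return false
}

func main() {
	fmt.Println(guarded("short"))             // panic was swallowed
	fmt.Println(guarded("long enough input")) // no panic occurred
}
```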
## Individual Comments
### Comment 1
<location path="pkg/booleanpolicy/fuzz_test.go" line_range="10" />
<code_context>
+)
+
+// FuzzKeyValueRegex tests the keyValueValueRegex for ReDoS and correctness
+func FuzzKeyValueRegex(f *testing.F) {
+ // Seed with known valid values
+ f.Add("a=b")
</code_context>
<issue_to_address>
**suggestion (performance):** Per-input goroutine + 100ms timeout in fuzz tests is likely to be very slow and flaky under fuzzing workloads
Starting a goroutine and `time.After(100 * time.Millisecond)` per fuzz input will make this target very slow and prone to flakes once fuzzing ramps up concurrency. On contended CI machines, even normal inputs can exceed 100ms just due to scheduling, causing spurious “took too long” failures unrelated to actual ReDoS.
Please consider alternatives such as:
- Using a tighter, CPU-time–based guard in benchmarks (outside `testing.F`), or
- Using a reusable `context` + matcher wrapper with a very small deadline, or
- Moving ReDoS detection into targeted benchmarks and keeping the fuzz target for panics/correctness only.
If you keep a timeout here, at least share a worker goroutine per fuzz instance (not per input) and/or relax the timeout to reduce flakiness and resource usage.
Suggested implementation:
```golang
"regexp"
"testing"
```
From the partial file, I can’t see the full body of `FuzzKeyValueRegex`, but your comment indicates it currently does something like:
- For each fuzz input, starts a new goroutine.
- Uses `time.After(100 * time.Millisecond)` or similar to enforce a wall-clock timeout to detect potential ReDoS.
- Probably calls `keyValueValueRegex.MatchString` (or similar) inside that goroutine.
To implement your suggestion cleanly, I recommend:
1. **Keep the fuzz test focused on correctness / panics only:**
- Rewrite `FuzzKeyValueRegex` so that it:
- Seeds with your known values (as already shown).
- In `f.Fuzz(func(t *testing.T, s string) { ... })`, just runs the regex match directly:
```go
func FuzzKeyValueRegex(f *testing.F) {
// Seed with known valid values
f.Add("a=b")
f.Add("key=value")
f.Add("1=1")
f.Add(`.*\d=.*`)
// Seed with known invalid values
f.Add("")
f.Add("=")
f.Add("=a=b")
f.Add("no_equals")
f.Fuzz(func(t *testing.T, s string) {
// core operation: must not panic
_ = keyValueValueRegex.MatchString(s)
})
}
```
- Remove any goroutine + `time.After` logic from the fuzz function.
2. **Move ReDoS/speed checks into benchmarks:**
- In the same file, add a benchmark that explicitly probes potentially pathological inputs and measures performance, without per-input goroutine overhead:
```go
func BenchmarkKeyValueRegexReDoS(b *testing.B) {
candidates := []string{
"a=b",
"key=value",
"1=1",
strings.Repeat("a", 1<<10) + "=" + strings.Repeat("b", 1<<10),
// Add any known-evil patterns you’re worried about here.
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
s := candidates[i%len(candidates)]
if !keyValueValueRegex.MatchString(s) {
// We don't care about correctness here, just avoid compiler
// optimizing away the call.
b.Fatalf("unexpected no-match for %q", s)
}
}
}
```
- This moves performance / ReDoS detection out of fuzzing and into benchmarks as suggested.
3. **If you decide to keep a timeout-based guard anyway:**
- Instead of per-input goroutines, you can:
- Use a single worker goroutine per fuzz instance that processes inputs sent over a channel.
- Inside the worker, apply a *very small* per-input deadline using a `context.Context` with `context.WithTimeout`, but **not** per-fuzz-input goroutine creation around `MatchString`.
- However, this is more complex and still susceptible to flakiness; the benchmark approach above is simpler and more robust.
Because I only see the beginning of `FuzzKeyValueRegex`, the maintainer will need to:
- Remove the existing goroutine/timeout logic inside `FuzzKeyValueRegex` and replace it with the simpler direct `MatchString` call as shown.
- Add the new `BenchmarkKeyValueRegexReDoS` (and import `strings` if you use the example benchmark).
- Ensure any now-unused imports (like `time`) are removed; I’ve provided an edit for `time`, but do the same for any others.
</issue_to_address>
Images are ready for the commit at f8c2863. To use with deploy scripts, first
- Remove defer/recover wrappers from all fuzz tests so the Go fuzzing engine can observe panics and produce crasher corpus entries. The CVSS test retains its recover since the upstream panic is known.
- Remove goroutine+timeout ReDoS detection from all 22 booleanpolicy fuzz tests. Go's regexp uses a linear-time engine, making ReDoS impossible; the timeout added flakiness for zero value.
- Remove strict invariant assertions (roundtrip equality, field non-emptiness, modifier range checks) that made tests brittle. Fuzz tests now focus on the safety property: no panics.
- Remove unused imports (time, assert, require) where applicable.

Addresses feedback from sourcery-ai review on PR #19686. AI-assisted
Replace password=secret and password=p@ssw0rd! with clearly fake values (testonly, t3st!) in pgconfig test fixtures to avoid secret detection false positives. AI-assisted
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info: Configuration used: Central YAML (base), Organization UI (inherited). Review profile: CHILL. Plan: Pro.
📒 Files selected for processing (6)
✅ Files skipped from review due to trivial changes (2)
🚧 Files skipped from review as they are similar to previous changes (3)
📝 Walkthrough — Summary by CodeRabbit
Walkthrough: Adds extensive fuzz tests and targeted unit tests across multiple packages to exercise parsing, unmarshaling, encoding, and conversion routines against valid, malformed, and edge-case inputs, primarily to ensure functions do not panic and to exercise returned structures.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 4
🧹 Nitpick comments (8)
pkg/x509utils/fuzz_test.go (1)
167-181: Minor: Redundant empty chain test inside fuzz callback.
Line 177 calls `ParseCertificateChain([][]byte{})` on every fuzz iteration with identical input. This doesn't exercise varying fuzzed data and adds unnecessary overhead. Consider moving this constant test to the seed corpus or testing it once outside the fuzz loop.
Suggested refactor
f.Fuzz(func(t *testing.T, data []byte) { // Test with single DER certificate chain := [][]byte{data} _, _ = ParseCertificateChain(chain) // Test with multiple copies of the same data multiChain := [][]byte{data, data, data} _, _ = ParseCertificateChain(multiChain) - - // Test with empty chain - _, _ = ParseCertificateChain([][]byte{}) - // Note: We don't assert on errors since malformed input is expected to error // The key is that the function should never panic }) + + // Test empty chain once (constant input, no need to repeat in fuzz loop) + _, _ = ParseCertificateChain([][]byte{}) }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/x509utils/fuzz_test.go` around lines 167 - 181, The empty-chain invocation inside the fuzz loop is redundant and wastes cycles; remove the constant call to ParseCertificateChain([][]byte{}) from the f.Fuzz callback in pkg/x509utils/fuzz_test.go and instead add a single non-fuzzed test or seed corpus entry that calls ParseCertificateChain with an empty slice once. Keep the fuzz callback (f.Fuzz) to only exercise varying data (the single DER and multi-copy cases using the fuzzed data) and ensure the standalone test/seed covers the empty-chain behavior.pkg/booleanpolicy/fuzz_test.go (1)
855-884: Consider adding a basic match test after successful compilation.
`FuzzCreateRegex` only tests compilation but doesn't exercise the compiled regex with any input. A minimal match test would increase coverage:
Optional enhancement
// Attempt to compile the pattern - _, err := regexp.Compile("((?m)^" + pattern + "$)") + re, err := regexp.Compile("((?m)^" + pattern + "$)") if err != nil { // Invalid pattern is expected for fuzz inputs return } - // If compilation succeeds, the test passes + // Basic exercise of the compiled regex + _ = re.MatchString("test") })🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/booleanpolicy/fuzz_test.go` around lines 855 - 884, After successfully compiling the regex in FuzzCreateRegex (the regexp.Compile call), exercise the compiled regex by calling its MatchString (or Match) method with a simple sample input to ensure it doesn't panic and behaves as expected; locate the compiled value returned by regexp.Compile (the variable assigned from Compile) and add a minimal assertion that re.MatchString("test") or another small, deterministic string runs without error (returning or logging as appropriate) so the fuzz test covers both compilation and a basic match path.roxctl/common/download/parse_filename_fuzz_test.go (1)
99-103: Avoid locking the fuzz oracle to current strict prefix parsing.
At lines 99-103, requiring an error for anything not starting with the exact `attachment; filename=` makes the fuzz test brittle if parser behavior is legitimately improved (e.g., tolerant casing/spacing). Prefer safety-oriented invariants here and keep exact-format expectations in unit tests.
♻️ Suggested adjustment
- // Verify the expected prefix handling - if headerValue != "" && !strings.HasPrefix(headerValue, "attachment; filename=") { - if err == nil { - t.Errorf("expected error for header %q without expected prefix, got filename %q", headerValue, filename) - } - } + // Keep fuzz assertions focused on safety/type invariants. + // Exact header-shape behavior is better enforced in table-driven unit tests.As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@roxctl/common/download/parse_filename_fuzz_test.go` around lines 99 - 103, The test currently treats any headerValue not starting with the exact string "attachment; filename=" as a failure by asserting err != nil; update the check in the fuzz test so it doesn't require a strict prefix match: remove the branch that expects an error for non-matching prefixes and instead, when err == nil assert only a safe invariant such as filename != "" (or other permissive correctness you're already testing) so the fuzz oracle remains tolerant to improved parsing; locate and change the code around headerValue, strings.HasPrefix(...), err, filename, and the t.Errorf call to implement this weaker assertion.pkg/uuid/uuid_test.go (1)
112-112: Use deterministic seed values in fuzz tests.
Using `NewV4()` in `f.Add(...)` makes seed inputs vary per run, which hurts reproducibility and triage consistency for fuzz failures.
func FuzzFromString(f *testing.F) { @@ - f.Add(NewV4().String()) // random valid UUID + f.Add("123e4567-e89b-12d3-a456-426614174000") // fixed valid UUID @@ func FuzzUnmarshalBinary(f *testing.F) { @@ - f.Add(NewV4().Bytes()) // random UUID + f.Add([]byte{0x12, 0x3e, 0x45, 0x67, 0xe8, 0x9b, 0x12, 0xd3, 0xa4, 0x56, 0x42, 0x66, 0x14, 0x17, 0x40, 0x00}) // fixed UUID bytesAs per coding guidelines, "
Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/uuid/uuid_test.go` at line 112, Replace the non-deterministic seed NewV4().String() used in the fuzz seed f.Add(...) with a deterministic constant UUID string or a parsed UUID constant (e.g., use a literal like "11111111-1111-1111-1111-111111111111" or uuid.MustParse on a fixed value) so fuzz runs are reproducible; locate uses of NewV4() inside f.Add(...) (and the other similar occurrence) and change them to use the fixed constant instead of generating a random V4 UUID at test runtime.pkg/netutil/endpoint_test.go (1)
193-213: Strengthen fuzz oracle and bound input size for better signal/perf.
The current `zone`/`port` checks are tautological, and the success path only checks "no panic." This can miss parser regressions while spending cycles on oversized mutations.
♻️ Proposed refactor
f.Fuzz(func(t *testing.T, endpoint string) { + if len(endpoint) > 4096 { + t.Skip() + } + // The primary goal is to ensure ParseEndpoint never panics, // regardless of input. We don't assert correctness of the parse, // just that it either succeeds or returns an error gracefully. host, zone, port, err := ParseEndpoint(endpoint) // If parsing succeeded, verify basic invariants if err == nil { // Host must not be empty on success assert.NotEmpty(t, host, "host should not be empty when parsing succeeds") - // If zone is present, it must not be empty - if zone != "" { - assert.NotEmpty(t, zone, "zone should not be empty if present") - } - - // If port is present, it must not be empty - if port != "" { - assert.NotEmpty(t, port, "port should not be empty if present") - } - - // NOTE: Round-trip (FormatEndpoint -> ParseEndpoint) doesn't always - // hold because ParseEndpoint accepts formats that FormatEndpoint - // doesn't produce (e.g. bare IPv6 without brackets). We only - // verify FormatEndpoint doesn't panic. - _ = FormatEndpoint(host, zone, port) + // Canonical round-trip oracle: parse -> format -> parse. + formatted := FormatEndpoint(host, zone, port) + rtHost, rtZone, rtPort, rtErr := ParseEndpoint(formatted) + assert.NoError(t, rtErr) + assert.Equal(t, host, rtHost) + assert.Equal(t, zone, rtZone) + assert.Equal(t, port, rtPort) } })As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/netutil/endpoint_test.go` around lines 193 - 213, The fuzz test's success checks are weak and unbounded; tighten them by (1) rejecting oversized inputs before calling ParseEndpoint (e.g., cap input length) to improve performance, (2) replace tautological checks on non-empty zone/port with stronger assertions such as validating port is a valid numeric port range when present and zone matches expected percent-encoding/format, and (3) where FormatEndpoint is expected to round-trip, assert that ParseEndpoint(FormatEndpoint(host, zone, port)) returns the same host/zone/port for bracketed IPv6 and other formats FormatEndpoint can produce; make these changes around the ParseEndpoint/FormatEndpoint call sites and the host/zone/port variables referenced in the diff.pkg/mitre/parser_fuzz_test.go (1)
11-120: Preserve the curated "valid" seeds with a semantic assertion.
These seeds are described as minimal valid bundles, but the target only checks that parsing does not panic. If `UnmarshalAndExtractMitreAttackBundle` starts returning an error for every valid seed, this fuzz test still stays green and the useful regression signal disappears. Please keep the panic-oriented fuzz target, but add a small table test (or exact-seed assertions) that expects success for the known-good cases.
Verify each finding against the current code and only fix it if needed. In `@pkg/mitre/parser_fuzz_test.go` around lines 11 - 120, The fuzz test only ensures no panic but loses regression signal for the curated valid seeds; add a small table-style unit test alongside the fuzz target that iterates the same curated seed byte slices (the ones passed to f.Add) and calls UnmarshalAndExtractMitreAttackBundle(Enterprise, []Platform{Linux}, data), asserting that it returns no error and returns expected non-nil/valid result where applicable; keep the existing Fuzz function unchanged (panic guarantee) and implement the new test (e.g., TestUnmarshalAndExtractMitreAttackBundle_ValidSeeds) which references the same seed literals so CI will fail if those known-good seeds start producing errors.pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go (1)
157-171: Let panics escape the fuzz target.
`assert.NotPanics` adds avoidable overhead in the hot path and turns a native fuzz failure into an assertion failure. Direct calls already fail the fuzz case if `ParseToken` panics.
Suggested change
- assert.NotPanics(t, func() { - _, _ = ParseToken(token, 0) - }) - - assert.NotPanics(t, func() { - _, _ = ParseToken(token, 10*time.Second) - }) - - assert.NotPanics(t, func() { - _, _ = ParseToken(token, 24*time.Hour) - }) + _, _ = ParseToken(token, 0) + _, _ = ParseToken(token, 10*time.Second) + _, _ = ParseToken(token, 24*time.Hour)Also remove the now-unused import at Line 17.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go` around lines 157 - 171, Remove the assert.NotPanics wrappers inside the f.Fuzz target and call ParseToken(token, ...) directly for each maxLeeway (0, 10*time.Second, 24*time.Hour) so panics escape to the fuzzer; also remove the now-unused import of the assert package (github.com/stretchr/testify/assert) so the file compiles cleanly.pkg/k8sutil/k8sobjects/parse_test.go (1)
127-173: Don't lock malformed refs into the success contract.
These cases treat `::`, empty kind, and empty name as valid, but pkg/k8sutil/k8sobjects/parse.go:11 documents `ParseRef` as parsing `<kind>:<group/version>:[<namespace>/]<name>`. Keeping them as green-path tests will make future parser hardening look like a regression. Unless this leniency is intentional, I'd move these to fuzz-only coverage or flip them to `wantErr`.
Verify each finding against the current code and only fix it if needed. In `@pkg/k8sutil/k8sobjects/parse_test.go` around lines 127 - 173, The tests currently accept malformed refs as successful parses; update the cases "empty parts with colons", "empty kind", "empty name", and "namespace with empty name" in parse_test.go so they no longer assert a valid ObjectRef for ParseRef — either set them to expect an error (wantErr) or move them into fuzz-only coverage; ensure you reference the ParseRef function and ObjectRef type when changing the assertions so the test verifies ParseRef rejects these malformed inputs rather than treating them as green-path successes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@central/endpoints/legacy_spec_fuzz_test.go`:
- Around line 53-57: The nil guard before checking length is redundant: in the
block inside legacy_spec_fuzz_test.go remove the "cfg.Protocols != nil &&" part
and either check "len(cfg.Protocols) > 0" alone or simply range over
"cfg.Protocols" (for _, proto := range cfg.Protocols { _ = proto }) since
len(nil) == 0 and ranging a nil slice is safe; update the conditional/loop
around cfg.Protocols accordingly.
In `@pkg/auth/authproviders/idputil/state_test.go`:
- Around line 381-389: The test currently returns for any non-empty callbackURL
even when err != nil, masking regressions; change the callbackURL branch to
require that err is nil before proceeding: in the block that checks if
callbackURL != "" call require.NoError(t, err) (or assert.Nil(t, err)) then
assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#")) and
finally return; this ensures failures to accept valid localhost callback URLs
are caught (references: callbackURL, err, result, AuthorizeRoxctlClientState).
In `@pkg/booleanpolicy/fuzz_test.go`:
- Around line 34-45: The goroutine launched in each fuzz test (e.g., the one
using keyValueValueRegex and the local done channel) can leak when the timeout
case wins because the sender remains blocked; change the unbuffered done channel
creation to a 1-buffered channel (make(chan bool, 1)) in every fuzz function in
this file so the goroutine's send never blocks and no goroutines are leaked when
the timeout branch fires; update all ~22 occurrences (the done channel
declarations inside each fuzz test) consistently.
- Around line 868-873: The recover block in the deferred anonymous function is
empty which triggers SA9003; update the branch inside defer func() { if r :=
recover(); r != nil { ... } }() to explicitly consume the recovered value (e.g.,
add `_ = r`) or — if the test has access to a *testing.T — log it with
t.Logf("recovered panic: %v", r); modify the anonymous defer in fuzz_test.go
accordingly so the branch is non-empty and the analyzer is satisfied.
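A minimal sketch of a non-empty recover branch that satisfies SA9003 — `safeCall` is an illustrative wrapper, not code from the PR:

```go
package main

import "fmt"

// safeCall invokes fn and converts a panic into a returned value. The
// recover branch explicitly consumes r instead of leaving the block
// empty; in a test you would instead call t.Logf("recovered panic: %v", r).
func safeCall(fn func()) (recovered interface{}) {
	defer func() {
		if r := recover(); r != nil {
			recovered = r // non-empty branch: the analyzer is satisfied
		}
	}()
	fn()
	return nil
}

func main() {
	fmt.Println(safeCall(func() { panic("boom") }))
	fmt.Println(safeCall(func() {}))
}
```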
---
Nitpick comments:
In `@pkg/booleanpolicy/fuzz_test.go`:
- Around line 855-884: After successfully compiling the regex in FuzzCreateRegex
(the regexp.Compile call), exercise the compiled regex by calling its
MatchString (or Match) method with a simple sample input to ensure it doesn't
panic and behaves as expected; locate the compiled value returned by
regexp.Compile (the variable assigned from Compile) and add a minimal assertion
that re.MatchString("test") or another small, deterministic string runs without
error (returning or logging as appropriate) so the fuzz test covers both
compilation and a basic match path.
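The compile-then-probe oracle suggested above can be sketched as a small helper (hypothetical name `compileAndProbe`, deterministic probe input `"test"`):

```go
package main

import (
	"fmt"
	"regexp"
)

// compileAndProbe mirrors the suggested fuzz oracle: if compilation
// succeeds, also run a basic match so the test exercises both the
// compilation path and a simple match path without panicking.
func compileAndProbe(pattern string) (compiled, matched bool) {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return false, false
	}
	return true, re.MatchString("test") // small, deterministic probe
}

func main() {
	fmt.Println(compileAndProbe(`^te.*$`))
	fmt.Println(compileAndProbe(`[`)) // invalid pattern: compile fails
}
```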
In `@pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go`:
- Around line 157-171: Remove the assert.NotPanics wrappers inside the f.Fuzz
target and call ParseToken(token, ...) directly for each maxLeeway (0,
10*time.Second, 24*time.Hour) so panics escape to the fuzzer; also remove the
now-unused import of the assert package (github.com/stretchr/testify/assert) so
the file compiles cleanly.
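The point about letting panics escape can be illustrated with a stand-in parser — `parseToken` below is hypothetical, not the real `ParseToken` (which also decodes a protobuf and takes a leeway argument):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// parseToken is a stand-in for the real ParseToken: it returns an error
// on malformed input instead of panicking. In the fuzz target it should
// be called directly; wrapping it in assert.NotPanics would swallow a
// panic before the fuzzer can record it as a crasher.
func parseToken(token string) error {
	if _, err := base64.StdEncoding.DecodeString(token); err != nil {
		return err
	}
	return nil
}

func main() {
	// Direct calls: a panic here would surface to the fuzzer as a crash.
	fmt.Println(parseToken("aGVsbG8=") == nil) // valid base64
	fmt.Println(parseToken("!!!") != nil)      // malformed input errors
}
```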
In `@pkg/k8sutil/k8sobjects/parse_test.go`:
- Around line 127-173: The tests currently accept malformed refs as successful
parses; update the cases "empty parts with colons", "empty kind", "empty name",
and "namespace with empty name" in parse_test.go so they no longer assert a
valid ObjectRef for ParseRef — either set them to expect an error (wantErr) or
move them into fuzz-only coverage; ensure you reference the ParseRef function
and ObjectRef type when changing the assertions so the test verifies ParseRef
rejects these malformed inputs rather than treating them as green-path
successes.
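The wantErr-style table the prompt asks for can be sketched as below. `parseRef` is a simplified stand-in for the real `ParseRef`/`ObjectRef` pair, rejecting the malformed shapes listed above:

```go
package main

import (
	"fmt"
	"strings"
)

// parseRef is a stand-in for ParseRef that rejects malformed refs
// (empty kind, empty name, empty parts with colons) instead of
// treating them as green-path successes.
func parseRef(s string) (kind, namespace, name string, err error) {
	parts := strings.Split(s, ":")
	if len(parts) != 3 || parts[0] == "" || parts[2] == "" {
		return "", "", "", fmt.Errorf("malformed ref %q", s)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	cases := []struct {
		ref     string
		wantErr bool
	}{
		{"Deployment:default:nginx", false},
		{"::", true},                  // empty parts with colons
		{":default:nginx", true},      // empty kind
		{"Deployment:default:", true}, // empty name
	}
	for _, tc := range cases {
		_, _, _, err := parseRef(tc.ref)
		fmt.Println(tc.ref, (err != nil) == tc.wantErr)
	}
}
```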
In `@pkg/mitre/parser_fuzz_test.go`:
- Around line 11-120: The fuzz test only ensures no panic but loses regression
signal for the curated valid seeds; add a small table-style unit test alongside
the fuzz target that iterates the same curated seed byte slices (the ones passed
to f.Add) and calls UnmarshalAndExtractMitreAttackBundle(Enterprise,
[]Platform{Linux}, data), asserting that it returns no error and returns
expected non-nil/valid result where applicable; keep the existing Fuzz function
unchanged (panic guarantee) and implement the new test (e.g.,
TestUnmarshalAndExtractMitreAttackBundle_ValidSeeds) which references the same
seed literals so CI will fail if those known-good seeds start producing errors.
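The shape of such a seed-regression loop can be sketched as follows. `extractBundle` and the seed literal are placeholders, not the real `UnmarshalAndExtractMitreAttackBundle` or MITRE data:

```go
package main

import "fmt"

// extractBundle is a stand-in for UnmarshalAndExtractMitreAttackBundle:
// it errors on inputs it cannot decode. A table test over the same seed
// literals passed to f.Add restores the regression signal that the
// panic-only fuzz target loses.
func extractBundle(data []byte) error {
	if len(data) == 0 {
		return fmt.Errorf("empty bundle")
	}
	return nil
}

func main() {
	seeds := [][]byte{
		[]byte(`{"type":"bundle"}`), // placeholder seed, not real MITRE JSON
	}
	for i, seed := range seeds {
		if err := extractBundle(seed); err != nil {
			fmt.Printf("seed %d regressed: %v\n", i, err)
			return
		}
	}
	fmt.Println("all seeds ok")
}
```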
In `@pkg/netutil/endpoint_test.go`:
- Around line 193-213: The fuzz test's success checks are weak and unbounded;
tighten them by (1) rejecting oversized inputs before calling ParseEndpoint
(e.g., cap input length) to improve performance, (2) replace tautological checks
on non-empty zone/port with stronger assertions such as validating port is a
valid numeric port range when present and zone matches expected
percent-encoding/format, and (3) where FormatEndpoint is expected to round-trip,
assert that ParseEndpoint(FormatEndpoint(host, zone, port)) returns the same
host/zone/port for bracketed IPv6 and other formats FormatEndpoint can produce;
make these changes around the ParseEndpoint/FormatEndpoint call sites and the
host/zone/port variables referenced in the diff.
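One of the stronger assertions suggested above — validating that a non-empty port is numeric and in range — could look like this hypothetical oracle helper:

```go
package main

import (
	"fmt"
	"strconv"
)

// validPort is the tighter oracle the comment asks for: when
// ParseEndpoint returns a non-empty port, it should be numeric and
// within 1-65535 (named service ports are omitted here for brevity).
func validPort(port string) bool {
	n, err := strconv.Atoi(port)
	return err == nil && n >= 1 && n <= 65535
}

func main() {
	fmt.Println(validPort("8080"))  // true
	fmt.Println(validPort("0"))     // false: out of range
	fmt.Println(validPort("99999")) // false: out of range
	fmt.Println(validPort("abc"))   // false: not numeric
}
```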
In `@pkg/uuid/uuid_test.go`:
- Line 112: Replace the non-deterministic seed NewV4().String() used in the fuzz
seed f.Add(...) with a deterministic constant UUID string or a parsed UUID
constant (e.g., use a literal like "11111111-1111-1111-1111-111111111111" or
uuid.MustParse on a fixed value) so fuzz runs are reproducible; locate uses of
NewV4() inside f.Add(...) (and the other similar occurrence) and change them to
use the fixed constant instead of generating a random V4 UUID at test runtime.
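A minimal sketch of the deterministic-seed idea, using the suggested literal (the regex here is only a shape check, not the uuid package's parser):

```go
package main

import (
	"fmt"
	"regexp"
)

// A fixed seed keeps fuzz runs reproducible; generating NewV4() at test
// runtime would seed the corpus with a different value on every run.
const seedUUID = "11111111-1111-1111-1111-111111111111"

var uuidRe = regexp.MustCompile(
	`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`)

func main() {
	// In the fuzz test this constant would be passed to f.Add(seedUUID).
	fmt.Println(uuidRe.MatchString(seedUUID))
}
```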
In `@pkg/x509utils/fuzz_test.go`:
- Around line 167-181: The empty-chain invocation inside the fuzz loop is
redundant and wastes cycles; remove the constant call to
ParseCertificateChain([][]byte{}) from the f.Fuzz callback in
pkg/x509utils/fuzz_test.go and instead add a single non-fuzzed test or seed
corpus entry that calls ParseCertificateChain with an empty slice once. Keep the
fuzz callback (f.Fuzz) to only exercise varying data (the single DER and
multi-copy cases using the fuzzed data) and ensure the standalone test/seed
covers the empty-chain behavior.
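The split between a one-off empty-chain check and fuzzed varying data can be sketched with a stand-in — `parseChain` below is hypothetical and does not implement real DER parsing:

```go
package main

import "fmt"

// parseChain stands in for ParseCertificateChain: the empty-chain case
// is covered once in a plain test, while the fuzz callback only
// exercises varying data (single DER and multi-copy cases).
func parseChain(ders [][]byte) (int, error) {
	count := 0
	for _, der := range ders {
		if len(der) == 0 {
			return count, fmt.Errorf("empty DER block")
		}
		count++
	}
	return count, nil
}

func main() {
	// One-off empty-chain check (belongs outside the f.Fuzz loop).
	n, err := parseChain(nil)
	fmt.Println(n, err)
	// Fuzz-style varying data: multi-copy case.
	n, err = parseChain([][]byte{{0x30}, {0x30}})
	fmt.Println(n, err)
}
```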
In `@roxctl/common/download/parse_filename_fuzz_test.go`:
- Around line 99-103: The test currently treats any headerValue not starting
with the exact string "attachment; filename=" as a failure by asserting err !=
nil; update the check in the fuzz test so it doesn't require a strict prefix
match: remove the branch that expects an error for non-matching prefixes and
instead, when err == nil assert only a safe invariant such as filename != "" (or
other permissive correctness you're already testing) so the fuzz oracle remains
tolerant to improved parsing; locate and change the code around headerValue,
strings.HasPrefix(...), err, filename, and the t.Errorf call to implement this
weaker assertion.
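The permissive-oracle idea can be sketched like this; `parseFilename` is a simplified stand-in for the real Content-Disposition parser, and `checkOracle` shows asserting only safe invariants on success:

```go
package main

import (
	"fmt"
	"strings"
)

// parseFilename is a stand-in for the real parser in the download package.
func parseFilename(headerValue string) (string, error) {
	const prefix = "attachment; filename="
	if !strings.HasPrefix(headerValue, prefix) {
		return "", fmt.Errorf("unsupported header")
	}
	return strings.Trim(headerValue[len(prefix):], `"`), nil
}

// checkOracle is the tolerant invariant: errors are always acceptable
// for arbitrary input, and on success we assert only safety properties
// (non-empty, no path separators) rather than demanding an error for
// every non-matching prefix — so a future, more lenient parser still passes.
func checkOracle(headerValue string) bool {
	filename, err := parseFilename(headerValue)
	if err != nil {
		return true
	}
	return filename != "" && !strings.ContainsAny(filename, `/\`)
}

func main() {
	fmt.Println(checkOracle(`attachment; filename="report.zip"`))
	fmt.Println(checkOracle("garbage"))
}
```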
ℹ️ Review info
⚙️ Run configuration
Configuration used: Central YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 3e853c73-f994-4275-a3a1-51942186c461
📒 Files selected for processing (23)
central/endpoints/legacy_spec_fuzz_test.go
pkg/auth/authproviders/idputil/state_test.go
pkg/auth/authproviders/saml/entity_descriptors_fuzz_test.go
pkg/binenc/fuzz_test.go
pkg/booleanpolicy/fuzz_test.go
pkg/cvss/cvssv3/cvss_test.go
pkg/cvss/cvssv3/testdata/fuzz/FuzzParseCVSSV3/576241980908ec03
pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go
pkg/k8sutil/k8sobjects/parse_fuzz_test.go
pkg/k8sutil/k8sobjects/parse_test.go
pkg/mitre/parser_fuzz_test.go
pkg/net/addr_test.go
pkg/net/ip_port_test.go
pkg/netutil/endpoint_test.go
pkg/netutil/testdata/fuzz/FuzzParseEndpoint/a4057eb55d1db860
pkg/postgres/pgconfig/config_test.go
pkg/protocompat/time_test.go
pkg/search/parser/url_parser_fuzz_test.go
pkg/search/parser_test.go
pkg/uuid/uuid_test.go
pkg/x509utils/fuzz_test.go
roxctl/common/download/parse_filename_fuzz_test.go
roxctl/common/download/parse_filename_test.go
if cfg.Protocols != nil && len(cfg.Protocols) > 0 {
	for _, proto := range cfg.Protocols {
		_ = proto // Ensure we can read each protocol string
	}
}
🧩 Analysis chain
🏁 Script executed:
cat -n central/endpoints/legacy_spec_fuzz_test.go | head -70 | tail -30
Repository: stackrox/stackrox
Length of output: 1405
🏁 Script executed:
# Check if there's lint configuration in the repo
find . -maxdepth 2 -type f \( -name ".golangci.yml" -o -name ".golangci.yaml" -o -name "golangci.yml" \) 2>/dev/null | head -5
Repository: stackrox/stackrox
Length of output: 76
🏁 Script executed:
# Search for any staticcheck configuration or comments
rg "staticcheck|S1009" . -t go -A 2 -B 2
Repository: stackrox/stackrox
Length of output: 1224
🏁 Script executed:
# Read the golangci configuration
cat .golangci.yml
Repository: stackrox/stackrox
Length of output: 6155
🏁 Script executed:
# Search specifically for S1009 in staticcheck documentation or any references
rg "S1009" . -t go -t yml -t yaml
Repository: stackrox/stackrox
Length of output: 88
🏁 Script executed:
# Verify Go semantics: check if len(nilSlice) == 0 is actually true
python3 << 'EOF'
# Go language specification verification:
# len() of a nil slice, map, or channel is defined to be 0,
# so a separate nil check before a len() comparison is redundant.
print("""
Go Language Specification for len():
- For slices: len(nilSlice) == 0 (defined behavior)
- For arrays: arrays cannot be nil; len is the fixed array length
- For strings: strings cannot be nil; len("") == 0
- For maps: len(nilMap) == 0 (defined behavior)
- For channels: len(nilChan) == 0 (defined behavior)
Therefore: cfg.Protocols != nil check is redundant if cfg.Protocols is a slice
and the intent is just to check if it has elements.
""")
EOF
Repository: stackrox/stackrox
Length of output: 445
🏁 Script executed:
# Check the actual type definition of cfg.Protocols
rg "type.*Config.*struct|Protocols" central/endpoints -A 5 -B 2 | head -40
Repository: stackrox/stackrox
Length of output: 2680
🏁 Script executed:
# Search for any CI configuration or test results that might show lint failures
find . -maxdepth 3 -type f \( -name "*.yml" -o -name "*.yaml" -o -name "Makefile" -o -name ".github" \) 2>/dev/null | head -10
Repository: stackrox/stackrox
Length of output: 442
🏁 Script executed:
# Search for similar patterns to see if redundant nil checks are generally flagged/fixed in codebase
rg "if .+ != nil && len\(" . -t go | head -20
Repository: stackrox/stackrox
Length of output: 2375
Remove the redundant nil guard.
In Go, len(nilSlice) safely returns 0, so cfg.Protocols != nil is logically unnecessary before the len() check. While the pattern is defensible as defensive programming, it can be simplified.
Suggested change
- if cfg.Protocols != nil && len(cfg.Protocols) > 0 {
+ if len(cfg.Protocols) > 0 {
🧰 Tools
🪛 GitHub Actions: Style
[error] 53-53: golangci-lint (staticcheck) S1009: should omit nil check; len() for nil slices is defined as zero
🪛 GitHub Check: golangci-lint
[failure] 53-53:
S1009: should omit nil check; len() for nil slices is defined as zero (staticcheck)
// 3. If callbackURL is set, result should be prefixed with AuthorizeRoxctlClientState
// but only if the URL is valid and localhost
if callbackURL != "" {
	// May error if URL is invalid or not localhost
	if err == nil {
		assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#"))
	}
	return
}
Callback URL fuzz branch currently masks regressions.
At Line 383, any non-empty callbackURL returns early even when err != nil, so a regression that starts rejecting valid localhost callback URLs can pass unnoticed.
Proposed tightening of assertions
import (
+ "net/url"
"strings"
"testing"
"github.com/stretchr/testify/assert"
)
@@
// 3. If callbackURL is set, result should be prefixed with AuthorizeRoxctlClientState
- // but only if the URL is valid and localhost
+ // only for valid localhost URLs
if callbackURL != "" {
- // May error if URL is invalid or not localhost
- if err == nil {
- assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#"))
- }
+ u, parseErr := url.Parse(callbackURL)
+ isLocalhost := parseErr == nil && u.Hostname() == "localhost"
+ if isLocalhost {
+ assert.NoError(t, err)
+ assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#"))
+ } else {
+ assert.Error(t, err)
+ }
return
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// 3. If callbackURL is set, result should be prefixed with AuthorizeRoxctlClientState
// only for valid localhost URLs
if callbackURL != "" {
	u, parseErr := url.Parse(callbackURL)
	isLocalhost := parseErr == nil && u.Hostname() == "localhost"
	if isLocalhost {
		assert.NoError(t, err)
		assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#"))
	} else {
		assert.Error(t, err)
	}
	return
}
Description
Add 54 native Go fuzz test functions across 17 packages, covering security-sensitive parsers, network input handling, data encoding, and policy validation regex patterns.
Motivation: StackRox previously had 1 fuzz test. As a security platform processing untrusted input from K8s clusters, IdPs, and user APIs, fuzz testing is essential for hardening input parsers against panics and edge cases.
Bug found:
ParseCVSSV3("CVSS:0") panics with "slice bounds out of range" in upstream cvss3.VectorFromString(). Crasher corpus committed. This is a potential DoS vector.
Edge cases documented:
- pkg/net: IPv4-mapped IPv6 CIDRs report inconsistent prefix lengths
- pkg/netutil: FormatEndpoint/ParseEndpoint roundtrip doesn't hold for all inputs
User-facing documentation
Testing and quality
Automated testing
How I validated my change
- Ran fuzz tests with -fuzztime=5s to -fuzztime=10s to verify correct operation
- pkg/booleanpolicy