
test(fuzz): add comprehensive Go fuzz tests#19686

Draft
sthadka wants to merge 20 commits into master from feat/fuzz

Conversation


@sthadka sthadka commented Mar 30, 2026

Description

Add 54 native Go fuzz test functions across 17 packages, covering security-sensitive parsers, network input handling, data encoding, and policy validation regex patterns.

Motivation: StackRox previously had 1 fuzz test. As a security platform processing untrusted input from K8s clusters, IdPs, and user APIs, fuzz testing is essential for hardening input parsers against panics and edge cases.

Bug found: ParseCVSSV3("CVSS:0") panics with slice bounds out of range in upstream cvss3.VectorFromString(). Crasher corpus committed. This is a potential DoS vector.

Edge cases documented:

  • pkg/net: IPv4-mapped IPv6 CIDRs report inconsistent prefix lengths
  • pkg/netutil: FormatEndpoint/ParseEndpoint roundtrip doesn't hold for all inputs
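
For context, the native Go fuzz targets described here generally follow the same shape: seed a small corpus, then assert only safety properties on arbitrary input. A minimal sketch (the `splitKV` parser is a hypothetical stand-in, not code from this PR):

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// splitKV is a hypothetical stand-in parser; the real targets in this PR
// parse search queries, certificates, tokens, CVSS vectors, and so on.
func splitKV(s string) (key, value string, ok bool) {
	i := strings.IndexByte(s, '=')
	if i <= 0 {
		return "", "", false
	}
	return s[:i], s[i+1:], true
}

// FuzzSplitKV shows the general shape of a native Go fuzz target:
// seed a small corpus, then check a safety invariant on every input.
func FuzzSplitKV(f *testing.F) {
	f.Add("a=b")       // valid
	f.Add("=")         // degenerate
	f.Add("no_equals") // invalid
	f.Fuzz(func(t *testing.T, s string) {
		key, value, ok := splitKV(s)
		if ok && key == "" {
			t.Errorf("ok result with empty key for %q (value %q)", s, value)
		}
	})
}

func main() {
	key, value, ok := splitKV("severity=high")
	fmt.Println(key, value, ok)
}
```

In a real `_test.go` file such a target runs with `go test -fuzz=FuzzSplitKV -fuzztime=10s`; any panic found during fuzzing is minimized and written to `testdata/fuzz` as a crasher corpus entry.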

User-facing documentation

Testing and quality

  • The change is production ready: it is GA, or the functionality is gated by a feature flag
  • CI results are inspected

Automated testing

  • added unit tests
  • added e2e tests
  • added regression tests
  • added compatibility tests
  • modified existing tests

How I validated my change

  • All 55 fuzz functions compile and pass seed corpus execution
  • Each fuzz target was run with -fuzztime=5s to -fuzztime=10s to verify correct operation
  • 1 real panic bug found (CVSS), 2 behavioral edge cases documented (net, netutil)
  • No ReDoS found across 22 regex patterns in pkg/booleanpolicy

sthadka added 18 commits March 30, 2026 12:24
Add FuzzParseQuery, FuzzParseQueryForAutocomplete, and
FuzzGetValueAndModifiersFromString to exercise the search query parser
with arbitrary input. These functions handle user-supplied search
strings with modifiers, quoted values, and complex delimiters.

AI-assisted
Add FuzzConvertPEMToDERs, FuzzConvertPEMTox509Certs, and
FuzzParseCertificateChain to test certificate parsing with malformed
PEM/DER data, truncated ASN.1, and non-certificate PEM types.

AI-assisted
Add FuzzParseToken to exercise the service certificate token parser
with arbitrary base64+protobuf input, ensuring no panics on malformed
tokens.

AI-assisted
Add FuzzEntityDescriptorsUnmarshal to test custom XML unmarshaling of
SAML EntityDescriptor/EntitiesDescriptor with arbitrary XML input,
including namespace variations and malformed documents.

AI-assisted
Add FuzzSplitState, FuzzParseClientState, and FuzzAttachStateOrEmpty
to exercise OAuth state parameter parsing with arbitrary strings,
covering mode-specific prefixes and delimiter handling.

AI-assisted
Add FuzzParseEndpoint to test endpoint parsing with IPv4, IPv6, zones,
and edge cases. Includes corpus entry for "[:]". Documents that
FormatEndpoint->ParseEndpoint roundtrip doesn't hold for all inputs
(e.g. bare IPv6 without brackets).

AI-assisted
Add FuzzParseIP, FuzzIPFromBytes, FuzzIPNetworkFromCIDR, and
FuzzParseIPPortPair. Documents known edge case: IPv4-mapped IPv6 CIDRs
(e.g. ::ffff:192.168.0.0/96) report as IPv4 family but retain IPv6
prefix lengths.

AI-assisted
Add FuzzParseCVSSV3 to test CVSS v3 vector string parsing. Discovered
panic bug: ParseCVSSV3("CVSS:0") triggers "slice bounds out of range"
in upstream cvss3.VectorFromString(). Includes crasher corpus entry.
This is a potential DoS vector if untrusted CVSS strings reach this
function.

AI-assisted
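
When `go test -fuzz` finds a crasher, it writes a corpus entry under `testdata/fuzz/<FuzzName>/<hash>` in Go's standard corpus encoding. The committed entry for this bug presumably looks roughly like:

```
go test fuzz v1
string("CVSS:0")
```

Re-running `go test -run=FuzzParseCVSSV3` replays committed corpus entries as regression tests even without the `-fuzz` flag.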
Add 22 fuzz tests covering every regex pattern in value_regex.go:
KeyValue, Boolean, String, Integer, ComparatorDecimal, Environment
Variable, Dockerfile, Capabilities, RBAC, PortExposure, IPv4, IPv6,
Severity, KubernetesName, FileOperation, SignatureIntegration,
AuditEvent, KubernetesResource, MountPropagation, SeccompProfile,
and CreateRegex. Tests for ReDoS and panics.

AI-assisted
Add FuzzParseSource to test PostgreSQL connection string parsing with
arbitrary input including special characters in passwords and malformed
key=value pairs.

AI-assisted
Add FuzzDecodeBytesList and FuzzBinaryEncRoundtrip. The roundtrip test
verifies encode-decode consistency: EncodeBytesList(a,b,c) followed by
DecodeBytesList must yield [a,b,c].

AI-assisted
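
A roundtrip property of this kind can be sketched with a toy length-prefixed encoding (a hypothetical stand-in for EncodeBytesList/DecodeBytesList; the real binenc wire format is not shown in this PR):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// encodeBytesList length-prefixes each element (hypothetical format).
func encodeBytesList(lists ...[]byte) []byte {
	var out []byte
	for _, b := range lists {
		var lenBuf [4]byte
		binary.BigEndian.PutUint32(lenBuf[:], uint32(len(b)))
		out = append(out, lenBuf[:]...)
		out = append(out, b...)
	}
	return out
}

// decodeBytesList reverses encodeBytesList, erroring on truncation
// instead of panicking -- the safety property a fuzz target checks.
func decodeBytesList(data []byte) ([][]byte, error) {
	var out [][]byte
	for len(data) > 0 {
		if len(data) < 4 {
			return nil, errors.New("truncated length prefix")
		}
		n := binary.BigEndian.Uint32(data[:4])
		data = data[4:]
		if uint32(len(data)) < n {
			return nil, errors.New("truncated payload")
		}
		out = append(out, data[:n])
		data = data[n:]
	}
	return out, nil
}

func main() {
	enc := encodeBytesList([]byte("a"), []byte("bc"))
	dec, err := decodeBytesList(enc)
	fmt.Println(err == nil, len(dec), string(dec[0]), string(dec[1]))
}
```

The roundtrip fuzz target then asserts that decode(encode(a, b, c)) yields exactly [a, b, c] for arbitrary byte slices.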
Add FuzzFromString, FuzzUnmarshalBinary, and FuzzUnmarshalText to test
UUID parsing across standard, braced, and URN formats with roundtrip
consistency checks.

AI-assisted
Add FuzzParseRFC3339NanoTimestamp to exercise timestamp parsing with
arbitrary strings including edge cases around timezones, nanosecond
precision, and malformed formats.

AI-assisted
Add FuzzParseRef to test K8s object reference parsing with various
formats, unicode, and edge cases. Add table-driven unit tests
covering valid and invalid inputs.

AI-assisted
Add FuzzUnmarshalMitreAttackBundle and FuzzExtractMitreAttackBundle to
test JSON parsing of external MITRE ATT&CK data with malformed,
truncated, and structurally invalid input.

AI-assisted
Add FuzzParseFilenameFromHeader and unit tests for extracting filenames
from HTTP Content-Disposition headers, covering path traversal
attempts, quoted values, and RFC 5987 encoded filenames.

AI-assisted
Add FuzzParseLegacySpec to test parsing of [type@]endpoint
specifications with arbitrary strings including multiple endpoints,
missing types, and malformed separators.

AI-assisted
Add FuzzParseURLQuery to test HTTP URL parameter parsing into query
objects with arbitrary query strings.

AI-assisted
@openshift-ci

openshift-ci bot commented Mar 30, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@gitguardian

gitguardian bot commented Mar 30, 2026

⚠️ GitGuardian has uncovered 4 secrets following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secrets in your pull request
GitGuardian id | Status    | Secret                      | Commit  | Filename
29359836       | Triggered | Generic Database Assignment | 6dce922 | pkg/postgres/pgconfig/config_test.go
29359837       | Triggered | Generic Password            | 6dce922 | pkg/postgres/pgconfig/config_test.go
29360254       | Triggered | Generic Password            | f8c2863 | pkg/postgres/pgconfig/config_test.go
29360253       | Triggered | Generic Database Assignment | f8c2863 | pkg/postgres/pgconfig/config_test.go
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secrets safely.
  3. Revoke and rotate these secrets.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.



@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • Many of the fuzz targets wrap the code under test in defer/recover or assert.NotPanics, which prevents the Go fuzzing engine from observing real panics and producing crashers; consider removing these panic guards so that panics are surfaced to the fuzzer instead of being treated as test failures.
  • Several fuzz tests assert very strict behavioral or formatting invariants (for example, round‑tripping vectorStr exactly in FuzzParseCVSSV3, or assuming specific non‑empty fields after successful parses) that may not be guaranteed by the APIs and could make the fuzzers brittle against legitimate implementation changes; you may want to relax these expectations to focus on safety properties (no panics, basic validity ranges) rather than exact string or struct equality.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Many of the fuzz targets wrap the code under test in `defer`/`recover` or `assert.NotPanics`, which prevents the Go fuzzing engine from observing real panics and producing crashers; consider removing these panic guards so that panics are surfaced to the fuzzer instead of being treated as test failures.
- Several fuzz tests assert very strict behavioral or formatting invariants (for example, round‑tripping `vectorStr` exactly in `FuzzParseCVSSV3`, or assuming specific non‑empty fields after successful parses) that may not be guaranteed by the APIs and could make the fuzzers brittle against legitimate implementation changes; you may want to relax these expectations to focus on safety properties (no panics, basic validity ranges) rather than exact string or struct equality.

## Individual Comments

### Comment 1
<location path="pkg/booleanpolicy/fuzz_test.go" line_range="10" />
<code_context>
+)
+
+// FuzzKeyValueRegex tests the keyValueValueRegex for ReDoS and correctness
+func FuzzKeyValueRegex(f *testing.F) {
+	// Seed with known valid values
+	f.Add("a=b")
</code_context>
<issue_to_address>
**suggestion (performance):** Per-input goroutine + 100ms timeout in fuzz tests is likely to be very slow and flaky under fuzzing workloads

Starting a goroutine and `time.After(100 * time.Millisecond)` per fuzz input will make this target very slow and prone to flakes once fuzzing ramps up concurrency. On contended CI machines, even normal inputs can exceed 100ms just due to scheduling, causing spurious “took too long” failures unrelated to actual ReDoS.

Please consider alternatives such as:
- Using a tighter, CPU-time–based guard in benchmarks (outside `testing.F`), or
- Using a reusable `context` + matcher wrapper with a very small deadline, or
- Moving ReDoS detection into targeted benchmarks and keeping the fuzz target for panics/correctness only.

If you keep a timeout here, at least share a worker goroutine per fuzz instance (not per input) and/or relax the timeout to reduce flakiness and resource usage.

Suggested implementation:

```golang
	"regexp"
	"testing"

```

From the partial file, I can’t see the full body of `FuzzKeyValueRegex`, but your comment indicates it currently does something like:

- For each fuzz input, starts a new goroutine.
- Uses `time.After(100 * time.Millisecond)` or similar to enforce a wall-clock timeout to detect potential ReDoS.
- Probably calls `keyValueValueRegex.MatchString` (or similar) inside that goroutine.

To implement your suggestion cleanly, I recommend:

1. **Keep the fuzz test focused on correctness / panics only:**
   - Rewrite `FuzzKeyValueRegex` so that it:
     - Seeds with your known values (as already shown).
     - In `f.Fuzz(func(t *testing.T, s string) { ... })`, just runs the regex match directly:
       ```go
       func FuzzKeyValueRegex(f *testing.F) {
           // Seed with known valid values
           f.Add("a=b")
           f.Add("key=value")
           f.Add("1=1")
           f.Add(`.*\d=.*`)

           // Seed with known invalid values
           f.Add("")
           f.Add("=")
           f.Add("=a=b")
           f.Add("no_equals")

           f.Fuzz(func(t *testing.T, s string) {
               // core operation: must not panic
               _ = keyValueValueRegex.MatchString(s)
           })
       }
       ```
     - Remove any goroutine + `time.After` logic from the fuzz function.

2. **Move ReDoS/speed checks into benchmarks:**
   - In the same file, add a benchmark that explicitly probes potentially pathological inputs and measures performance, without per-input goroutine overhead:
     ```go
     func BenchmarkKeyValueRegexReDoS(b *testing.B) {
         candidates := []string{
             "a=b",
             "key=value",
             "1=1",
             strings.Repeat("a", 1<<10) + "=" + strings.Repeat("b", 1<<10),
             // Add any known-evil patterns you’re worried about here.
         }

         b.ResetTimer()
         for i := 0; i < b.N; i++ {
             s := candidates[i%len(candidates)]
             if !keyValueValueRegex.MatchString(s) {
                 // We don't care about correctness here, just avoid compiler
                 // optimizing away the call.
                 b.Fatalf("unexpected no-match for %q", s)
             }
         }
     }
     ```
   - This moves performance / ReDoS detection out of fuzzing and into benchmarks as suggested.

3. **If you decide to keep a timeout-based guard anyway:**
   - Instead of per-input goroutines, you can:
     - Use a single worker goroutine per fuzz instance that processes inputs sent over a channel.
     - Inside the worker, apply a *very small* per-input deadline using a `context.Context` with `context.WithTimeout`, but **not** per-fuzz-input goroutine creation around `MatchString`.
     - However, this is more complex and still susceptible to flakiness; the benchmark approach above is simpler and more robust.

Because I only see the beginning of `FuzzKeyValueRegex`, the maintainer will need to:
- Remove the existing goroutine/timeout logic inside `FuzzKeyValueRegex` and replace it with the simpler direct `MatchString` call as shown.
- Add the new `BenchmarkKeyValueRegexReDoS` (and import `strings` if you use the example benchmark).
- Ensure any now-unused imports (like `time`) are removed; I’ve provided an edit for `time`, but do the same for any others.
</issue_to_address>

@rhacs-bot
Contributor

rhacs-bot commented Mar 30, 2026

Images are ready for the commit at f8c2863.

To use with deploy scripts, first export MAIN_IMAGE_TAG=4.11.x-493-gf8c286327b.

sthadka added 2 commits March 30, 2026 15:17
- Remove defer/recover wrappers from all fuzz tests so the Go fuzzing
  engine can observe panics and produce crasher corpus entries. The
  CVSS test retains its recover since the upstream panic is known.
- Remove goroutine+timeout ReDoS detection from all 22 booleanpolicy
  fuzz tests. Go's regexp uses a linear-time engine making ReDoS
  impossible; the timeout added flakiness for zero value.
- Remove strict invariant assertions (roundtrip equality, field
  non-emptiness, modifier range checks) that made tests brittle.
  Fuzz tests now focus on the safety property: no panics.
- Remove unused imports (time, assert, require) where applicable.

Addresses feedback from sourcery-ai review on PR #19686.

AI-assisted
Replace password=secret and password=p@ssw0rd! with clearly fake
values (testonly, t3st!) in pgconfig test fixtures to avoid
secret detection false positives.

AI-assisted
@coderabbitai

coderabbitai bot commented Mar 30, 2026

No actionable comments were generated in the recent review. 🎉


📥 Commits

Reviewing files that changed from the base of the PR and between 7499bcb and f8c2863.

📒 Files selected for processing (6)
  • pkg/booleanpolicy/fuzz_test.go
  • pkg/net/addr_test.go
  • pkg/net/ip_port_test.go
  • pkg/postgres/pgconfig/config_test.go
  • pkg/search/parser_test.go
  • pkg/uuid/uuid_test.go
✅ Files skipped from review due to trivial changes (2)
  • pkg/net/addr_test.go
  • pkg/search/parser_test.go
🚧 Files skipped from review as they are similar to previous changes (3)
  • pkg/uuid/uuid_test.go
  • pkg/net/ip_port_test.go
  • pkg/booleanpolicy/fuzz_test.go

📝 Walkthrough

Summary by CodeRabbit

  • Tests
    • Added extensive fuzz testing across many packages to improve robustness against malformed and edge-case inputs.
    • Added new unit tests validating parsing, parsing round-trips, and error handling for several configuration and input-parsing routines.
    • Tests emphasize panic-safety, basic invariants, and round-trip consistency to reduce regressions.

Walkthrough

Adds extensive fuzz tests and targeted unit tests across multiple packages to exercise parsing, unmarshaling, encoding, and conversion routines against valid, malformed, and edge-case inputs, primarily to ensure functions do not panic and to exercise returned structures.

Changes

Cohort / File(s) Summary
Authentication & SAML
pkg/auth/authproviders/idputil/state_test.go, pkg/auth/authproviders/saml/entity_descriptors_fuzz_test.go
Added unit and fuzz tests for state helpers and SAML entity-descriptor XML unmarshaling; seeds include valid, edge-case, and malformed inputs; tests assert invariants and panic-safety.
Service Certificate Token
pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go
Fuzz test for ParseToken with diverse seed tokens (valid cert-based token, malformed base64/proto, corrupted bytes) to ensure panic-safety across leeway values.
Binary Encoding & UUIDs
pkg/binenc/fuzz_test.go, pkg/uuid/uuid_test.go
Fuzz and round-trip tests for byte-list encoding/decoding and UUID string/binary/text marshal/unmarshal with seeded valid and malformed inputs.
X.509 / Certificate Utilities
pkg/x509utils/fuzz_test.go
Fuzz tests for PEM<->DER conversions and certificate chain parsing using valid fixtures and many malformed cases; focused on panic-safety.
Boolean Policy Regexes
pkg/booleanpolicy/fuzz_test.go
Added many fuzz targets exercising exported regex matchers and dynamic regex compilation with varied pattern seeds.
CVSS Parsing
pkg/cvss/cvssv3/cvss_test.go, pkg/cvss/cvssv3/testdata/fuzz/FuzzParseCVSSV3/576241980908ec03
Fuzzer for CVSS v3 vector parsing with seeded valid/invalid vectors and a minimal corpus entry (CVSS:0); asserts parsing/score invariants on success and panic-safety.
Kubernetes & Legacy Spec Parsing
central/endpoints/legacy_spec_fuzz_test.go, pkg/k8sutil/k8sobjects/parse_fuzz_test.go, pkg/k8sutil/k8sobjects/parse_test.go
Added fuzz test for legacy spec parsing and unit + fuzz tests for ParseRef covering namespaced/cluster-scoped/custom-resource forms and many malformed inputs.
MITRE ATT&CK Parsing
pkg/mitre/parser_fuzz_test.go
Two fuzz tests for unmarshaling and extracting MITRE ATT&CK bundles from JSON, covering minimal valid bundles and many malformed/edge-case shapes; ensures panic-safety.
Networking & Endpoints
pkg/net/addr_test.go, pkg/net/ip_port_test.go, pkg/netutil/endpoint_test.go, pkg/netutil/testdata/fuzz/FuzzParseEndpoint/a4057eb55d1db860
Fuzzers for IP parsing, IP:port pair parsing, and endpoint parsing (including bracketed IPv6/zone forms) with corpus seed ([:]); tests exercise accessors and ensure no panics.
Postgres Config Parsing
pkg/postgres/pgconfig/config_test.go
Table-driven tests and a fuzzer for ParseSource covering typical key=value strings, passwords with =, whitespace handling, and malformed inputs; iterates returned map to ensure safe access.
Time / Protobuf Compatibility
pkg/protocompat/time_test.go
Fuzz test for parsing RFC3339Nano timestamps with round-trip validation and CheckValid assertions on successful parses.
Search & URL Query Parsing
pkg/search/parser/url_parser_fuzz_test.go, pkg/search/parser_test.go
Fuzz tests for URL query parsing and search query parsing functions; skip unparseable inputs and ensure no panics.
Binary/Parser Utilities
pkg/netutil/..., pkg/binenc/..., pkg/search/...
Miscellaneous fuzz additions across parsing/encoding utilities to increase robustness checks (see files above for specifics).
Download Filename Parsing
roxctl/common/download/parse_filename_fuzz_test.go, roxctl/common/download/parse_filename_test.go
Unit and fuzz tests for Content-Disposition filename extraction (including RFC2231 and many malformed cases) with assertions on error types and returned filename safety (no surrounding quotes).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 26.87%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Title check ✅ — The title 'test(fuzz): add comprehensive Go fuzz tests' is clear, concise, and accurately describes the primary change: adding fuzz testing capabilities across the codebase.
  • Description check ✅ — The description provides comprehensive details about the PR objectives, motivation, findings, and validation approach; it follows the template structure with required sections completed, including CHANGELOG confirmation, automated testing checkboxes, and detailed validation notes.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (8)
pkg/x509utils/fuzz_test.go (1)

167-181: Minor: Redundant empty chain test inside fuzz callback.

Line 177 calls ParseCertificateChain([][]byte{}) on every fuzz iteration with identical input. This doesn't exercise varying fuzzed data and adds unnecessary overhead. Consider moving this constant test to seed corpus or test it once outside the fuzz loop.

Suggested refactor
 	f.Fuzz(func(t *testing.T, data []byte) {
 		// Test with single DER certificate
 		chain := [][]byte{data}
 		_, _ = ParseCertificateChain(chain)

 		// Test with multiple copies of the same data
 		multiChain := [][]byte{data, data, data}
 		_, _ = ParseCertificateChain(multiChain)
-
-		// Test with empty chain
-		_, _ = ParseCertificateChain([][]byte{})
-
 		// Note: We don't assert on errors since malformed input is expected to error
 		// The key is that the function should never panic
 	})
+
+	// Test empty chain once (constant input, no need to repeat in fuzz loop)
+	_, _ = ParseCertificateChain([][]byte{})
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/x509utils/fuzz_test.go` around lines 167 - 181, The empty-chain
invocation inside the fuzz loop is redundant and wastes cycles; remove the
constant call to ParseCertificateChain([][]byte{}) from the f.Fuzz callback in
pkg/x509utils/fuzz_test.go and instead add a single non-fuzzed test or seed
corpus entry that calls ParseCertificateChain with an empty slice once. Keep the
fuzz callback (f.Fuzz) to only exercise varying data (the single DER and
multi-copy cases using the fuzzed data) and ensure the standalone test/seed
covers the empty-chain behavior.
pkg/booleanpolicy/fuzz_test.go (1)

855-884: Consider adding a basic match test after successful compilation.

FuzzCreateRegex only tests compilation but doesn't exercise the compiled regex with any input. A minimal match test would increase coverage:

Optional enhancement
 		// Attempt to compile the pattern
-		_, err := regexp.Compile("((?m)^" + pattern + "$)")
+		re, err := regexp.Compile("((?m)^" + pattern + "$)")
 		if err != nil {
 			// Invalid pattern is expected for fuzz inputs
 			return
 		}
 
-		// If compilation succeeds, the test passes
+		// Basic exercise of the compiled regex
+		_ = re.MatchString("test")
 	})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/booleanpolicy/fuzz_test.go` around lines 855 - 884, After successfully
compiling the regex in FuzzCreateRegex (the regexp.Compile call), exercise the
compiled regex by calling its MatchString (or Match) method with a simple sample
input to ensure it doesn't panic and behaves as expected; locate the compiled
value returned by regexp.Compile (the variable assigned from Compile) and add a
minimal assertion that re.MatchString("test") or another small, deterministic
string runs without error (returning or logging as appropriate) so the fuzz test
covers both compilation and a basic match path.
roxctl/common/download/parse_filename_fuzz_test.go (1)

99-103: Avoid locking the fuzz oracle to current strict prefix parsing.

At Line 99-Line 103, requiring an error for anything not starting with exact attachment; filename= makes the fuzz test brittle if parser behavior is legitimately improved (e.g., tolerant casing/spacing). Prefer safety-oriented invariants here and keep exact-format expectations in unit tests.

♻️ Suggested adjustment
-		// Verify the expected prefix handling
-		if headerValue != "" && !strings.HasPrefix(headerValue, "attachment; filename=") {
-			if err == nil {
-				t.Errorf("expected error for header %q without expected prefix, got filename %q", headerValue, filename)
-			}
-		}
+		// Keep fuzz assertions focused on safety/type invariants.
+		// Exact header-shape behavior is better enforced in table-driven unit tests.

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@roxctl/common/download/parse_filename_fuzz_test.go` around lines 99 - 103,
The test currently treats any headerValue not starting with the exact string
"attachment; filename=" as a failure by asserting err != nil; update the check
in the fuzz test so it doesn't require a strict prefix match: remove the branch
that expects an error for non-matching prefixes and instead, when err == nil
assert only a safe invariant such as filename != "" (or other permissive
correctness you're already testing) so the fuzz oracle remains tolerant to
improved parsing; locate and change the code around headerValue,
strings.HasPrefix(...), err, filename, and the t.Errorf call to implement this
weaker assertion.
pkg/uuid/uuid_test.go (1)

112-112: Use deterministic seed values in fuzz tests.

Using NewV4() in f.Add(...) makes seed inputs vary per run, which hurts reproducibility and triage consistency for fuzz failures.

♻️ Proposed fix
 func FuzzFromString(f *testing.F) {
@@
-	f.Add(NewV4().String())                                  // random valid UUID
+	f.Add("123e4567-e89b-12d3-a456-426614174000")           // fixed valid UUID
@@
 func FuzzUnmarshalBinary(f *testing.F) {
@@
-	f.Add(NewV4().Bytes())   // random UUID
+	f.Add([]byte{0x12, 0x3e, 0x45, 0x67, 0xe8, 0x9b, 0x12, 0xd3, 0xa4, 0x56, 0x42, 0x66, 0x14, 0x17, 0x40, 0x00}) // fixed UUID bytes

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

Also applies to: 148-148

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/uuid/uuid_test.go` at line 112, Replace the non-deterministic seed
NewV4().String() used in the fuzz seed f.Add(...) with a deterministic constant
UUID string or a parsed UUID constant (e.g., use a literal like
"11111111-1111-1111-1111-111111111111" or uuid.MustParse on a fixed value) so
fuzz runs are reproducible; locate uses of NewV4() inside f.Add(...) (and the
other similar occurrence) and change them to use the fixed constant instead of
generating a random V4 UUID at test runtime.
pkg/netutil/endpoint_test.go (1)

193-213: Strengthen fuzz oracle and bound input size for better signal/perf.

The current zone/port checks are tautological, and the success path only checks “no panic.” This can miss parser regressions while spending cycles on oversized mutations.

♻️ Proposed refactor
 f.Fuzz(func(t *testing.T, endpoint string) {
+    if len(endpoint) > 4096 {
+        t.Skip()
+    }
+
     // The primary goal is to ensure ParseEndpoint never panics,
     // regardless of input. We don't assert correctness of the parse,
     // just that it either succeeds or returns an error gracefully.
     host, zone, port, err := ParseEndpoint(endpoint)

     // If parsing succeeded, verify basic invariants
     if err == nil {
         // Host must not be empty on success
         assert.NotEmpty(t, host, "host should not be empty when parsing succeeds")

-        // If zone is present, it must not be empty
-        if zone != "" {
-            assert.NotEmpty(t, zone, "zone should not be empty if present")
-        }
-
-        // If port is present, it must not be empty
-        if port != "" {
-            assert.NotEmpty(t, port, "port should not be empty if present")
-        }
-
-        // NOTE: Round-trip (FormatEndpoint -> ParseEndpoint) doesn't always
-        // hold because ParseEndpoint accepts formats that FormatEndpoint
-        // doesn't produce (e.g. bare IPv6 without brackets). We only
-        // verify FormatEndpoint doesn't panic.
-        _ = FormatEndpoint(host, zone, port)
+        // Canonical round-trip oracle: parse -> format -> parse.
+        formatted := FormatEndpoint(host, zone, port)
+        rtHost, rtZone, rtPort, rtErr := ParseEndpoint(formatted)
+        assert.NoError(t, rtErr)
+        assert.Equal(t, host, rtHost)
+        assert.Equal(t, zone, rtZone)
+        assert.Equal(t, port, rtPort)
     }
 })

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/netutil/endpoint_test.go` around lines 193 - 213, The fuzz test's success
checks are weak and unbounded; tighten them by (1) rejecting oversized inputs
before calling ParseEndpoint (e.g., cap input length) to improve performance,
(2) replace tautological checks on non-empty zone/port with stronger assertions
such as validating port is a valid numeric port range when present and zone
matches expected percent-encoding/format, and (3) where FormatEndpoint is
expected to round-trip, assert that ParseEndpoint(FormatEndpoint(host, zone,
port)) returns the same host/zone/port for bracketed IPv6 and other formats
FormatEndpoint can produce; make these changes around the
ParseEndpoint/FormatEndpoint call sites and the host/zone/port variables
referenced in the diff.
pkg/mitre/parser_fuzz_test.go (1)

11-120: Preserve the curated “valid” seeds with a semantic assertion.

These seeds are described as minimal valid bundles, but the target only checks that parsing does not panic. If UnmarshalAndExtractMitreAttackBundle starts returning an error for every valid seed, this fuzz test still stays green and the useful regression signal disappears. Please keep the panic-oriented fuzz target, but add a small table test (or exact-seed assertions) that expects success for the known-good cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/mitre/parser_fuzz_test.go` around lines 11 - 120, The fuzz test only
ensures no panic but loses regression signal for the curated valid seeds; add a
small table-style unit test alongside the fuzz target that iterates the same
curated seed byte slices (the ones passed to f.Add) and calls
UnmarshalAndExtractMitreAttackBundle(Enterprise, []Platform{Linux}, data),
asserting that it returns no error and returns expected non-nil/valid result
where applicable; keep the existing Fuzz function unchanged (panic guarantee)
and implement the new test (e.g.,
TestUnmarshalAndExtractMitreAttackBundle_ValidSeeds) which references the same
seed literals so CI will fail if those known-good seeds start producing errors.
pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go (1)

157-171: Let panics escape the fuzz target.

assert.NotPanics adds avoidable overhead in the hot path and turns a native fuzz failure into an assertion failure. Direct calls already fail the fuzz case if ParseToken panics.

Suggested change
-		assert.NotPanics(t, func() {
-			_, _ = ParseToken(token, 0)
-		})
-
-		assert.NotPanics(t, func() {
-			_, _ = ParseToken(token, 10*time.Second)
-		})
-
-		assert.NotPanics(t, func() {
-			_, _ = ParseToken(token, 24*time.Hour)
-		})
+		_, _ = ParseToken(token, 0)
+		_, _ = ParseToken(token, 10*time.Second)
+		_, _ = ParseToken(token, 24*time.Hour)

Also remove the now-unused import at Line 17.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go` around lines 157 -
171, Remove the assert.NotPanics wrappers inside the f.Fuzz target and call
ParseToken(token, ...) directly for each maxLeeway (0, 10*time.Second,
24*time.Hour) so panics escape to the fuzzer; also remove the now-unused import
of the assert package (github.com/stretchr/testify/assert) so the file compiles
cleanly.
pkg/k8sutil/k8sobjects/parse_test.go (1)

127-173: Don't lock malformed refs into the success contract.

These cases treat ::, empty kind, and empty name as valid, but pkg/k8sutil/k8sobjects/parse.go:11 documents ParseRef as parsing <kind>:<group/version>:[<namespace>/]<name>. Keeping them as green-path tests will make future parser hardening look like a regression. Unless this leniency is intentional, I'd move these to fuzz-only coverage or flip them to wantErr.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/k8sutil/k8sobjects/parse_test.go` around lines 127 - 173, The tests
currently accept malformed refs as successful parses; update the cases "empty
parts with colons", "empty kind", "empty name", and "namespace with empty name"
in parse_test.go so they no longer assert a valid ObjectRef for ParseRef —
either set them to expect an error (wantErr) or move them into fuzz-only
coverage; ensure you reference the ParseRef function and ObjectRef type when
changing the assertions so the test verifies ParseRef rejects these malformed
inputs rather than treating them as green-path successes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@central/endpoints/legacy_spec_fuzz_test.go`:
- Around line 53-57: The nil guard before checking length is redundant: in the
block inside legacy_spec_fuzz_test.go remove the "cfg.Protocols != nil &&" part
and either check "len(cfg.Protocols) > 0" alone or simply range over
"cfg.Protocols" (for _, proto := range cfg.Protocols { _ = proto }) since
len(nil) == 0 and ranging a nil slice is safe; update the conditional/loop
around cfg.Protocols accordingly.

In `@pkg/auth/authproviders/idputil/state_test.go`:
- Around line 381-389: The test currently returns for any non-empty callbackURL
even when err != nil, masking regressions; change the callbackURL branch to
require that err is nil before proceeding: in the block that checks if
callbackURL != "" call require.NoError(t, err) (or assert.Nil(t, err)) then
assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#")) and
finally return; this ensures failures to accept valid localhost callback URLs
are caught (references: callbackURL, err, result, AuthorizeRoxctlClientState).

In `@pkg/booleanpolicy/fuzz_test.go`:
- Around line 34-45: The goroutine launched in each fuzz test (e.g., the one
using keyValueValueRegex and the local done channel) can leak when the timeout
case wins because the sender remains blocked; change the unbuffered done channel
creation to a 1-buffered channel (make(chan bool, 1)) in every fuzz function in
this file so the goroutine's send never blocks and no goroutines are leaked when
the timeout branch fires; update all ~22 occurrences (the done channel
declarations inside each fuzz test) consistently.
- Around line 868-873: The recover block in the deferred anonymous function is
empty which triggers SA9003; update the branch inside defer func() { if r :=
recover(); r != nil { ... } }() to explicitly consume the recovered value (e.g.,
add `_ = r`) or — if the test has access to a *testing.T — log it with
t.Logf("recovered panic: %v", r); modify the anonymous defer in fuzz_test.go
accordingly so the branch is non-empty and the analyzer is satisfied.

---

Nitpick comments:
In `@pkg/booleanpolicy/fuzz_test.go`:
- Around line 855-884: After successfully compiling the regex in FuzzCreateRegex
(the regexp.Compile call), exercise the compiled regex by calling its
MatchString (or Match) method with a simple sample input to ensure it doesn't
panic and behaves as expected; locate the compiled value returned by
regexp.Compile (the variable assigned from Compile) and add a minimal assertion
that re.MatchString("test") or another small, deterministic string runs without
error (returning or logging as appropriate) so the fuzz test covers both
compilation and a basic match path.

In `@pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go`:
- Around line 157-171: Remove the assert.NotPanics wrappers inside the f.Fuzz
target and call ParseToken(token, ...) directly for each maxLeeway (0,
10*time.Second, 24*time.Hour) so panics escape to the fuzzer; also remove the
now-unused import of the assert package (github.com/stretchr/testify/assert) so
the file compiles cleanly.

In `@pkg/k8sutil/k8sobjects/parse_test.go`:
- Around line 127-173: The tests currently accept malformed refs as successful
parses; update the cases "empty parts with colons", "empty kind", "empty name",
and "namespace with empty name" in parse_test.go so they no longer assert a
valid ObjectRef for ParseRef — either set them to expect an error (wantErr) or
move them into fuzz-only coverage; ensure you reference the ParseRef function
and ObjectRef type when changing the assertions so the test verifies ParseRef
rejects these malformed inputs rather than treating them as green-path
successes.

In `@pkg/mitre/parser_fuzz_test.go`:
- Around line 11-120: The fuzz test only ensures no panic but loses regression
signal for the curated valid seeds; add a small table-style unit test alongside
the fuzz target that iterates the same curated seed byte slices (the ones passed
to f.Add) and calls UnmarshalAndExtractMitreAttackBundle(Enterprise,
[]Platform{Linux}, data), asserting that it returns no error and returns
expected non-nil/valid result where applicable; keep the existing Fuzz function
unchanged (panic guarantee) and implement the new test (e.g.,
TestUnmarshalAndExtractMitreAttackBundle_ValidSeeds) which references the same
seed literals so CI will fail if those known-good seeds start producing errors.

In `@pkg/netutil/endpoint_test.go`:
- Around line 193-213: The fuzz test's success checks are weak and unbounded;
tighten them by (1) rejecting oversized inputs before calling ParseEndpoint
(e.g., cap input length) to improve performance, (2) replace tautological checks
on non-empty zone/port with stronger assertions such as validating port is a
valid numeric port range when present and zone matches expected
percent-encoding/format, and (3) where FormatEndpoint is expected to round-trip,
assert that ParseEndpoint(FormatEndpoint(host, zone, port)) returns the same
host/zone/port for bracketed IPv6 and other formats FormatEndpoint can produce;
make these changes around the ParseEndpoint/FormatEndpoint call sites and the
host/zone/port variables referenced in the diff.

In `@pkg/uuid/uuid_test.go`:
- Line 112: Replace the non-deterministic seed NewV4().String() used in the fuzz
seed f.Add(...) with a deterministic constant UUID string or a parsed UUID
constant (e.g., use a literal like "11111111-1111-1111-1111-111111111111" or
uuid.MustParse on a fixed value) so fuzz runs are reproducible; locate uses of
NewV4() inside f.Add(...) (and the other similar occurrence) and change them to
use the fixed constant instead of generating a random V4 UUID at test runtime.

In `@pkg/x509utils/fuzz_test.go`:
- Around line 167-181: The empty-chain invocation inside the fuzz loop is
redundant and wastes cycles; remove the constant call to
ParseCertificateChain([][]byte{}) from the f.Fuzz callback in
pkg/x509utils/fuzz_test.go and instead add a single non-fuzzed test or seed
corpus entry that calls ParseCertificateChain with an empty slice once. Keep the
fuzz callback (f.Fuzz) to only exercise varying data (the single DER and
multi-copy cases using the fuzzed data) and ensure the standalone test/seed
covers the empty-chain behavior.

In `@roxctl/common/download/parse_filename_fuzz_test.go`:
- Around line 99-103: The test currently treats any headerValue not starting
with the exact string "attachment; filename=" as a failure by asserting err !=
nil; update the check in the fuzz test so it doesn't require a strict prefix
match: remove the branch that expects an error for non-matching prefixes and
instead, when err == nil assert only a safe invariant such as filename != "" (or
other permissive correctness you're already testing) so the fuzz oracle remains
tolerant to improved parsing; locate and change the code around headerValue,
strings.HasPrefix(...), err, filename, and the t.Errorf call to implement this
weaker assertion.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 3e853c73-f994-4275-a3a1-51942186c461

📥 Commits

Reviewing files that changed from the base of the PR and between 6b81aa1 and 7499bcb.

📒 Files selected for processing (23)
  • central/endpoints/legacy_spec_fuzz_test.go
  • pkg/auth/authproviders/idputil/state_test.go
  • pkg/auth/authproviders/saml/entity_descriptors_fuzz_test.go
  • pkg/binenc/fuzz_test.go
  • pkg/booleanpolicy/fuzz_test.go
  • pkg/cvss/cvssv3/cvss_test.go
  • pkg/cvss/cvssv3/testdata/fuzz/FuzzParseCVSSV3/576241980908ec03
  • pkg/grpc/common/authn/servicecerttoken/token_fuzz_test.go
  • pkg/k8sutil/k8sobjects/parse_fuzz_test.go
  • pkg/k8sutil/k8sobjects/parse_test.go
  • pkg/mitre/parser_fuzz_test.go
  • pkg/net/addr_test.go
  • pkg/net/ip_port_test.go
  • pkg/netutil/endpoint_test.go
  • pkg/netutil/testdata/fuzz/FuzzParseEndpoint/a4057eb55d1db860
  • pkg/postgres/pgconfig/config_test.go
  • pkg/protocompat/time_test.go
  • pkg/search/parser/url_parser_fuzz_test.go
  • pkg/search/parser_test.go
  • pkg/uuid/uuid_test.go
  • pkg/x509utils/fuzz_test.go
  • roxctl/common/download/parse_filename_fuzz_test.go
  • roxctl/common/download/parse_filename_test.go

Comment on lines +53 to +57
if cfg.Protocols != nil && len(cfg.Protocols) > 0 {
for _, proto := range cfg.Protocols {
_ = proto // Ensure we can read each protocol string
}
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n central/endpoints/legacy_spec_fuzz_test.go | head -70 | tail -30

Repository: stackrox/stackrox

Length of output: 1405


🏁 Script executed:

# Check if there's lint configuration in the repo
find . -maxdepth 2 -type f \( -name ".golangci.yml" -o -name ".golangci.yaml" -o -name "golangci.yml" \) 2>/dev/null | head -5

Repository: stackrox/stackrox

Length of output: 76


🏁 Script executed:

# Search for any staticcheck configuration or comments
rg "staticcheck|S1009" . -t go -A 2 -B 2

Repository: stackrox/stackrox

Length of output: 1224


🏁 Script executed:

# Read the golangci configuration
cat .golangci.yml

Repository: stackrox/stackrox

Length of output: 6155


🏁 Script executed:

# Search specifically for S1009 in staticcheck documentation or any references
rg "S1009" . -t go -t yml -t yaml

Repository: stackrox/stackrox

Length of output: 88


🏁 Script executed:

# Verify Go semantics: check if len(nilSlice) == 0 is actually true
python3 << 'EOF'
# Go language specification check for len():
# len() of a nil slice, nil map, or nil channel is defined as 0.
# Arrays and strings cannot be nil, so len() is always defined for them.

print("""
Go Language Specification for len():
- For slices: len(nilSlice) == 0 (defined behavior)
- For maps: len(nilMap) == 0 (defined behavior)
- For channels: len(nilChan) == 0 (defined behavior)
- Arrays and strings cannot be nil; len() is always defined

Therefore: the cfg.Protocols != nil check is redundant if cfg.Protocols is a
slice and the intent is just to check whether it has elements.
""")
EOF

Repository: stackrox/stackrox

Length of output: 445


🏁 Script executed:

# Check the actual type definition of cfg.Protocols
rg "type.*Config.*struct|Protocols" central/endpoints -A 5 -B 2 | head -40

Repository: stackrox/stackrox

Length of output: 2680


🏁 Script executed:

# Search for any CI configuration or test results that might show lint failures
find . -maxdepth 3 -type f \( -name "*.yml" -o -name "*.yaml" -o -name "Makefile" -o -name ".github" \) 2>/dev/null | head -10

Repository: stackrox/stackrox

Length of output: 442


🏁 Script executed:

# Search for similar patterns to see if redundant nil checks are generally flagged/fixed in codebase
rg "if .+ != nil && len\(" . -t go | head -20

Repository: stackrox/stackrox

Length of output: 2375


Remove the redundant nil guard.

In Go, len(nilSlice) safely returns 0, so cfg.Protocols != nil is logically unnecessary before the len() check. While the pattern is defensible as defensive programming, it can be simplified.

Suggested change
-			if cfg.Protocols != nil && len(cfg.Protocols) > 0 {
+			if len(cfg.Protocols) > 0 {
🧰 Tools
🪛 GitHub Actions: Style

[error] 53-53: golangci-lint (staticcheck) S1009: should omit nil check; len() for nil slices is defined as zero

🪛 GitHub Check: golangci-lint

[failure] 53-53:
S1009: should omit nil check; len() for nil slices is defined as zero (staticcheck)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@central/endpoints/legacy_spec_fuzz_test.go` around lines 53 - 57, The nil
guard before checking length is redundant: in the block inside
legacy_spec_fuzz_test.go remove the "cfg.Protocols != nil &&" part and either
check "len(cfg.Protocols) > 0" alone or simply range over "cfg.Protocols" (for
_, proto := range cfg.Protocols { _ = proto }) since len(nil) == 0 and ranging a
nil slice is safe; update the conditional/loop around cfg.Protocols accordingly.

Comment on lines +381 to +389
// 3. If callbackURL is set, result should be prefixed with AuthorizeRoxctlClientState
// but only if the URL is valid and localhost
if callbackURL != "" {
// May error if URL is invalid or not localhost
if err == nil {
assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#"))
}
return
}

⚠️ Potential issue | 🟠 Major

Callback URL fuzz branch currently masks regressions.

At Line 383, any non-empty callbackURL returns early even when err != nil, so a regression that starts rejecting valid localhost callback URLs can pass unnoticed.

Proposed tightening of assertions
 import (
+	"net/url"
 	"strings"
 	"testing"
 
 	"github.com/stretchr/testify/assert"
 )
@@
 		// 3. If callbackURL is set, result should be prefixed with AuthorizeRoxctlClientState
-		//    but only if the URL is valid and localhost
+		//    only for valid localhost URLs
 		if callbackURL != "" {
-			// May error if URL is invalid or not localhost
-			if err == nil {
-				assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#"))
-			}
+			u, parseErr := url.Parse(callbackURL)
+			isLocalhost := parseErr == nil && u.Hostname() == "localhost"
+			if isLocalhost {
+				assert.NoError(t, err)
+				assert.True(t, strings.HasPrefix(result, AuthorizeRoxctlClientState+"#"))
+			} else {
+				assert.Error(t, err)
+			}
 			return
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/auth/authproviders/idputil/state_test.go` around lines 381 - 389, The
test currently returns for any non-empty callbackURL even when err != nil,
masking regressions; change the callbackURL branch to require that err is nil
before proceeding: in the block that checks if callbackURL != "" call
require.NoError(t, err) (or assert.Nil(t, err)) then assert.True(t,
strings.HasPrefix(result, AuthorizeRoxctlClientState+"#")) and finally return;
this ensures failures to accept valid localhost callback URLs are caught
(references: callbackURL, err, result, AuthorizeRoxctlClientState).
