Draft
Conversation
Skipping CI for Draft Pull Request.
Contributor
Hey - I've found 1 issue and left some high-level feedback:
- The Authorization header prefix was changed from `Bearer` to `Bearerxxx` in `doRoundTrip`, which appears unintentional and will break authenticated requests; please revert to the correct scheme.
- The retry logic unconditionally buffers the entire request body into memory (`io.ReadAll`); consider guarding this with a size limit or skipping retries for large/streaming bodies to avoid excessive memory usage.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The Authorization header prefix was changed from `Bearer` to `Bearerxxx` in `doRoundTrip`, which appears unintentional and will break authenticated requests; please revert to the correct scheme.
- The retry logic unconditionally buffers the entire request body into memory (`io.ReadAll`); consider guarding this with a size limit or skipping retries for large/streaming bodies to avoid excessive memory usage.
## Individual Comments
### Comment 1
<location> `sensor/common/centralproxy/transport.go:161` </location>
<code_context>
// Clone the request to avoid modifying the original.
reqCopy := req.Clone(req.Context())
- reqCopy.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
+ reqCopy.Header.Set("Authorization", fmt.Sprintf("Bearerxxx %s", token))
return t.base.RoundTrip(reqCopy) //nolint:wrapcheck
</code_context>
<issue_to_address>
**issue (bug_risk):** Authorization header uses a likely incorrect "Bearerxxx" scheme instead of "Bearer".
Using `Bearerxxx` will cause OAuth2 endpoints like Central to reject the token. Unless this is a deliberate temporary override, keep this as `Bearer` so authentication continues to work.
</issue_to_address>
// Clone the request to avoid modifying the original.
reqCopy := req.Clone(req.Context())
- reqCopy.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
+ reqCopy.Header.Set("Authorization", fmt.Sprintf("Bearerxxx %s", token))
Contributor
issue (bug_risk): Authorization header uses a likely incorrect "Bearerxxx" scheme instead of "Bearer".
Using Bearerxxx will cause OAuth2 endpoints like Central to reject the token. Unless this is a deliberate temporary override, keep this as Bearer so authentication continues to work.
Contributor
Images are ready for the commit at e095b1f. To use with deploy scripts, first …
Replace incorrect `Deployment_STATE_*` with correct `DeploymentState_STATE_*` in files that were not caught during the interactive rebase.

Affected files:
- central/deployment/datastore/datastore.go
- central/deployment/datastore/datastore_impl.go
- pkg/protoconv/resources/resources_test.go

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
When a deployment is soft-deleted, it should be immediately removed from all ranking systems to ensure ranking data reflects only active deployments.

Changes:
- Remove deployment from deploymentRanker when soft-deleted
- Update namespace ranker by subtracting the deployment's risk score
- Update cluster ranker by subtracting the deployment's risk score

This ensures consistency with query filtering (which excludes deleted deployments) and prevents ranking data from including stale deployments.

User request: "should soft deleted deployments be immediately removed from the deploymentranker?"

Why: Namespace and cluster rankers were not being updated when deployments were soft-deleted, leading to incorrect aggregate risk scores that included deleted deployments. The deployment ranker also retained entries for soft-deleted deployments despite them being filtered from all queries.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
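The soft-delete flow described in this commit can be sketched as follows. The `ranker` type and `onSoftDelete` function are simplified stand-ins for the real rankers in the codebase, whose types and method names differ:

```go
package main

import "fmt"

// ranker is a minimal stand-in for a risk ranker: it keeps a score per
// entity ID. The real implementation maintains sorted rank order too.
type ranker struct{ scores map[string]float32 }

func newRanker() *ranker                       { return &ranker{scores: map[string]float32{}} }
func (r *ranker) Add(id string, score float32) { r.scores[id] += score }
func (r *ranker) Remove(id string)             { delete(r.scores, id) }

// onSoftDelete mirrors the change in the commit message: drop the
// deployment from the deployment ranker and subtract its risk score from
// the namespace and cluster aggregates. All names are illustrative.
func onSoftDelete(deploymentRanker, nsRanker, clusterRanker *ranker,
	deploymentID, nsID, clusterID string, riskScore float32) {
	deploymentRanker.Remove(deploymentID)
	nsRanker.Add(nsID, -riskScore)           // subtract from namespace aggregate
	clusterRanker.Add(clusterID, -riskScore) // subtract from cluster aggregate
}

func main() {
	d, n, c := newRanker(), newRanker(), newRanker()
	d.Add("dep-1", 5)
	n.Add("ns-1", 5)
	c.Add("cl-1", 5)
	onSoftDelete(d, n, c, "dep-1", "ns-1", "cl-1", 5)
	fmt.Println(len(d.scores), n.scores["ns-1"], c.scores["cl-1"])
}
```

The key invariant is that a soft-deleted deployment contributes nothing to any ranker, matching the query layer that already filters it out.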
Changed the GraphQL deployment loader's CountAll method to count deployments across all states (active and soft-deleted) instead of only active deployments.

Rationale: The name "CountAll" semantically implies counting everything, while CountFromQuery is available for filtered counts. This makes the API more intuitive:
- CountAll() → all deployments regardless of state
- CountFromQuery(query) → filtered deployments (can include active filter)

Note: CountAll is currently unused in the codebase but is part of the common loader interface pattern. If it is ever used in the future, it will now return the total count across all states, as the name implies.

User request: "let CountAll actually count all deployments of all states - including soft deleted ones."

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
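The CountAll/CountFromQuery split can be illustrated with a toy loader. This is not the real loader interface (the actual one takes a context and a search query); it only demonstrates the intended semantics:

```go
package main

import "fmt"

type state int

const (
	active state = iota
	softDeleted
)

// deploymentLoader is an illustrative stand-in for the GraphQL loader;
// the real interface takes a context and *v1.Query rather than a predicate.
type deploymentLoader struct{ states []state }

// CountAll counts deployments of every state, including soft-deleted ones.
func (l deploymentLoader) CountAll() int { return len(l.states) }

// CountFromQuery applies a caller-supplied filter; here the "query" is
// reduced to a simple predicate over the deployment state.
func (l deploymentLoader) CountFromQuery(match func(state) bool) int {
	n := 0
	for _, s := range l.states {
		if match(s) {
			n++
		}
	}
	return n
}

func main() {
	l := deploymentLoader{states: []state{active, active, softDeleted}}
	fmt.Println(l.CountAll(), l.CountFromQuery(func(s state) bool { return s == active }))
}
```

Callers that previously relied on CountAll returning only active deployments would now need CountFromQuery with an active-state filter.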
Force-pushed from e095b1f to 89d37dd.
Description
change me!
User-facing documentation
Testing and quality
Automated testing
How I validated my change
change me!