
extensions/file_storage: headerssetterextension + file_storage not working for otlp grpc exporter #11780

Open

varkey98 opened this issue Oct 25, 2024 · 5 comments
Labels: bug (Something isn't working)

varkey98 commented Oct 25, 2024

Component(s)

extension/headerssetter, extension/storage/filestorage, exporter/otlpgrpc

What happened?

Description

When the headers_setter extension is used together with the file_storage extension, the headers (taken from client metadata in the context) are not set on the traces exported by the OTLP gRPC exporter.

Steps to Reproduce

Set up the file_storage extension and the headers_setter extension to add a header taken from client metadata (a minimal client sketch follows).
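
A minimal client sketch for this step, assuming the collector listens on localhost:4317 and using a made-up "test-token" value; it uses the OpenTelemetry Go SDK's OTLP gRPC trace exporter to send the agent-token entry as gRPC metadata:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// "agent-token" is sent as gRPC metadata on every export request; with
	// include_metadata: true the collector's OTLP receiver keeps it in the
	// request context, which is where headers_setter's from_context looks.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"), // assumed collector endpoint
		otlptracegrpc.WithInsecure(),
		otlptracegrpc.WithHeaders(map[string]string{"agent-token": "test-token"}), // made-up value
	)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()

	// Emit one span so the collector has something to export downstream.
	_, span := tp.Tracer("repro").Start(ctx, "test-span")
	span.End()
}
```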

Expected Result

The downstream OTLP server receives the header value taken from the client metadata in the context.

Actual Result

The downstream server receives an empty value for the key.

Collector version

0.112.0

Environment information

Environment

OS: macOS (Apple M1)
Docker image version: otel/opentelemetry-collector-contrib:0.112.0

OpenTelemetry Collector configuration

extensions:
  file_storage:
    directory: /tmp/
    timeout: 1s
    create_directory: true
    compaction:
      on_start: true
      on_rebound: true
      directory: /tmp/
      max_transaction_size: 65_536
  headers_setter:
    headers:
      - action: insert
        key: agent-token
        from_context: agent-token

receivers:
  otlp:
    protocols:
      grpc:
        include_metadata: true
        endpoint: 0.0.0.0:4317
      http:
        include_metadata: true
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 200ms
    send_batch_size: 8192
    send_batch_max_size: 10000
    metadata_keys:
      - agent-token
    metadata_cardinality_limit: 10

exporters:
  otlp:
    endpoint: host.docker.internal:5317
    compression: none
    sending_queue:
      storage: file_storage
    auth:
      authenticator: headers_setter
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
  extensions: [headers_setter, file_storage]

Log output

varkeychanjacob@Varkeychans-Laptop *** % docker run --rm -it -v /Users/varkeychanjacob/Projects/****/otelcolconfig.yaml:/etc/otelcol-contrib/config.yaml -p 4317:4317 -v /Users/varkeychanjacob/Projects/***/:/tmp  otel/opentelemetry-collector-contrib:latest
2024-10-25T19:48:38.145Z        info    [email protected]/service.go:135 Setting up own telemetry...
2024-10-25T19:48:38.145Z        info    telemetry/metrics.go:70 Serving metrics {"address": "localhost:8888", "metrics level": "Normal"}
2024-10-25T19:48:38.147Z        info    [email protected]/service.go:207 Starting otelcol-contrib...     {"Version": "0.112.0", "NumCPU": 10}
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:39     Starting extensions...
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:42     Extension is starting...        {"kind": "extension", "name": "headers_setter"}
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:59     Extension started.      {"kind": "extension", "name": "headers_setter"}
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:42     Extension is starting...        {"kind": "extension", "name": "file_storage"}
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:59     Extension started.      {"kind": "extension", "name": "file_storage"}
2024-10-25T19:48:38.149Z        info    [email protected]/client.go:235      finished compaction     {"kind": "extension", "name": "file_storage", "directory": "/tmp/exporter_otlp__traces", "elapsed": 0.000651583}
2024-10-25T19:48:38.149Z        warn    [email protected]/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks.       {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2024-10-25T19:48:38.149Z        info    [email protected]/otlp.go:112       Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
2024-10-25T19:48:38.149Z        warn    [email protected]/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks.       {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2024-10-25T19:48:38.149Z        info    [email protected]/otlp.go:169       Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
2024-10-25T19:48:38.149Z        info    [email protected]/service.go:230 Everything is ready. Begin running and processing data.
^C2024-10-25T19:50:27.492Z      info    [email protected]/collector.go:328       Received signal from OS {"signal": "interrupt"}
2024-10-25T19:50:27.493Z        info    [email protected]/service.go:266 Starting shutdown...
2024-10-25T19:50:27.505Z        info    extensions/extensions.go:66     Stopping extensions...
2024-10-25T19:50:27.506Z        info    [email protected]/service.go:280 Shutdown complete.

Additional context

No response

@varkey98 added the bug label Oct 25, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@VihasMakwana (Contributor)

I'll try to reproduce this and keep you posted.

@varkey98 (Author)

> I'll try to reproduce this and keep you posted.

Thank you

@jpkrohling (Member)

jpkrohling commented Dec 2, 2024

I think this isn't even a problem with the file storage; I think it's deeper, at the persistent queue from exporterqueue. I'm transferring this to the core repo.
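
A minimal sketch of the suspected mechanism, assuming the storage-backed queue serializes only the request payload (the enqueue/dequeue helpers below are hypothetical illustrations, not the collector's actual exporterqueue code): the client metadata that headers_setter reads via from_context lives only on the in-memory context, so after a round trip through file storage the lookup comes back empty.

```go
package sketch

import (
	"context"

	"go.opentelemetry.io/collector/client"
	"go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp"
)

// enqueue is a hypothetical stand-in for persisting a queued request: only
// the trace payload is marshaled to bytes; the ctx is discarded.
func enqueue(_ context.Context, req ptraceotlp.ExportRequest) ([]byte, error) {
	return req.MarshalProto()
}

// dequeue is a hypothetical stand-in for reading the request back from the
// persistent queue: it has to start from a fresh context, so any client
// metadata attached by the receiver is gone.
func dequeue(buf []byte) (context.Context, ptraceotlp.ExportRequest, error) {
	req := ptraceotlp.NewExportRequest()
	err := req.UnmarshalProto(buf)
	return context.Background(), req, err
}

// agentToken mirrors what headers_setter's from_context lookup does: after
// dequeue, client.FromContext returns empty metadata, so this yields nil and
// the exported header ends up empty.
func agentToken(ctx context.Context) []string {
	return client.FromContext(ctx).Metadata.Get("agent-token")
}
```

If that is what's happening, the metadata would presumably need to be persisted alongside the request (or the headers resolved before enqueueing) for headers_setter to work with a storage-backed sending queue.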

@jpkrohling transferred this issue from open-telemetry/opentelemetry-collector-contrib Dec 2, 2024
@VihasMakwana (Contributor)

Thanks @jpkrohling for transferring.
I can take this up. Can someone from @open-telemetry/collector-triagers assign it to me?
