Merge branch 'main' into kgeckhart/update-cloudwatch-exporter-dependency
andriikushch authored Jun 18, 2024
2 parents 53a174d + e152752 commit 49d49a6
Showing 57 changed files with 450 additions and 72 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/trivy.yml
@@ -26,7 +26,7 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@b2933f565dbc598b29947660e66259e3c7bc8561
uses: aquasecurity/trivy-action@595be6a0f6560a0a8fc419ddf630567fc623531d
with:
image-ref: 'grafana/agent:main'
format: 'template'
24 changes: 24 additions & 0 deletions CHANGELOG.md
@@ -10,6 +10,28 @@ internal API changes are not present.
Main (unreleased)
-----------------

### Features

- A new `otelcol.exporter.debug` component for printing OTel telemetry from
other `otelcol` components to the console. (@BarunKGP)

v0.41.1 (2024-06-07)
--------------------

### Breaking changes

- Applied OpenTelemetry [CVE-2024-36129](https://github.com/open-telemetry/opentelemetry-collector/security/advisories/GHSA-c74f-6mfw-mm4v) fixes. (@mattdurham)
  - The default value of the `max_request_body_size` setting for the `otelcol.receiver.otlp`, `otelcol.receiver.zipkin`, and `otelcol.receiver.jaeger` components changed from unlimited to `20MiB`.

### Enhancements

- Updated pyroscope to v0.4.6, introducing the `symbols_map_size` and `pid_map_size` configuration options. (@simonswine)


v0.41.0 (2024-05-31)
--------------------

### Breaking changes

- The default listen port for `otelcol.receiver.opencensus` has changed from
@@ -40,6 +62,8 @@ Main (unreleased)

- Added support for `otelcol` configuration conversion in `grafana-agent convert` and `grafana-agent run` commands. (@rfratto, @erikbaranowski, @tpaschalis, @hainenber)

- Prefix Faro measurement values with `value_` to align with the latest Faro cloud receiver updates. (@codecapitano)

- Added support for `static` configuration conversion of the `traces` subsystem. (@erikbaranowski, @wildum)

- Add automatic conversion for `legacy_positions_file` in component `loki.source.file`. (@mattdurham)
2 changes: 1 addition & 1 deletion docs/developer/release/8-update-helm-charts.md
@@ -30,7 +30,7 @@ Our Helm charts require some version updates as well.

4. Create a branch from `main` for [grafana/agent](https://github.com/grafana/agent).

5. Update the helm chart code in `$agentRepo/operations/helm`:
5. Update the helm chart code in `operations/helm/charts/grafana-agent`:

1. Update `Chart.yaml` with the new helm version and app version.
2. Update `CHANGELOG.md` with a new section for the helm version.
2 changes: 1 addition & 1 deletion docs/sources/_index.md
@@ -9,7 +9,7 @@ title: Grafana Agent
description: Grafana Agent is a flexible, performant, vendor-neutral, telemetry collector
weight: 350
cascade:
AGENT_RELEASE: v0.40.5
AGENT_RELEASE: v0.41.1
OTEL_VERSION: v0.96.0
refs:
variants:
1 change: 1 addition & 0 deletions docs/sources/flow/reference/compatibility/_index.md
@@ -287,6 +287,7 @@ The following components, grouped by namespace, _export_ OpenTelemetry `otelcol.
- [otelcol.connector.servicegraph](../components/otelcol.connector.servicegraph)
- [otelcol.connector.spanlogs](../components/otelcol.connector.spanlogs)
- [otelcol.connector.spanmetrics](../components/otelcol.connector.spanmetrics)
- [otelcol.exporter.debug](../components/otelcol.exporter.debug)
- [otelcol.exporter.loadbalancing](../components/otelcol.exporter.loadbalancing)
- [otelcol.exporter.logging](../components/otelcol.exporter.logging)
- [otelcol.exporter.loki](../components/otelcol.exporter.loki)
102 changes: 102 additions & 0 deletions docs/sources/flow/reference/components/otelcol.exporter.debug.md
@@ -0,0 +1,102 @@
---
aliases:
- /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.debug/
- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.debug/
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.debug/
- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.debug/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.debug/
description: Learn about otelcol.exporter.debug
labels:
  stage: experimental
title: otelcol.exporter.debug
---

# otelcol.exporter.debug

`otelcol.exporter.debug` accepts telemetry data from other `otelcol` components and writes it to the console (stderr).
You can control the verbosity of the logs.

{{< admonition type="note" >}}
`otelcol.exporter.debug` is a wrapper over the upstream OpenTelemetry Collector `debug` exporter.
If necessary, bug reports or feature requests are redirected to the upstream repository.
{{< /admonition >}}

Multiple `otelcol.exporter.debug` components can be specified by giving them different labels.

## Usage

```river
otelcol.exporter.debug "LABEL" { }
```

## Arguments

`otelcol.exporter.debug` supports the following arguments:

Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`verbosity` | `string` | Verbosity of the generated logs. | `"normal"` | no
`sampling_initial` | `int` | Number of messages initially logged each second. | `2` | no
`sampling_thereafter` | `int` | Sampling rate after the initial messages are logged. | `500` | no

The `verbosity` argument must be one of `"basic"`, `"normal"`, or `"detailed"`.
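
For example, a minimal sketch that lowers the verbosity and relaxes the sampling rate. The label and argument values are illustrative:

```river
otelcol.exporter.debug "sampled" {
  verbosity           = "basic"
  sampling_initial    = 5
  sampling_thereafter = 200
}
```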

## Exported fields

The following fields are exported and can be referenced by other components:

Name | Type | Description
---- | ---- | -----------
`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.

`input` accepts `otelcol.Consumer` data for any telemetry signal (metrics,
logs, or traces).

## Component health

`otelcol.exporter.debug` is only reported as unhealthy if given an invalid
configuration.

## Debug information

`otelcol.exporter.debug` does not expose any component-specific debug
information.

## Example

This example scrapes Prometheus UNIX metrics and writes them to the console:

```river
prometheus.exporter.unix "default" { }

prometheus.scrape "default" {
  targets    = prometheus.exporter.unix.default.targets
  forward_to = [otelcol.receiver.prometheus.default.receiver]
}

otelcol.receiver.prometheus "default" {
  output {
    metrics = [otelcol.exporter.debug.default.input]
  }
}

otelcol.exporter.debug "default" {
  verbosity           = "detailed"
  sampling_initial    = 1
  sampling_thereafter = 1
}
```
<!-- START GENERATED COMPATIBLE COMPONENTS -->

## Compatible components

`otelcol.exporter.debug` has exports that can be consumed by the following components:

- Components that consume [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-consumers)

{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
Refer to the linked documentation for more details.
{{< /admonition >}}

<!-- END GENERATED COMPATIBLE COMPONENTS -->
@@ -164,7 +164,7 @@ The following arguments are supported:
Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:14268"` | no
`max_request_body_size` | `string` | Maximum request body size the server will allow. No limit when unset. | | no
`max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no
`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no

### cors block
@@ -142,7 +142,7 @@ The following arguments are supported:
Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:4318"` | no
`max_request_body_size` | `string` | Maximum request body size the server will allow. No limit when unset. | | no
`max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no
`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no
`traces_url_path` | `string` | The URL path to receive traces on. | `"/v1/traces"`| no
`metrics_url_path` | `string` | The URL path to receive metrics on. | `"/v1/metrics"` | no
@@ -39,7 +39,7 @@ Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`parse_string_tags` | `bool` | Parse string tags and binary annotations into non-string types. | `false` | no
`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:9411"` | no
`max_request_body_size` | `string` | Maximum request body size the HTTP server will allow. No limit when unset. | | no
`max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no
`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no

If `parse_string_tags` is `true`, string tags and binary annotations are
11 changes: 11 additions & 0 deletions docs/sources/flow/release-notes.md
@@ -29,13 +29,24 @@ Other release notes for the different {{< param "PRODUCT_ROOT_NAME" >}} variants
[release-notes-operator]: {{< relref "../operator/release-notes.md" >}}
{{< /admonition >}}

## v0.41.1

### Breaking change: `max_request_body_size` for `otelcol.receiver.otlp`, `otelcol.receiver.zipkin`, and `otelcol.receiver.jaeger` changed

The default value for `max_request_body_size` changed from unlimited to `20 MiB`. You can still raise the limit explicitly,
but you can no longer configure `max_request_body_size` to accept requests of unlimited size.
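
A minimal sketch of raising the limit for `otelcol.receiver.otlp`, assuming the `http` block; the value and downstream exporter are illustrative:

```river
otelcol.receiver.otlp "default" {
  http {
    // Raise the limit above the new 20MiB default. The exact value is illustrative.
    max_request_body_size = "100MiB"
  }

  output {
    // Illustrative downstream component.
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```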

## v0.41

### Breaking change: default `otelcol.receiver.opencensus` listen port changed

The default listen port for `otelcol.receiver.opencensus` has changed from 4317 to 55678 to align with the upstream defaults.
To retain the previous listen port, explicitly set the `endpoint` argument to `0.0.0.0:4317` before upgrading.
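
A minimal sketch of pinning the old port; the downstream exporter is illustrative:

```river
otelcol.receiver.opencensus "default" {
  // Restore the pre-v0.41 listen address instead of the new 0.0.0.0:55678 default.
  endpoint = "0.0.0.0:4317"

  output {
    // Illustrative downstream component.
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```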

### Breaking change: default `mimir.rules.kubernetes` sync interval changed

The default sync interval for `mimir.rules.kubernetes` has changed from `30s` to `5m` to reduce load on Mimir.
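
If you depend on the previous behavior, you can set the interval back explicitly. A minimal sketch, assuming the `address` and `sync_interval` argument names and an illustrative ruler URL:

```river
mimir.rules.kubernetes "default" {
  // Address of the Mimir ruler; the URL shown here is illustrative.
  address       = "http://mimir-ruler.example.com:8080"

  // Restore the pre-v0.41 sync interval.
  sync_interval = "30s"
}
```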

## v0.40

### Breaking change: Prohibit the configuration of services within modules.
4 changes: 3 additions & 1 deletion docs/sources/flow/tasks/configure/configure-kubernetes.md
@@ -99,6 +99,7 @@ Use this method if you prefer to embed your {{< param "PRODUCT_NAME" >}} configu

```yaml
agent:
  mode: "flow"
  configMap:
    content: |-
      // Write your Agent config here:
@@ -142,7 +143,8 @@ Use this method if you prefer to write your {{< param "PRODUCT_NAME" >}} configu
1. Modify Helm Chart's configuration in your `values.yaml` to use the existing ConfigMap:

```yaml
agent:
  mode: "flow"
  configMap:
    create: false
    name: agent-config
12 changes: 5 additions & 7 deletions docs/sources/flow/tasks/debug.md
@@ -65,7 +65,7 @@ Follow these steps to debug issues with {{< param "PRODUCT_NAME" >}}:
### Home page

![](../../assets/ui_home_page.png)
![The Agent UI home page showing a table of components.](/media/docs/agent/ui_home_page.png)

The home page shows a table of components defined in the configuration file and their health.

@@ -75,14 +75,14 @@ Click the {{< param "PRODUCT_ROOT_NAME" >}} logo to navigate back to the home pa

### Graph page

![](../../assets/ui_graph_page.png)
![The Graph page showing a graph view of components.](/media/docs/agent/ui_graph_page.png)

The **Graph** page shows a graph view of components defined in the configuration file and their health.
Clicking a component in the graph navigates to the [Component detail page](#component-detail-page) for that component.

### Component detail page

![](../../assets/ui_component_detail_page.png)
![The component detail page showing detailed information about the components.](/media/docs/agent/ui_component_detail_page.png)

The component detail page shows the following information for each component:

@@ -95,9 +95,9 @@ The component detail page shows the following information for each component:
### Clustering page

![](../../assets/ui_clustering_page.png)
![The Clustering page showing detailed information about each cluster node.](/media/docs/agent/ui_clustering_page.png)

The clustering page shows the following information for each cluster node:
The Clustering page shows the following information for each cluster node:

* The node's name.
* The node's advertised address.
@@ -144,5 +144,3 @@ Some issues that appear to be clustering issues may be symptoms of other issues,
for example, problems with scraping or service discovery can result in missing
metrics for an agent that can be interpreted as a node not joining the cluster.
{{< /admonition >}}


11 changes: 5 additions & 6 deletions docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md
@@ -165,7 +165,7 @@ loki.write "default" {
To use Loki with basic-auth, which is required with Grafana Cloud Loki, you must configure the [loki.write](ref:loki.write) component.
You can get the Loki configuration from the Loki **Details** page in the [Grafana Cloud Portal][]:

![](../../../assets/tasks/loki-config.png)
![The Loki Details page showing information about the Loki configuration.](/media/docs/agent/loki-config.png)

```river
otelcol.exporter.loki "grafana_cloud_loki" {
@@ -200,7 +200,7 @@ otelcol.exporter.otlp "default" {
To use Tempo with basic-auth, which is required with Grafana Cloud Tempo, you must use the [otelcol.auth.basic](ref:otelcol.auth.basic) component.
You can get the Tempo configuration from the Tempo **Details** page in the [Grafana Cloud Portal][]:

![](../../../assets/tasks/tempo-config.png)
![The Tempo Details page showing information about the Tempo configuration.](/media/docs/agent/tempo-config.png)

```river
otelcol.exporter.otlp "grafana_cloud_tempo" {
@@ -237,7 +237,7 @@ prometheus.remote_write "default" {
To use Prometheus with basic-auth, which is required with Grafana Cloud Prometheus, you must configure the [prometheus.remote_write](ref:prometheus.remote_write) component.
You can get the Prometheus configuration from the Prometheus **Details** page in the [Grafana Cloud Portal][]:

![](../../../assets/tasks/prometheus-config.png)
![The Prometheus Details page showing information about the Prometheus configuration.](/media/docs/agent/prometheus-config.png)

```river
otelcol.exporter.prometheus "grafana_cloud_prometheus" {
@@ -361,14 +361,13 @@ ts=2023-05-09T09:37:15.304109Z component=otelcol.receiver.otlp.default level=inf
ts=2023-05-09T09:37:15.304234Z component=otelcol.receiver.otlp.default level=info msg="Starting HTTP server" endpoint=0.0.0.0:4318
```

You can now check the pipeline graphically by visiting http://localhost:12345/graph
You can now check the pipeline graphically by visiting <http://localhost:12345/graph>

![](../../../assets/tasks/otlp-lgtm-graph.png)
![The Graph page showing a graphical representation of the pipeline.](/media/docs/agent/otlp-lgtm-graph.png)

[OpenTelemetry]: https://opentelemetry.io
[Grafana Loki]: https://grafana.com/oss/loki/
[Grafana Tempo]: https://grafana.com/oss/tempo/
[Grafana Cloud Portal]: https://grafana.com/docs/grafana-cloud/account-management/cloud-portal#your-grafana-cloud-stack
[Prometheus Remote Write]: https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage
[Grafana Mimir]: https://grafana.com/oss/mimir/

2 changes: 1 addition & 1 deletion docs/sources/operator/release-notes.md
@@ -14,7 +14,7 @@ refs:
release-notes-static:
- pattern: /docs/agent/
destination: /docs/agent/<AGENT_VERSION>/static/release-notes/
- pattern: /docs/agent/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/send-data/agent/static/release-notes/
release-notes-flow:
- pattern: /docs/agent/
@@ -358,27 +358,27 @@ pick the ones you need.
`length` controls how far back in time CloudWatch metrics are considered during each agent scrape.
If both settings are configured, the time parameters when calling CloudWatch APIs work as follows:

![](https://grafana.com/media/docs/agent/cloudwatch-period-and-length-time-model-2.png)
![A diagram showing how the time parameters work when both period and length are configured.](/media/docs/agent/cloudwatch-period-and-length-time-model-2.png)

As noted above, if a different `period` or `length` is configured across multiple metrics under the same static or discovery job,
the minimum of all periods and the maximum of all lengths are used.

On the other hand, if `length` is not configured, both period and length settings are calculated based on
On the other hand, if `length` isn't configured, both period and length settings are calculated based on
the required `period` configuration attribute.

If all metrics within a job (discovery or static) have the same `period` value configured, CloudWatch APIs will be
requested for metrics from the scrape time to `period` seconds in the past.
The values of these metrics are exported to Prometheus.

![](https://grafana.com/media/docs/agent/cloudwatch-single-period-time-model.png)
![A diagram showing how the time parameters work when a single period is configured.](/media/docs/agent/cloudwatch-single-period-time-model.png)

On the other hand, if metrics with different `period`s are configured under an individual job, this works differently.
First, two variables are calculated by aggregating all periods: `length`, taking the maximum value of all periods, and
the new `period` value, taking the minimum of all periods. Then, CloudWatch APIs are requested for metrics from
`now - length` to `now`, aggregating each into samples of `period` seconds. For each metric, the most recent sample
is exported to Prometheus. An example is sketched after the following diagram.

![](https://grafana.com/media/docs/agent/cloudwatch-multiple-period-time-model.png)
![A diagram showing how the time parameters work when multiple periods are configured.](/media/docs/agent/cloudwatch-multiple-period-time-model.png)
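
For example, the following sketch, written in River syntax for the Flow `prometheus.exporter.cloudwatch` component with an illustrative SQS discovery job, configures two metrics with different periods. In this case requests cover the last `5m` of data (the maximum period) aggregated into `1m` samples (the minimum period):

```river
prometheus.exporter.cloudwatch "queues" {
  sts_region = "us-east-2"

  discovery {
    type    = "sqs"
    regions = ["us-east-2"]

    // The two periods differ, so the scrape requests the last 5m of data
    // (maximum period) aggregated into 1m samples (minimum period).
    metric {
      name       = "ApproximateNumberOfMessagesVisible"
      statistics = ["Average"]
      period     = "5m"
    }

    metric {
      name       = "NumberOfMessagesSent"
      statistics = ["Sum"]
      period     = "1m"
    }
  }
}
```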

## Supported services in discovery jobs
