open-telemetry / opentelemetry-collector

OpenTelemetry Collector

Home Page: https://opentelemetry.io

Ability to propagate the collector's self trace context beyond the collector's boundary

pavankrish123 opened this issue · comments

Is your feature request related to a problem? Please describe.

We would like to observe the request flow from the collector through the various downstream processing services. Currently the collector starts a span when exporting an ExportRequest and ends it when the request completes, but by default it does not propagate that context. The spans can be viewed through the zpages extension.

While the zpages extension itself doesn't need context propagation, cases where we want to stitch the collector's request (as a client) to the downstream processing services do. Propagation gives a single end-to-end view and would be invaluable for tracking and correlating the request and processing workflow.

Describe the solution you'd like

  1. Perhaps we can enable propagation via the auto-propagator, which lets users set propagators through the OTEL_PROPAGATORS environment variable (see the sketch after the config example below)

  2. Be more explicit and allow setting propagators via config, for example:

service:
  telemetry:
    traces:
      propagators: tracecontext
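
A minimal sketch of option 1, assuming the contrib autoprop package (go.opentelemetry.io/contrib/propagators/autoprop) is pulled in; it builds the global propagator from the OTEL_PROPAGATORS environment variable:

package main

import (
	"go.opentelemetry.io/contrib/propagators/autoprop"
	"go.opentelemetry.io/otel"
)

func main() {
	// NewTextMapPropagator builds a composite propagator from OTEL_PROPAGATORS
	// (e.g. OTEL_PROPAGATORS=tracecontext,baggage); when the variable is unset
	// it falls back to tracecontext and baggage.
	otel.SetTextMapPropagator(autoprop.NewTextMapPropagator())
}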

Additional context
I did a little experiment and added the following code:

--- a/service/service.go
+++ b/service/service.go
@@ -17,6 +17,8 @@ package service // import "go.opentelemetry.io/collector/service"
 import (
        "context"
        "fmt"
+       "go.opentelemetry.io/otel"
+       "go.opentelemetry.io/otel/propagation"
 
        "go.opentelemetry.io/otel/metric/nonrecording"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
@@ -58,6 +60,12 @@ func newService(set *settings) (*service, error) {
                // needed for supporting the zpages extension
                sdktrace.WithSampler(internal.AlwaysRecord()),
        )
+       otel.SetTextMapPropagator(
+               propagation.NewCompositeTextMapPropagator(
+                       propagation.TraceContext{},
+                       propagation.Baggage{},
+               ),
+       )
 

The context is propagated to the downstream services:

2022/06/21 19:08:29 request headers: map[Accept-Encoding:[gzip] Content-Encoding:[gzip] Content-Length:[858] Content-Type:[application/x-protobuf] Traceparent:[00-f826f9c695d706ccde4ed3a514813d46-3aed7ac6ab07b2fb-00] User-Agent:[Local OpenTelemetry Collector binary, testing only./0.53.0-dev (darwin/amd64)]]
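
For illustration, this is roughly how a downstream service could pick the propagated context back up from that Traceparent header; the handler path and port here are placeholders, not the actual processing service:

package main

import (
	"fmt"
	"net/http"

	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/trace"
)

func main() {
	tc := propagation.TraceContext{}
	http.HandleFunc("/v1/traces", func(w http.ResponseWriter, r *http.Request) {
		// Extract the span context the collector's exporter injected as Traceparent.
		ctx := tc.Extract(r.Context(), propagation.HeaderCarrier(r.Header))
		sc := trace.SpanContextFromContext(ctx)
		fmt.Printf("trace ID from collector: %s\n", sc.TraceID())
		w.WriteHeader(http.StatusOK)
	})
	_ = http.ListenAndServe(":4318", nil)
}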

cc: @svrnm @pureklkl

Sounds like a great idea, and a best practice to follow :-)

"Don't break the chain" is the first rule of observable services.

Thanks @jpkrohling - I will work on a PR.

Does this configuration make sense?

service:
  telemetry:
    traces:
      propagators: tracecontext
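
If we go this way, the configured names would need to be translated into propagators somewhere in the service telemetry setup. A rough, hypothetical sketch of that mapping (buildPropagator and the set of accepted names are assumptions for illustration, not existing collector code):

package main

import (
	"fmt"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

// buildPropagator maps configured names to OTel propagators and composes them.
func buildPropagator(names []string) (propagation.TextMapPropagator, error) {
	var props []propagation.TextMapPropagator
	for _, name := range names {
		switch name {
		case "tracecontext":
			props = append(props, propagation.TraceContext{})
		case "baggage":
			props = append(props, propagation.Baggage{})
		default:
			return nil, fmt.Errorf("unsupported propagator %q", name)
		}
	}
	return propagation.NewCompositeTextMapPropagator(props...), nil
}

func main() {
	p, err := buildPropagator([]string{"tracecontext"})
	if err != nil {
		panic(err)
	}
	otel.SetTextMapPropagator(p)
}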

Sounds good, and I think it should be enabled by default.

What is the expected behaviour for incoming requests containing a trace context when the batch processor is used in a pipeline? As with the k8sattributes processor (which uses Context), I assume this influences the context used in any component after the batch processor? Or is the span created on export always the root span of a new trace?

ctx, _ = exp.tracer.Start(ctx, spanName)
would mean that it picks up the current context. I assume the context will not carry an active span after the batch processor, so a new trace is started?

The ExportRequest spans are always root spans. I don't think the incoming requests' contexts can be preserved, as you rightly called out.
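
For reference, a small sketch of the tracer.Start behaviour discussed above: with no active span in the incoming context (the situation after the batch processor), the exporter span starts a new trace, and only a context that already carries a span would keep the chain. Tracer and span names are illustrative only.

package main

import (
	"context"
	"fmt"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	tp := sdktrace.NewTracerProvider()
	defer func() { _ = tp.Shutdown(context.Background()) }()
	tracer := tp.Tracer("exporter-sketch")

	// No span in the incoming context (the batch-processor case): a root span
	// that starts a brand new trace.
	_, root := tracer.Start(context.Background(), "exporter/traces")
	fmt.Println("new trace:", root.SpanContext().TraceID())
	root.End()

	// A span already in the context: the exporter span would join the existing trace.
	ctx, parent := tracer.Start(context.Background(), "receiver/otlp")
	_, child := tracer.Start(ctx, "exporter/traces")
	fmt.Println("same trace:", child.SpanContext().TraceID() == parent.SpanContext().TraceID())
	child.End()
	parent.End()
}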