Jaeger Tracing: No traceparent header propagation
Poweranimal opened this issue
Preflight checklist
- I could not find a solution in the existing issues, docs, nor discussions.
- I agree to follow this project's Code of Conduct.
- I have read and am following this repository's Contribution Guidelines.
- This issue affects my Ory Cloud project.
- I have joined the Ory Community Slack.
- I am signed up to the Ory Security Patch Newsletter.
Describe the bug
The Jaeger tracer's `propagation` setting has no effect.
In fact, regardless of the value set for `propagation`, Oathkeeper forwards no tracing information at all to the target.
For distributed tracing to work, Oathkeeper must forward the trace context to the target it proxies to (e.g. W3C headers, B3 headers, etc.).
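For reference, the two propagation formats mentioned above carry the same trace context in different header shapes. The sketch below is purely illustrative (it is not Oathkeeper's implementation) and shows what a propagating proxy would be expected to inject on the upstream request:

```python
import secrets

def make_propagation_headers(flavor: str = "w3c") -> dict:
    """Build example trace-propagation headers for an upstream request.
    Illustrative only; trace/span ids are freshly generated here, whereas
    a real proxy would continue the incoming trace context."""
    trace_id = secrets.token_hex(16)  # 128-bit trace id, hex-encoded
    span_id = secrets.token_hex(8)    # 64-bit span id, hex-encoded
    if flavor == "w3c":
        # W3C Trace Context: version "00", trace-id, parent-id, flags "01" (sampled)
        return {"traceparent": f"00-{trace_id}-{span_id}-01"}
    if flavor == "b3":
        # Zipkin B3 multi-header propagation
        return {
            "X-B3-TraceId": trace_id,
            "X-B3-SpanId": span_id,
            "X-B3-Sampled": "1",
        }
    raise ValueError(f"unknown propagation flavor: {flavor!r}")

print(make_propagation_headers("w3c"))
print(make_propagation_headers("b3"))
```

With `propagation: b3` configured (as in the repro below), the B3-style headers are what the upstream should be receiving.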
Reproducing the bug
- Copy this docker-compose file:
services:
oathkeeper:
image: docker.io/oryd/oathkeeper:v0.39.3
entrypoint:
- ash
- -ec
command:
- |
echo "$$OATHKEEPER_CONFIG" > /tmp/config.yaml && \
echo "$$OATHKEEPER_ACCESS_RULES" > /tmp/access-rules.json && \
oathkeeper serve --config /tmp/config.yaml
environment:
OATHKEEPER_CONFIG: |-
log:
level: debug
tracing:
provider: jaeger
providers:
jaeger:
propagation: b3
local_agent_address: otc-service:6831
access_rules:
repositories: ["file:///tmp/access-rules.json"]
authenticators:
noop: {enabled: true}
authorizers:
allow: {enabled: true}
mutators:
noop: {enabled: true}
header:
enabled: true
config:
headers: {}
OATHKEEPER_ACCESS_RULES: |-
[
{
"id": "hello",
"upstream": {
"url": "http://fake-service:9090"
},
"match": {
"methods": ["GET"],
"url": "<.*>"
},
"authenticators": [
{ "handler": "noop" }
],
"authorizer": { "handler": "allow" },
"mutators": [
{
"handler": "header",
"config": {
"headers": {
"hello": "world",
"all": "{{ print . }}"
}
}
}
]
}
]
ports:
- 4455:4455
networks:
- custom-network
otc-service:
build:
context: ./otelcol
dockerfile: Dockerfile
entrypoint:
- sh
- -ec
command:
- |
echo "$$OTC_CONFIG" > /tmp/config.yaml && \
otelcol --config=/tmp/config.yaml
environment:
OTC_CONFIG: |-
receivers:
jaeger:
protocols:
thrift_compact: {endpoint: "0.0.0.0:6831"}
exporters:
logging: {loglevel: debug}
service:
telemetry: {logs: {level: info}}
pipelines:
traces:
receivers: ["jaeger"]
exporters: ["logging"]
networks:
custom-network:
aliases:
- otc-service
fake-service:
image: docker.io/nicholasjackson/fake-service:v0.24.2
networks:
custom-network:
aliases:
- fake-service
networks:
custom-network:
- Copy this Dockerfile into `./otelcol/Dockerfile`:

FROM docker.io/alpine:3.16.2
RUN apk add --no-cache shadow &&\
    wget -O otelcol.apk https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.54.0/otelcol_0.54.0_linux_amd64.apk &&\
    apk add --allow-untrusted otelcol.apk &&\
    rm -f otelcol.apk
- Run `docker-compose up`.
- Run `curl 127.0.0.1:4455`.
- The logs of `fake-service` should show a header containing information about the trace parent, but they don't.
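A complementary way to probe propagation end-to-end (assuming the stack above is running) is to send a request that already carries a valid W3C `traceparent` value and check whether `fake-service` logs it. This small helper, a sketch only, mints a syntactically valid value to pass to curl:

```python
import re
import secrets

def new_traceparent() -> str:
    """Return a syntactically valid W3C traceparent header value:
    version "00", 128-bit trace-id, 64-bit parent span-id, flags "01" (sampled)."""
    return f"00-{secrets.token_hex(16)}-{secrets.token_hex(8)}-01"

value = new_traceparent()
# Sanity-check the shape before using it
assert re.fullmatch(r"00-[0-9a-f]{32}-[0-9a-f]{16}-01", value)
# If propagation worked, fake-service would log this trace id (or a child of it):
print(f'curl -H "traceparent: {value}" 127.0.0.1:4455')
```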
Relevant log output
oathkeeper_1 | time=2022-09-05T19:12:51Z level=info msg=started handling request http_request=map[headers:map[accept:*/* user-agent:curl/7.82.0] host:127.0.0.1:4455 method:GET path:/ query:<nil> remote:10.89.0.67:60854 scheme:http]
fake-service_1 | 2022-09-05T19:12:51.135Z [INFO] Handle inbound request: request="GET / HTTP/1.1
fake-service_1 | Host: fake-service:9090
fake-service_1 | accept: */*
fake-service_1 | all: &{ map[] map[] {[http://127.0.0.1:4455/] http://127.0.0.1:4455/ GET map[Accept:[*/*] User-Agent:[curl/7.82.0]]}}
fake-service_1 | hello: world
fake-service_1 | x-forwarded-for: 10.89.0.67
fake-service_1 | accept-encoding: gzip
fake-service_1 | user-agent: curl/7.82.0"
oathkeeper_1 | time=2022-09-05T19:12:51Z level=warning msg=Access request granted audience=application granted=true http_host=fake-service:9090 http_method=GET http_url=http://fake-service:9090/ http_user_agent=curl/7.82.0 service_name=ORY Oathkeeper service_version=v0.39.3-pre.0 subject=
fake-service_1 | 2022-09-05T19:12:51.135Z [INFO] Finished handling request: duration=96.924µs
oathkeeper_1 | time=2022-09-05T19:12:51Z level=info msg=completed handling request http_request=map[headers:map[accept:*/* user-agent:curl/7.82.0] host:127.0.0.1:4455 method:GET path:/ query:<nil> remote:10.89.0.67:60854 scheme:http] http_response=map[status:200 text_status:OK took:1.573167ms]
otc-service_1 | 2022-09-05T19:12:51.608Z INFO loggingexporter/logging_exporter.go:43 TracesExporter {"#spans": 1}
otc-service_1 | 2022-09-05T19:12:51.608Z DEBUG loggingexporter/logging_exporter.go:52 ResourceSpans #0
otc-service_1 | Resource SchemaURL:
otc-service_1 | Resource labels:
otc-service_1 | -> service.name: STRING(ORY Oathkeeper)
otc-service_1 | -> opencensus.exporterversion: STRING(Jaeger-Go-2.22.1)
otc-service_1 | -> host.name: STRING(52c4b67e042b)
otc-service_1 | -> ip: STRING(10.89.0.67)
otc-service_1 | -> client-uuid: STRING(191dd818723da04e)
otc-service_1 | ScopeSpans #0
otc-service_1 | ScopeSpans SchemaURL:
otc-service_1 | InstrumentationScope
otc-service_1 | Span #0
otc-service_1 | Trace ID : 00000000000000006b79a224d877ee48
otc-service_1 | Parent ID :
otc-service_1 | ID : 6b79a224d877ee48
otc-service_1 | Name : /
otc-service_1 | Kind : SPAN_KIND_UNSPECIFIED
otc-service_1 | Start time : 2022-09-05 19:12:51.134556 +0000 UTC
otc-service_1 | End time : 2022-09-05 19:12:51.136227 +0000 UTC
otc-service_1 | Status code : STATUS_CODE_UNSET
otc-service_1 | Status message :
otc-service_1 | Attributes:
otc-service_1 | -> sampler.type: STRING(const)
otc-service_1 | -> sampler.param: BOOL(true)
otc-service_1 | -> http.method: STRING(GET)
otc-service_1 | -> http.status_code: INT(200)
otc-service_1 |
Relevant configuration
log:
level: debug
tracing:
provider: jaeger
providers:
jaeger:
propagation: b3
local_agent_address: otc-service:6831
access_rules:
repositories: ["file:///tmp/access-rules.json"]
authenticators:
noop: {enabled: true}
authorizers:
allow: {enabled: true}
mutators:
noop: {enabled: true}
header:
enabled: true
config:
headers: {}
Version
v0.39.3
On which operating system are you observing this issue?
No response
In which environment are you deploying?
No response
Additional Context
No response
@Poweranimal did you find any workaround for it?
Unfortunately not.
I stopped using Oathkeeper and replaced it with Envoy proxy.
is this something Ory is looking to fix?
Hi, not actively. Our tracing setup works in our prod Oathkeeper, so I'm not sure what the problem here is, and I don't have the time to investigate :/
I will work on it!
@nicolasburtey I have tested it and tracing is being propagated. I also added tracing to mutator requests. Could you check it, pls?
thanks for letting me know.
@ntheile could you check this?
I found another problem, for which I will open another PR. In the meantime, you can use the following workaround to make it work:
config.yaml
tracing:
service_name: ory-oathkeeper
provider: jaeger
providers:
jaeger:
local_agent_address: 127.0.0.1:6831
Don't configure sampling for now.
Thanks @daviddelucca
I was able to test your PR and saw some traces go through Honeycomb!
awesome, @ntheile