Running streams in a docker container
FrankIversen opened this issue · comments
Description
We are trying to use kafka-streams-dotnet in a Docker container, following the examples in kafka-streams-dotnet-samples for the container setup.
We have, however, hit a snag. The container starts without issue, but the process dies when it tries to create the stream application.
The last thing we see in the log is an info message that prints the entire stream configuration, and after that we get nothing.
That log message begins with "Start creation of the stream application with this configuration:".
Have you seen this before when setting up streams in a Docker container? We currently have very little to go on. Do you have an idea how we can move forward? Everything works fine as long as we don't run it in a container.
How to reproduce
Checklist
Please provide the following information:
- A complete (i.e. we can run it), minimal program demonstrating the problem. No need to supply a project file.
- A code snippet with your topology builder (e.g. builder.Stream<string, string>("topic").To("another-topic");)
- Streamiz.Kafka.Net nuget version.
- Apache Kafka version.
- Client configuration.
- Operating system.
- Logs (with debug mode enabled in the configuration: log4net and StreamConfig.Debug) as necessary.
- Critical issue.
Hi @LGouellec
This is the dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:7.0-alpine AS base
WORKDIR /app
EXPOSE 3333
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:7.0-alpine AS build
COPY ["./nuget.config", "./streamService/"]
ENV PATH="${PATH}:/root/.dotnet/tools"
RUN dotnet tool install -g RecreateSolutionStructure
# Note: a `RUN export PATH=...` would not persist across layers; the ENV above already covers it.
COPY ["Directory.Build.props", ".editorconfig","stream.service.sln", "./streamService/"]
COPY ["stream.service.sln", "src/*/*.csproj", "src/*/*/*.csproj", "tests/*/*.csproj", "tests/*/*/*.csproj", "./streamService/"]
COPY ["./src/stream.service.Application/", "./streamService/src/stream.service.Application/"]
COPY ["./src/stream.service.Client.WebApi/", "./streamService/src/stream.service.Client.WebApi/"]
COPY ["./src/stream.service.Domain/", "./streamService/src/stream.service.Domain/"]
COPY ["./src/stream.service.Infrastructure/", "./streamService/src/stream.service.Infrastructure/"]
COPY ["./src/logging.lib/", "./streamService/src/logging.lib/"]
COPY ["./src/util.schema.events/", "./streamService/src/util.schema.events/"]
COPY ["./src/uuid.generator.lib/", "./streamService/src/uuid.generator.lib/"]
RUN recreate-sln-structure "./streamService/stream.service.sln"
RUN dotnet restore "./streamService/stream.service.sln" --configfile "./streamService/nuget.config"
RUN dotnet build "./streamService/src/stream.service.Client.WebApi/stream.service.Client.WebApi.csproj"
FROM build AS publish
RUN dotnet publish "./streamService/src/stream.service.Client.WebApi/stream.service.Client.WebApi.csproj" -c Release --property:PublishDir=/app/publish
FROM base AS final
ARG ROCKSDB_VERSION=v7.4.3
WORKDIR /app
COPY --from=publish /app/publish .
COPY --from=build "./streamService/src/stream.service.Client.WebApi/Properties" ./Properties/
RUN apk add --no-cache rocksdb libstdc++ bzip2 lz4
RUN ln -s /usr/lib/librocksdb.so.7 /usr/lib/librocksdb.so
RUN chmod a+x "stream.service.Client.WebApi.dll"
EXPOSE 3333
ENV ASPNETCORE_URLS=http://+:3333
ENV SERVICE_ENVIRONMENT=dev3
ENTRYPOINT ["sh", "-c", "dotnet stream.service.Client.WebApi.dll --environment=$SERVICE_ENVIRONMENT"]
@FrankIversen
Can you enable the debug logs, rebuild your Docker image, and share all the logs, please?
var config = new StreamConfig<StringSerDes, StringSerDes>
{
    ApplicationId = "test-app",
    BootstrapServers = "localhost:9092",
    AutoOffsetReset = AutoOffsetReset.Earliest,
    Logger = LoggerFactory.Create(b =>
    {
        b.SetMinimumLevel(LogLevel.Debug);
        b.AddConsole();
    }),
    Debug = "all"
};
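For reference, a minimal sketch of how such a config is typically wired into a stream application (the topic names "input-topic" and "output-topic" are placeholders, not taken from this issue). The "Start creation of the stream application with this configuration:" log line below is emitted when the KafkaStream is started:

```csharp
using System;
using Streamiz.Kafka.Net;

// Minimal sketch, assuming the `config` object defined above.
var builder = new StreamBuilder();

// Trivial pass-through topology; topic names are placeholders.
builder.Stream<string, string>("input-topic").To("output-topic");

var stream = new KafkaStream(builder.Build(), config);

// Dispose on Ctrl+C so the stream shuts down cleanly.
Console.CancelKeyPress += (_, _) => stream.Dispose();

await stream.StartAsync();
```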
yes, here we go:
info: streamService.Program[0]
Starting up streamService....
info: streamService.Program[0]
ENV=dev3
info: Streamiz.Kafka.Net.KafkaStream[0]
stream-application[<app.id>] Start creation of the stream application with this configuration:
Stream property:
client.id: pricing
num.stream.threads: 1
default.key.serdes: Streamiz.Kafka.Net.SerDes.StringSerDes
default.value.serdes: Streamiz.Kafka.Net.SerDes.StringSerDes
default.timestamp.extractor: Streamiz.Kafka.Net.Processors.Internal.FailOnInvalidTimestamp
commit.interval.ms: 10000
processing.guarantee: AT_LEAST_ONCE
transaction.timeout: 00:00:10
poll.ms: 100
max.poll.records: 500
max.poll.restoring.records: 1000
max.task.idle.ms: 0
buffered.records.per.partition: 2147483647
inner.exception.handler: System.Func`2[System.Exception,Streamiz.Kafka.Net.ExceptionHandlerResponse]
production.exception.handler: System.Func`2[Confluent.Kafka.DeliveryReport`2[System.Byte[],System.Byte[]],Streamiz.Kafka.Net.ExceptionHandlerResponse]
deserialization.exception.handler: System.Func`4[Streamiz.Kafka.Net.ProcessorContext,Confluent.Kafka.ConsumeResult`2[System.Byte[],System.Byte[]],System.Exception,Streamiz.Kafka.Net.ExceptionHandlerResponse]
rocksdb.config.setter: System.Action`2[System.String,Streamiz.Kafka.Net.State.RocksDb.RocksDbOptions]
follow.metadata: False
state.dir: /tmp/streamiz-kafka-net
replication.factor: 1
windowstore.changelog.additional.retention.ms: 86400000
offset.checkpoint.manager:
metrics.interval.ms: 30000
metrics.recording.level: INFO
log.processing.summary: 00:01:00
metrics.reporter: System.Action`1[System.Collections.Generic.IEnumerable`1[Streamiz.Kafka.Net.Metrics.Sensor]]
expose.librdkafka.stats: False
start.task.delay.ms: 5000
parallel.processing: False
max.degree.of.parallelism: 8
application.id: <app.id>
Client property:
sasl.mechanism: PLAIN
security.protocol: sasl_ssl
debug: all
sasl.username:
sasl.password: ********
bootstrap.servers:
Consumer property:
max.poll.interval.ms: 300000
enable.auto.commit: False
enable.auto.offset.store: False
partition.assignment.strategy: cooperative-sticky
auto.offset.reset: earliest
session.timeout.ms: 45000
fetch.wait.max.ms: 60000
Producer property:
partitioner: murmur2_random
request.timeout.ms: 10000
Admin client property:
None
dbug: Streamiz.Kafka.Net.Kafka.Internal.KafkaLoggerAdapter[0]
Log admin Unknown - [thrd:app]: Selected provider PLAIN (builtin) for SASL mechanism PLAIN
dbug: Streamiz.Kafka.Net.Kafka.Internal.KafkaLoggerAdapter[0]
Log admin Unknown - [thrd:app]: Using statically linked OpenSSL version OpenSSL 3.0.8 7 Feb 2023 (0x30000080, librdkafka built with 0x30000080)
dbug: Streamiz.Kafka.Net.Kafka.Internal.KafkaLoggerAdapter[0]
Log admin Unknown - [thrd:app]: Setting default CA certificate location to /etc/ssl/certs/ca-certificates.crt, override with ssl.ca.location
Segmentation fault (core dumped)
Looks strange — your application segfaults right at the beginning.
@FrankIversen
Does your application run well as a plain executable, on a virtual machine for instance?
What is the host environment where this container is running?
@LGouellec we traced the problem to the StreamConfig. When we disabled two of its lines, everything ran smoothly.
It is probably only one of the two lines that is the problem, and most likely the implementation of those handlers itself, but it was enough to grind the entire thing to a halt with a segmentation fault.
Issue closed, as the root cause is no longer attributed to Streamiz.
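For context, handler assignments on a Streamiz StreamConfig typically look like the following. This is an illustrative sketch only, not the actual lines from our config; the delegate shapes match the types printed in the startup log above (inner.exception.handler and deserialization.exception.handler):

```csharp
// Illustrative sketch -- NOT the actual lines that were disabled.
// A throwing or misbehaving handler body here runs inside the stream
// threads, which is why a fault in one can take the whole app down.
config.InnerExceptionHandler = exception =>
    ExceptionHandlerResponse.FAIL; // stop processing on unhandled inner exceptions

config.DeserializationExceptionHandler = (context, record, exception) =>
    ExceptionHandlerResponse.CONTINUE; // skip records that fail to deserialize
```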