reportportal / service-api

Report portal. Main API Service

How can I change the logging level of reportportal/service-api?

tzahialt opened this issue · comments

Hello,

I'm running a self-hosted ReportPortal 5 cluster from a docker-compose file.

We configured the ReportPortal services to send their own logging messages to an ELK central logging service.

We have observed that ever since switching to ReportPortal 5's async reporting, our ELK ReportPortal log indices are flooded with hundreds of GB of log messages daily.

After some investigation, we identified that most of it is coming from DEBUG log messages of the API service, more specifically from the com.epam.ta.reportportal.ws package (LogAsyncController.createLog, TestItemAsyncController.finishTestItem, and AsyncReportingListener.onMessage; not exception logs).

For every reporting API request/response, we get multiple DEBUG log messages from these classes.

I see in https://github.com/reportportal/service-api/blob/d16a30d3d4f6430c0fe45e3df2208dac7e21887e/src/main/resources/application.yaml that the default logging level for these packages is set to debug.
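For reference, the relevant section of that file looks roughly like this (paraphrased, not quoted verbatim; see the linked commit for the exact content):

    # Paraphrased sketch of the application.yaml defaults for these packages
    logging:
      level:
        com.epam.ta.reportportal.ws.controller: debug
        com.epam.ta.reportportal.ws.rabbit: debug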

We have tried to change the log level settings for these packages via our docker-compose.yml file. We tried the various methods shown below, but none of them changed the packages' logging level; they keep logging DEBUG messages.

api:
  image: reportportal/service-api:5.3.5
  ...
  environment:
    - logging.level.root=info
    - logging.level.com.epam.ta.reportportal.ws.controller=info
    - logging.level.com.epam.ta.reportportal.ws.rabbit=info

also:

    - LOGGING_LEVEL=info

and:

    - SPRING_APPLICATION_JSON={"logging.level.root": "INFO"}

or:

    - SPRING_APPLICATION_JSON={"logging.level.com.epam.ta.reportportal.ws.controller": "info", "logging.level.com.epam.ta.reportportal.ws.rabbit": "info"}

plus:

    - LOGGING_LEVEL_COM_EPAM=info

Please advise how this can be achieved.

Thanks.

Help, anyone?

Seconded. Without building a custom Docker image, I don't see how to do this, and having debug-level logging sent to our Elastic stack is not much fun.

This is still an issue and it is absolutely flooding our logging infrastructure. Could we please get a solution to decrease logging?

Hello. Not sure if this issue is still relevant, but still.
Have you tried passing LOGGING_LEVEL_ROOT=info as env variable in the docker compose?

Hi Vadym,
Yes, unfortunately this is still relevant. We gave up on it long ago, so I don't remember all the details, but I think I recorded all my experiments in the original post. It seems I never tried the exact variant you're suggesting, so I'll give it a try when I get to it.

Thanks a lot for your input.

LOGGING_LEVEL_ROOT=info

I just found that I tried this in the past, and it didn't work either.

Hi!
After some time struggling with the ReportPortal system setup, I found a working solution. At least I didn't find any debug messages from the packages com.epam.ta.reportportal.ws.controller and com.epam.ta.reportportal.ws.rabbit in stdout.
I'm adding a docker-compose.yml snippet below for the api service only. Feel free to use any log level; I used warn as an example.

# ... rest of the services

api:
    image: reportportal/service-api:5.7.4
    depends_on:
      rabbitmq:
        condition: service_healthy
      gateway:
        condition: service_started
      postgres:
        condition: service_healthy
    environment:
      ## Double entry moves test logs from PostgreSQL to Elastic-type engines
      ## Ref: https://reportportal.io/blog/double-entry-in-5.7.2
      RP_ELASTICSEARCHLOGMESSAGE_HOST: "true"
      RP_DB_HOST: postgres
      RP_DB_USER: rpuser
      RP_DB_PASS: rppass
      RP_DB_NAME: reportportal
      RP_AMQP_USER: rabbitmq
      RP_AMQP_PASS: rabbitmq
      RP_AMQP_APIUSER: rabbitmq
      RP_AMQP_APIPASS: rabbitmq
      RP_AMQP_ANALYZER-VHOST: analyzer
      RP_BINARYSTORE_TYPE: minio
      RP_BINARYSTORE_MINIO_ENDPOINT: http://minio:9000
      RP_BINARYSTORE_MINIO_ACCESSKEY: minio
      RP_BINARYSTORE_MINIO_SECRETKEY: minio123
      LOGGING_LEVEL_ORG_HIBERNATE: "warn"
      LOGGING_LEVEL_COM_EPAM_TA_REPORTPORTAL_WS_CONTROLLER: "warn"
      LOGGING_LEVEL_COM_EPAM_TA_REPORTPORTAL_WS_RABBIT: "warn"
      RP_REQUESTLOGGING: "false"
      MANAGEMENT_HEALTH_ELASTICSEARCH_ENABLED: "false"
      JAVA_OPTS: -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp  -Dcom.sun.management.jmxremote.rmi.port=12349 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false  -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=0.0.0.0
    labels:
      - "traefik.http.middlewares.api-strip-prefix.stripprefix.prefixes=/api"
      - "traefik.http.routers.api.middlewares=api-strip-prefix@docker"
      - "traefik.http.routers.api.rule=PathPrefix(`/api`)"
      - "traefik.http.routers.api.service=api"
      - "traefik.http.services.api.loadbalancer.server.port=8585"
      - "traefik.http.services.api.loadbalancer.server.scheme=http"
      - "traefik.expose=true"
    restart: always
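The variable names follow Spring Boot's relaxed binding: uppercase the property path and replace the dots with underscores, which may be why the dotted forms tried earlier didn't take effect while these do. Distilled to just the log-related settings, a minimal variant (assuming the rest of your api service definition stays as-is):

    # Only the log-related settings from the snippet above; everything else
    # is standard service wiring.
    api:
      image: reportportal/service-api:5.7.4
      environment:
        LOGGING_LEVEL_ORG_HIBERNATE: "warn"
        LOGGING_LEVEL_COM_EPAM_TA_REPORTPORTAL_WS_CONTROLLER: "warn"
        LOGGING_LEVEL_COM_EPAM_TA_REPORTPORTAL_WS_RABBIT: "warn"
        # Presumably disables per-request logging; kept as in the full snippet.
        RP_REQUESTLOGGING: "false"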

Well, this does seem to work. I guess this is the one combination I hadn't tried :-)
I had already applied a workaround in our filebeat configuration, which ships the logs to our ELK stack (a sketch of that kind of filter follows below), but obviously this service-logging-configuration solution is much more elegant, and what I was looking for, so now that I've confirmed it really works, I'll go ahead and apply it.
Thanks a lot for sharing, Vadym!
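For reference, a filebeat workaround of the kind mentioned above might look roughly like this. This is a hypothetical sketch: the field names (container.image.name, message) depend on how the container logs are ingested and parsed, so adjust them to your pipeline.

    # Drop DEBUG-level events from the service-api container before they
    # are shipped to ELK. Hypothetical; field names depend on your setup.
    processors:
      - drop_event:
          when:
            and:
              - contains:
                  container.image.name: "reportportal/service-api"
              - contains:
                  message: "DEBUG"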

Try these two environment variables:

      LOGGING_LEVEL_COM_EPAM_TA_REPORTPORTAL_WS_CONTROLLER: info
      LOGGING_LEVEL_COM_EPAM_TA_REPORTPORTAL_WS_RABBIT: info
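If your compose file declares environment as a list (as in the original post), the equivalent entries would be the following; a sketch, untested here, relying on the same relaxed-binding names:

    api:
      image: reportportal/service-api:5.3.5
      environment:
        - LOGGING_LEVEL_COM_EPAM_TA_REPORTPORTAL_WS_CONTROLLER=info
        - LOGGING_LEVEL_COM_EPAM_TA_REPORTPORTAL_WS_RABBIT=info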